2,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Question 2
a) Loading the NORB data
Loading of the training data. Due to hardware limitations, the original function was modified to load a single batch of NORB data.
Step1: b) Function to scale the data to the range (-1, 1) or, alternatively, to normalize it.
Step2: Next, one batch of the dataset is loaded, and images converted with the two forms of scaling are displayed.
Step3: c) Training the FF network while varying the number of data batches used.
Because training the network filled the output buffer defined by the notebook, the results were entered manually from the values reported during training.
Step4: As we have learned, we observe that as the amount of data increases the network is able to learn and reach good performance. The dashed line marks the lowest loss obtained (~0.204), using 8 of the 10 batches.
d) Adding pre-training to the ReLU FF network
Autoencoders and RBMs were used here to check whether or not they bring any improvement.
The strategies followed were the following
Step5: In general, dimensionality reduction helps considerably in reaching good results quickly on the test set. There are, however, instabilities, for example during training of the FF network pre-trained with an RBM, which cause spikes in the loss.
The methods that increased the dimensionality during pre-training were stuck for a long time in what we theorize are local minima, and it was not until more than half of the training set had been fed to the FF network that they escaped from that loss plateau.
Step6: This plot shows the effect of pre-training more clearly: the more data is available for pre-training, the better the initial score. As the factor $\theta$ grows, the effect disappears.
e) Analysis using FF networks with sigmoid and tanh activations
Step7: In general, the version pre-trained with autoencoders performs better than the one pre-trained with RBMs, given, of course, the parameters with which those networks were built. Pre-training was not able to beat the minimum loss achieved by the ReLU FF network without pre-training. | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
def unpickle(file):
    import cPickle
    fo = open(file, 'rb')
    dict = cPickle.load(fo)
    fo.close()
    return dict
def load_single_NORB_train_val(PATH, i):
    print "Loading training batch", i, "..."
    f = os.path.join(PATH, 'data_batch_%d' % (i, ))
    datadict = unpickle(f)
    X = datadict['data'].T
    Y = np.array(datadict['labels'])
    Z = np.zeros((X.shape[0], X.shape[1] + 1))
    Z[:,:-1] = X
    Z[:, -1] = Y
    np.random.shuffle(Z)
    Xtr = Z[5832:,0:-1]
    Ytr = Z[5832:,-1]
    Xval = Z[:5832,0:-1]
    Yval = Z[:5832,-1]
    print "Loaded"
    return Xtr, Ytr, Xval, Yval
def load_NORB_test(PATH):
    print "Loading testing set..."
    xts = []
    yts = []
    for b in range(11, 13):
        f = os.path.join(PATH, 'data_batch_%d' % (b, ))
        datadict = unpickle(f)
        X = datadict['data'].T
        Y = np.array(datadict['labels'])
        Z = np.zeros((X.shape[0], X.shape[1] + 1))
        Z[:,:-1] = X
        Z[:, -1] = Y
        np.random.shuffle(Z)
        xts.append(Z[0:,0:-1])
        yts.append(Z[:,-1])
    Xts = np.concatenate(xts)
    Yts = np.concatenate(yts)
    del xts,yts
    print "Loaded."
    return Xts, Yts
# FF MLP model
def get_ff_model(activation, n_classes):
    model = Sequential()
    model.add(Dense(4000, input_dim=2048, activation=activation))
    model.add(Dense(2000, activation=activation))
    model.add(Dense(n_classes, activation='softmax'))
    sgd = SGD(lr=0.1, decay=0.0)
    model.compile(optimizer=sgd,
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
# Set the ranges used to split the training set for the semi-supervised scenario
def split_train(X, Y, theta):
    # n_s is the number of examples whose labels we keep
    n_tr = X.shape[0]  # total number of training examples
    n_s = int(theta * n_tr)
    # Split the training set
    X_s = X[0: n_s]
    Y_s = Y[0: n_s]
    X_ns = X[n_s: ]
    return X_s, Y_s, X_ns
Explanation: Question 2
a) Loading the NORB data
Loading of the training data. Due to hardware limitations, the original function was modified to load a single batch of NORB data.
End of explanation
def scale_data(X, normalize=True, myrange=None):
    from sklearn.preprocessing import MinMaxScaler, StandardScaler
    if normalize and not myrange:
        print "Normalizing data (mean 0, std 1)"
        return StandardScaler().fit_transform(X)
    elif isinstance(myrange, tuple):
        print "Scaling data to range", myrange
        # note: this affine map assumes X is already scaled to [0, 1]
        return X * (myrange[1] - myrange[0]) + myrange[0]
    else:
        return "Error while scaling data."
Explanation: b) Function to scale the data to the range (-1, 1) or, alternatively, to normalize it.
End of explanation
(Xtr, Ytr, Xval, Yval) = load_single_NORB_train_val(".", 1)
%matplotlib inline
img = Xtr[25][0:1024].reshape((32,32))
plt.imshow(img, cmap='gray', interpolation='nearest')
plt.title("Original dataset image", fontsize=16)
plt.show()
img_scaled_2 = scale_data(img, normalize=False, myrange=(-1,1))
plt.title("Scaled to (-1, 1) image", fontsize=16)
plt.imshow(img_scaled_2, cmap='gray', interpolation='nearest')
plt.show()
img_scaled_01 = scale_data(img, normalize=True)
plt.imshow(img_scaled_01, cmap='gray', interpolation='nearest')
plt.title("Normalized image", fontsize=16)
plt.show()
Explanation: Next, one batch of the dataset is loaded, and images converted with the two forms of scaling are displayed.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(15,8))
ff_score = np.array([[0.34965633492870829, 0.85337219132808007], [0.30873315344310387, 0.86633515952791207],
[0.28638169108870831, 0.8779120964645849], [0.28662411729384862, 0.87917810043802969],
[0.27735552345804965, 0.88243598899764453], [0.25611561523247078, 0.89160094224172037],
[0.25466067208978765, 0.89329561710725591], [0.23046189319056207, 0.90432671244947349],
[0.23262660608841529, 0.9034007855362689], [0.24611747253683636, 0.90140890138957075]])
ff_loss = ff_score[:,0]
min_loss = np.min(ff_loss)
print "Min loss:",min_loss
ff_accuracy = ff_score[:,1]
plt.title(u'Error of the ReLU FF network as the labeled training set size varies', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
#plt.ylabel(u'Test error', fontsize=20)
plt.xticks(thetas)
plt.xlim((0.1, 1))
plt.ylim((0, 1))
plt.plot(thetas, ff_loss, 'ro-', lw=2, label="Loss")
plt.plot(thetas, ff_accuracy, 'bo-', lw=2, label="Accuracy")
plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.grid()
plt.legend(loc='best')
plt.show()
Explanation: c) Training the FF network while varying the number of data batches used.
Because training the network filled the output buffer defined by the notebook, the results below were entered manually from the values reported during training.
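The loop that actually produced these numbers is not shown (its output was recorded by hand), but a minimal sketch of what it might look like, reusing the helper functions defined above, is given below. The number of classes, epochs and batch size are assumptions, and the epoch argument is named nb_epoch in Keras 1.x (epochs in later versions).
# Hypothetical sketch of the training loop behind ff_score (the values above were pasted in by hand).
n_classes = 6  # assumed number of NORB classes
Xtr, Ytr, Xval, Yval = load_single_NORB_train_val(".", 1)
Xtr, Xval = scale_data(Xtr, normalize=True), scale_data(Xval, normalize=True)
ff_score = []
for theta in np.linspace(0.1, 1.0, 10):
    X_s, Y_s, _ = split_train(Xtr, Ytr, theta)   # keep the labels of only a fraction theta
    model = get_ff_model('relu', n_classes)
    model.fit(X_s, np_utils.to_categorical(Y_s, n_classes),
              nb_epoch=10, batch_size=128, verbose=0)   # assumed epochs / batch size
    loss, acc = model.evaluate(Xval, np_utils.to_categorical(Yval, n_classes), verbose=0)
    ff_score.append([loss, acc])
    print("theta=%.1f loss=%.4f acc=%.4f" % (theta, loss, acc))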
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(17,8))
# RBM score with 4000,2000 hidden units
scoreRBM = np.array([[4.4196084509047955, 0.72383116337947229], [4.4055034453445341, 0.72456847458301099],
[4.4394828855419028, 0.72233082052570641], [4.3964983934053672, 0.72311099884400476],
[0.48378892842838628, 0.83260457615262029], [0.45056016894443207, 0.83313326793115983],
[0.43085911105526165, 0.83389629889662864],[0.29784013472614995, 0.87206218527586532],
[0.22757617234340544, 0.90586420740365003], [0.24125085612158553, 0.90226624000546696]])
# RBM score with 512,100 hidden units
scoreRBM2 = np.array([[0.45062856095624559, 0.83333331346511841], [4.4528440643403426, 0.72222222402099068],
[0.45072658859775883, 0.83333331346511841], [0.4505750023911928, 0.83333331346511841],
[0.4429623542662674, 0.82581160080571892], [0.31188208550094682, 0.86670381540051866],
[0.25056480781516299, 0.89664495450192849], [0.23156057456198476, 0.90414952838330931],
[0.21324420206690148, 0.91417467884402548],[0.22067879932703877, 0.91151407051356237]])
# AE score with 4000,2000 hidden units, lr=1e-1
scoreAE = np.array([[4.423457844859942, 0.72402835125295228], [4.4528440647287137, 0.72222222367350131],
[4.4528440647287137, 0.72222222367350131], [4.4528440647287137, 0.72222222367350131],
[4.4528440647287137, 0.72222222367350131], [4.4528440665888036, 0.72222222393922841],
[4.4528440670384954, 0.722222223775704], [4.4528440630321473, 0.72222222410275283],
[0.30934715830596082, 0.90771034156548469], [0.24611747253683636, 0.90140890138957075]])
# AE score with 512,100 hidden units, lr=1e-3
scoreAE2 = np.array([ [0.31381310524600359, 0.87419124327814623], [0.29862671588044182, 0.88824588954653105],
[0.29122977651079507, 0.89595336747063203],[0.28631529953734458, 0.89848251762417941],
[0.26520682642347659, 0.90412380808992476],[0.29133136388273245, 0.90601852678263306],
[0.28742324732237945, 0.90006859361389535],[0.28288218034693235, 0.90822188775930224],
[0.26280671141278994, 0.91445188456315885], [0.22067879932703877, 0.91151407051356237]])
lossRBM = scoreRBM[:,0]
accuracyRBM = scoreRBM[:,1]
lossRBM2 = scoreRBM2[:,0]
accuracyRBM2 = scoreRBM2[:,1]
lossAE = scoreAE[:,0]
accuracyAE = scoreAE[:,1]
lossAE2 = scoreAE2[:,0]
accuracyAE2 = scoreAE2[:,1]
plt.title(u'RBM vs AE', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.xlim((0.1, 1))
plt.plot(thetas, lossRBM, 'o-',lw=2, label="RBM(4000,2000, lr=1e-2)")
plt.plot(thetas, lossRBM2, 'o-', lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, lossAE, 'o-', lw=2, label=r"AE(4000,2000,lr=1e-1)")
plt.plot(thetas, lossAE2, 'o-', lw=2, label="AE(512,100,lr=1e-3)")
plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF ReLu")
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='best', fontsize=16)
plt.show()
Explanation: As we have learned, we observe that as the amount of data increases the network is able to learn and reach good performance. The dashed line marks the lowest loss obtained (~0.204), using 8 of the 10 batches.
d) Adding pre-training to the ReLU FF network
Autoencoders and RBMs were used here to check whether or not they bring any improvement.
The strategies followed were the following (a sketch of the autoencoder version of strategy 2 is given right after the list):
1) Use exactly the same FF network (4000 units in the first hidden layer and 2000 in the second), pre-training with an autoencoder and an RBM, which means increasing the original dimensionality from 2048 to 4000.
2) Bet on dimensionality reduction, forcing a reduction from 2048 to 512 with both autoencoders and RBMs.
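Strategy 2 with an autoencoder might look like the sketch below. The helper names are illustrative, the 512/100 layer sizes follow the labels used in the plots, and the epochs, batch size and learning rates are assumptions (the RBM variant is not sketched here); nb_epoch is the Keras 1.x name of the epoch argument.
# Sketch of strategy 2: pre-train a 2048 -> 512 encoder on unlabeled data, then reuse its weights.
def pretrain_autoencoder(X_unlabeled, n_hidden=512, lr=1e-3):
    ae = Sequential()
    ae.add(Dense(n_hidden, input_dim=2048, activation='relu'))   # encoder
    ae.add(Dense(2048, activation='linear'))                     # decoder
    ae.compile(optimizer=SGD(lr=lr), loss='mse')
    ae.fit(X_unlabeled, X_unlabeled, nb_epoch=10, batch_size=128, verbose=0)
    return ae

def get_pretrained_ff_model(ae, n_classes):
    model = Sequential()
    model.add(Dense(512, input_dim=2048, activation='relu',
                    weights=ae.layers[0].get_weights()))   # initialize with the encoder weights
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(optimizer=SGD(lr=0.1), loss='binary_crossentropy', metrics=['accuracy'])
    return model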
End of explanation
plt.figure(figsize=(15,8))
plt.title(u'RBM vs AE (Range [0,1])', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.yticks(np.arange(0, 1.1, 0.1))
plt.xlim((0.1, 1))
plt.ylim((0, 1))
plt.plot(thetas, lossRBM, 'o-',lw=2, label="RBM(4000,2000, lr=1e-2)")
plt.plot(thetas, lossRBM2, 'o-', lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, lossAE, 'o-', lw=2, label=r"AE(4000,2000,lr=1e-1)")
plt.plot(thetas, lossAE2, 'o-', lw=2, label="AE(512,100,lr=1e-3)")
plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF ReLu")
#plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='upper left', fontsize=16)
plt.show()
Explanation: In general, dimensionality reduction helps considerably in reaching good results quickly on the test set. There are, however, instabilities, for example during training of the FF network pre-trained with an RBM, which cause spikes in the loss.
The methods that increased the dimensionality during pre-training were stuck for a long time in what we theorize are local minima, and it was not until more than half of the training set had been fed to the FF network that they escaped from that loss plateau.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(17,8))
score_RBM_512_sig = np.array([[0.45771518255620991, 0.83333331346511841],[0.45365564825092486, 0.83333331346511841],
[0.38187248474408569, 0.83758285605899263], [0.34743673724361246, 0.84777375865620352],
[0.31626026423083331, 0.85903920216897223], [0.30048906122716285, 0.86761259483825037],
[0.28214155361235183, 0.8757115920752655], [0.26172314131113766, 0.88501657923716737],
[0.25423361747608675, 0.88900034828686425], [0.23126906758322494, 0.90095736967356277]]
)
score_AE_512_sig = np.array([[0.36305900167454092, 0.84831959977030591],[0.33650884347289434, 0.85545838133665764],
[0.2976713113037075, 0.8675240031015562],[0.27910095958782688, 0.87922668276132376],
[0.26177046708864848, 0.88473937295591876], [0.26421231387402805, 0.88531093294278751],
[0.24143604994706799, 0.89582762480885891],[0.26421231387402805, 0.88531093294278751],
[0.22964162562190668, 0.90080876522716669], [0.23126906758322494, 0.90095736967356277]])
loss_RBM_512_sig = score_RBM_512_sig[:,0]
loss_AE_512_sig = score_AE_512_sig[:,0]
plt.title(u'RBM vs AE Sigmoid', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.yticks(np.arange(0, 1.1, 0.1))
plt.xlim((0.1, 1))
plt.plot(thetas, loss_RBM_512_sig, 'ro-',lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, loss_AE_512_sig, 'bo-',lw=2, label="AE(512,100, lr=1e-2)")
#plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF")
plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='best', fontsize=16)
plt.show()
Explanation: This plot shows the effect of pre-training more clearly: the more data is available for pre-training, the better the initial score. As the factor $\theta$ grows, the effect disappears.
e) Analysis using FF networks with sigmoid and tanh activations
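Part (e) reuses the same architecture and only swaps the activation function. A minimal sketch, reusing X_s, Y_s, Xval, Yval and n_classes as assumed in the sketch for part (c), and with the same assumed training settings:
# Sketch: part (e) only changes the activation passed to get_ff_model.
for activation in ['sigmoid', 'tanh']:
    model = get_ff_model(activation, n_classes)
    model.fit(X_s, np_utils.to_categorical(Y_s, n_classes),
              nb_epoch=10, batch_size=128, verbose=0)
    print("%s: %s" % (activation, model.evaluate(Xval, np_utils.to_categorical(Yval, n_classes), verbose=0)))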
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(17,8))
score_RBM_512_tanh = np.array([[0.47132537652967071, 0.83333617125846071], [0.43773106490204361, 0.81772403969890628],
[0.39085262340579097, 0.83759428964434668], [0.3556465181906609, 0.84698787069942072],
[0.3184757399882493, 0.86379743537090115], [0.30265503884286749, 0.87056469970442796],
[0.25681082942537115, 0.89028635624136943], [0.2447317991679118, 0.89441301461069023],
[0.2497562807307557, 0.89397577249505067], [0.2468455718116086, 0.89485025849443733]])
score_AE_512_tanh = np.array([[0.35625298988631071, 0.85218620511852661], [0.32329397273744331, 0.8609053455957496],
[0.29008143559987409, 0.87735196781460967], [0.28377419225207245, 0.87887231570212443],
[0.2652835471910745, 0.88701417956332607], [0.26240534692509571, 0.88862026109780468],
[0.26505667790617227, 0.88871742608166204], [0.25295013446802855, 0.89182956670046831],
[0.25422075993875848, 0.89163523609909667], [0.25069631986889801, 0.8929641120473053]]
)
loss_RBM_512_tanh = score_RBM_512_tanh[:,0]
loss_AE_512_tanh = score_AE_512_tanh[:,0]
plt.title(u'RBM vs AE Tanh', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.xlim((0.1, 1))
plt.plot(thetas, loss_RBM_512_tanh, 'ro-',lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, loss_AE_512_tanh, 'bo-',lw=2, label="AE(512,100, lr=1e-2)")
#plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF")
#plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='best', fontsize=16)
plt.show()
Explanation: In general, the version pre-trained with autoencoders performs better than the one pre-trained with RBMs, given, of course, the parameters with which those networks were built. Pre-training was not able to beat the minimum loss achieved by the ReLU FF network without pre-training.
End of explanation |
2,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Introduction to Signal Processing</h1>
<h3>Lecture 1</h3>
<h2 class="title_stuff">Sivakumar Balasubramanian</h2>
<h4 class="title_stuff">Lecturer in Bioengineering</h4>
<h4 class="title_stuff">Christian Medical College, Vellore</h4>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>What is the course about?</h1>
<hr>
<ul class="content">
<li>Course on the theory and practice of signal processing techniques.</li>
<li>First part of the course will focus on continuous-time signal processing.</li>
<li>Second part of the course will focus on <b>digital signal processing</b>.</li>
<li>Final part of the course will introduce ideas on time-frequency analysis.</li>
</ul>
<h1>What to expect from the course?</h1>
<hr>
<ul class="content">
<li>A good understanding of the theory of continuous and discrete-time signal processing.</li>
<li>Ability to analyze and synthesize analog and digital filters.</li>
<li>An intuitive understanding of time-frequency analysis.</li>
</ul>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Pre-requisites</h1>
<hr>
<ul class="content">
<li>Basic understanding of calculus (limits, derivative, integration).</li>
<li>Basic understanding of probability theory.</li>
<li>Experience in programming (C and Python would be ideal).</li>
</ul>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Course Layout</h1>
<hr>
<ul class="content">
<li><b>Total score
Step1: <p class="normal"> » <b>Continuous-valued</b> vs. <b>Discrete-valued</b>
Step2: <p class="normal">Last two classifications can be combined to have four possible combinations of signals
Step3: <p class="normal">EMG recorded from a linear electrode array.</p>
<p class="normal"> » <b>Deterministic</b> vs. <b>Stochastic</b>
Step4: <p class="normal"> » <b>Even</b> vs. <b>Odd</b>
Step5: <p class="normal"> » <b>Periodic</b> vs. <b>Non-periodic</b>
Step6: <p class="normal"> » <b>Causality | Python Code:
def continuous_discrete_time_signals():
    t = np.arange(-10, 10.01, 0.01)
    n = np.arange(-10, 11, 1.0)
    x_t = np.exp(-0.1 * (t ** 2)) # continuous signal
    x_n = np.exp(-0.1 * (n ** 2)) # discrete signal
    fig = figure(figsize=(17,5))
    plot(t, x_t, label="$e^{-0.1*t^{2}}$")
    stem(n, x_n, label="$e^{-0.1*n^{2}}$", basefmt='.')
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    xlabel('Time', fontsize=25)
    legend(prop={'size':30});
    savefig("img/cont_disc.svg", format="svg")
continuous_discrete_time_signals()
Explanation: <h1>Introduction to Signal Processing</h1>
<h3>Lecture 1</h3>
<h2 class="title_stuff">Sivakumar Balasubramanian</h2>
<h4 class="title_stuff">Lecturer in Bioengineering</h4>
<h4 class="title_stuff">Christian Medical College, Vellore</h4>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>What is the course about?</h1>
<hr>
<ul class="content">
<li>Course on the theory and practice of signal processing techniques.</li>
<li>First part of the course will focus on continuous-time signal processing.</li>
<li>Second part of the course will focus on <b>digital signal processing</b>.</li>
<li>Final part of the course will introduce ideas on time-frequency analysis.</li>
</ul>
<h1>What to expect from the course?</h1>
<hr>
<ul class="content">
<li>A good understanding of the theory of continuous and discrete-time signal processing.</li>
<li>Ability to analyze and synthesize analog and digital filters.</li>
<li>An intuitive understanding of time-frequency analysis.</li>
</ul>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Pre-requisites</h1>
<hr>
<ul class="content">
<li>Basic understanding of calculus (limits, derivative, integration).</li>
<li>Basic understanding of probability theory.</li>
<li>Experience in programming (C and Python would be ideal).</li>
</ul>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Course Layout</h1>
<hr>
<ul class="content">
<li><b>Total score: 100</b>[25 + 15 + 15 + 45]
<ul class="content">
<li> Assignments [6]: 5 x 5 = 25</li>
<li> Surprise quizzes [4]: 3 x 5 = 15</li>
<li> Mid-term exam: 15</li>
<li> Final exam: 45</li>
</ul>
</li>
</ul>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Course Content</h1>
<hr>
<ul class="content-packed">
<li><b>Signals and systems</b></li>
<li><b>Continuous-time Signals</b></li>
<li><b>Continuous-time Systems</b></li>
<li><b>Impulse response and convolution integral</b></li>
<li><b>Fourier and Laplace transforms</b></li>
<li><b>Introduction to filtering</b></li>
<li><b>Analog filters</b></li>
<li><b>Sampling theorem</b></li>
<li><b>Discrete-time signals and systems</b></li>
<li><b>Discrete Fourier transform and its computation</b></li>
<li><b>Z-transform</b></li>
<li><b>Digital filter design and implementation</b></li>
<li><b>Stochastic processes and spectral estimation</b></li>
<li><b>Time-frequency analysis</b></li>
</ul>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Reference material:</h1>
<hr>
<ul class="content-packed">
<li>Oppenheim, Alan V., and Ronald W. Schafer. <i>"Discrete-time signal processing."</i> Prentice Hall, New York, 1999.</li>
<li>Proakis, John G. <i>Digital signal processing: principles algorithms and applications.</i> Pearson Education India, 2001.</li>
<li>Devasahayam, Suresh R. <i>Signals and systems in biomedical engineering: signal processing and physiological systems modeling</i>. Springer, 2012.</li>
<li>Haykin, Simon, and Barry Van Veen. <i>Signals and systems</i>. John Wiley & Sons, 2007.</li>
<li>[<i>In progress</i>] https://github.com/siva82kb/intro_to_signal_processing/</li>
</ul>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>What is signal processing?</h1>
<hr>
<ul class="content">
<li class="small">"Signal processing is an enabling technology that encompasses the fundamental theory, applications, algorithms, and implementations of <b>processing or transferring information</b> contained in many different physical, symbolic, or abstract formats broadly designated as signals and uses <b>mathematical, statistical, computational, heuristic, and/or linguistic representations</b>, formalisms, and techniques for <b>representation, modeling, analysis, synthesis, discovery, recovery, sensing, acquisition, extraction, learning, security, or forensics.</b>"[1]</li>
<li>An umbrella term to describe a wide variety of things.</li>
</ul>
<br>
<p class="reference">[1] Moura, J.M.F. (2009). "What is signal processing?, President’s Message". <i>IEEE Signal Processing Magazine</i> <b>26</b> (6). doi:10.1109/MSP.2009.934636</p>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>What is a signal?</h1>
<p class="normal">Any physical quantity carrying information that varies with one or more independent variables.</p>
<div class="formula"> $$ s\left(t\right) = 1.23t^2 - 5.11t +41.5 $$ </div>
<div class="formula"> $$ s\left(x,y\right) = e^{-(x^2 + y^2 + 0.5xy)} $$ </div>
<p class="normal">Mathematical representation will not be possible <i>(e.g. physiological signals, either because the exact function is not known or is too complicated.)</i></p>
<p class="normal"><i>Can you think of example of 3D and 4D signals?</i></p>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Classification of signals</h1>
<p class="normal"> » Based on the dimensions. <i>e.g. 1D, 2D signals</i><br>
<p class="normal"> » <b>Scalar</b> vs. <b>Vector</b>: <i>e.g: gray-scale versus RGB image</i></p>
<div class="formula"> $$ I_g(x,y) \in \mathbb{R} \,\,\, \text{and} \,\,\, I_{color} \in \mathbb{R}^3 $$ </div>
<p class="normal"> » <b>Continuous-time</b> vs. <b>Discrete-time</b>: <i>based on values assumed by the independent variable.</i></p>
<div class="formula">
$$\begin{cases}
x(t) = e^{-0.1t^{2}}, \,\, t \in \mathbb{R} & \text{Continuous-time} \\
x[n] = e^{-0.1n^{2}}, \,\, n \in \mathbb{Z} & \text{Discrete-time}
\end{cases}
$$
</div>
End of explanation
def continuous_discrete_valued_signals():
    t = np.arange(-10, 10.01, 0.01)
    n_steps = 10.
    x_c = np.exp(-0.1 * (t ** 2))
    x_d = (1/n_steps) * np.round(n_steps * x_c)
    fig = figure(figsize=(17,5))
    plot(t, x_c, label="Continuous-valued")
    plot(t, x_d, label="Discrete-valued")
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    xlabel('Time', fontsize=25)
    legend(prop={'size':25});
    savefig("img/cont_disc_val.svg", format="svg")
continuous_discrete_valued_signals()
Explanation: <p class="normal"> » <b>Continuous-valued</b> vs. <b>Discrete-valued</b>: <i>based on values assumed by the dependent variable.</i></p>
<div class="formula">
$$ \begin{cases}
x(t) \in [a, b] & \text{Continuous-valued} \\
x(t) \in \{a_1, a_2, \cdots\} & \text{Discrete-valued} \\
\end{cases}
$$
</div>
End of explanation
def continuous_discrete_combos():
    t = np.arange(-10, 10.01, 0.01)
    n = np.arange(-10, 11, 0.5)
    n_steps = 5.
    # continuous-time continuous-valued signal
    x_t_c = np.exp(-0.1 * (t ** 2))
    # continuous-time discrete-valued signal
    x_t_d = (1/n_steps) * np.round(n_steps * x_t_c)
    # discrete-time continuous-valued signal
    x_n_c = np.exp(-0.1 * (n ** 2))
    # discrete-time discrete-valued signal
    x_n_d = (1/n_steps) * np.round(n_steps * x_n_c)
    figure(figsize=(17,8))
    subplot2grid((2,2), (0,0), rowspan=1, colspan=1)
    plot(t, x_t_c,)
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Continuous-time Continuous-valued", fontsize=25)
    subplot2grid((2,2), (0,1), rowspan=1, colspan=1)
    plot(t, x_t_d,)
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Continuous-time Discrete-valued", fontsize=25)
    subplot2grid((2,2), (1,0), rowspan=1, colspan=1)
    stem(n, x_n_c, basefmt='.')
    ylim(-0.1, 1.1)
    xlim(-10, 10)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Discrete-time Continuous-valued", fontsize=25)
    subplot2grid((2,2), (1,1), rowspan=1, colspan=1)
    stem(n, x_n_d, basefmt='.')
    ylim(-0.1, 1.1)
    xlim(-10, 10)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Discrete-time Discrete-valued", fontsize=25);
    tight_layout();
    savefig("img/signal_types.svg", format="svg");
continuous_discrete_combos()
Explanation: <p class="normal">Last two classifications can be combined to have four possible combinations of signals:</p>
<ul class="content">
<li><i>Continuous-time continuous-valued signals</i></li>
<li><i>Continuous-time discrete-valued signals</i></li>
<li><i>Discrete-time continuous-valued signals</i></li>
<li><i>Discrete-time discrete-valued signals</i></li>
</ul>
End of explanation
def deterministic_stochastic():
    t = np.arange(0., 10., 0.005)
    x = np.exp(-0.5 * t) * np.sin(2 * np.pi * 2 * t)
    y1 = np.random.normal(0, 1., size=len(t))
    y2 = np.random.uniform(0, 1., size=len(t))
    figure(figsize=(17, 10))
    # deterministic signal
    subplot2grid((3,3), (0,0), rowspan=1, colspan=3)
    plot(t, x, label="$e^{-0.5t}\sin 4\pi t$")
    title('Deterministic signal', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25});
    # stochastic signal - normal distribution
    subplot2grid((3,3), (1,0), rowspan=1, colspan=2)
    plot(t, y1, label="Normal distribution")
    ylim(-4, 6)
    title('Stochastic signal (Normal)', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25});
    # histogram
    subplot2grid((3,3), (1,2), rowspan=1, colspan=1)
    hist(y1)
    xlim(-6, 6)
    title("Histogram", fontsize=25)
    xticks(fontsize=25)
    yticks([0, 200, 400, 600], fontsize=25)
    legend(prop={'size':25});
    # stochastic signal - uniform distribution
    subplot2grid((3,3), (2,0), rowspan=1, colspan=2)
    plot(t, y2, label="Uniform distribution")
    ylim(-0.3, 1.5)
    title('Stochastic signal (Uniform)', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25});
    # histogram
    subplot2grid((3,3), (2,2), rowspan=1, colspan=1)
    hist(y2)
    xlim(-0.2, 1.2)
    title("Histogram", fontsize=25)
    xticks(fontsize=25)
    yticks([0, 100, 200], fontsize=25)
    legend(prop={'size':25})
    tight_layout()
    savefig("img/det_stoch.svg", format="svg");
deterministic_stochastic()
Explanation: <p class="normal">EMG recorded from a linear electrode array.</p>
<p class="normal"> » <b>Deterministic</b> vs. <b>Stochastic</b>: <i>e.g. EMG is an example of a stochastic signal.</i></p>
End of explanation
def even_odd_decomposition():
    t = np.arange(-5, 5, 0.01)
    x = (0.5 * np.exp(-(t-2.1)**2) * np.cos(2*np.pi*t) +
         np.exp(-t**2) * np.sin(2*np.pi*3*t))
    figure(figsize=(17,4))
    # Original function
    plot(t, x, label="$x(t)$")
    # Even component (offset by -2 only to separate the curves vertically)
    plot(t, 0.5 * (x + x[::-1]) - 2, label="$x_{even}(t)$")
    # Odd component (offset by +2 only to separate the curves vertically)
    plot(t, 0.5 * (x - x[::-1]) + 2, label="$x_{odd}(t)$")
    xlim(-5, 8)
    title('Decomposition of a signal into even and odd components', fontsize=25)
    xlabel('Time', fontsize=25)
    xticks(fontsize=25)
    yticks([])
    legend(prop={'size':25})
    savefig("img/even_odd.svg", format="svg");
even_odd_decomposition()
Explanation: <p class="normal"> » <b>Even</b> vs. <b>Odd</b>: <i>based on symmetry about the $t=0$ axis.</i></p>
<div class="formula">
$$
\begin{cases}
x(t) = x(-t), & \text{Even signal} \\
x(t) = -x(-t), & \text{Odd signal} \\
\end{cases}
$$
</div>
<p class="normal"><i>Can there be signals that are neither even nor odd?</i></p>
<div class="theorem"><b>Theorem</b>: Any arbitrary function can be represented as a sum of an odd and even function.
<div class="formula">$$ x(t) = x_{even}(t) + x_{odd}(t) $$</div>
where,<div class="formula-inline">$ x_{even}(t) = \frac{x(t) + x(-t)}{2} $</div> and <div class="formula-inline">$ x_{odd}(t) = \frac{x(t) - x(-t)}{2} $</div>.
</div>
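<p class="normal">A quick numerical check of the theorem (illustrative code; the grid and the test function are arbitrary choices): for $x(t) = t + t^{2}$ the even part should be $t^{2}$ and the odd part $t$.</p>
t = np.linspace(-5, 5, 1001)       # symmetric grid containing t = 0, so x[::-1] is x(-t)
x = t + t**2                       # known decomposition: odd part t, even part t^2
x_even = 0.5 * (x + x[::-1])
x_odd = 0.5 * (x - x[::-1])
print(np.allclose(x_even, t**2))          # True
print(np.allclose(x_odd, t))              # True
print(np.allclose(x, x_even + x_odd))     # True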
End of explanation
def memory():
    dt = 0.01
    N = int(np.round(0.5/dt))  # window length in samples (0.5 s)
    t = np.arange(-1.0, 5.0, dt)
    x = 1.0 * np.array([t >= 1.0, t < 3.0]).all(0)
    # memoryless system
    y1 = 0.5 * x
    # system with memory.
    y2 = np.zeros(len(x))
    for i in xrange(len(y2)):
        y2[i] = np.sum(x[max(0, i-N):i]) * dt
    figure(figsize=(17,4))
    plot(t, x, lw=2, label="$x(t)$")
    plot(t, y1, lw=2, label="$0.5x(t)$")
    plot(t, y2, lw=2, label="$\int_{t-0.5}^{t}x(p)dp$")
    xlim(-1, 5)
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    xlabel('Time', fontsize=25)
    legend(prop={'size':25})
    savefig("img/memory.svg", format="svg");
memory()
Explanation: <p class="normal"> » <b>Periodic</b> vs. <b>Non-periodic</b>: <i>a signal is periodic, if and only if</i></p>
<div class="formula">
$$ x(t) = x(t + T), \,\, \forall t, \,\,\, T \text{ is the fundamental period.}$$
</div>
<p class="normal"> » <b>Energy</b> vs. <b>Power</b>: <i>indicates if a signal is short-lived.</i></p>
<div class="formula">
$$
E = \int_{-\infty}^{\infty}\left|x(t)\right|^{2}dt \,\,\,\,\,\,\,\,\,\, P = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}\left|x(t)\right|^{2}dt
$$
</div>
<p class="normal"><i> A signal is an energy signal, if</i></p>
<div class="formula"> $$ 0 < E < \infty $$ </div>
<p class="normal"><i> and a signal is a power signal, if</i></p>
<div class="formula"> $$ 0 < P < \infty $$ </div>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>What is a system?</h1>
<p class="normal">A system is any physical device or algorithm that performs some operation on a signal to transform it into another signal.</p>
<h6 class="header">Introduction to Signal Processing (Lecture 1)</h6>
<h1>Classification of systems</h1>
<p class="normal"><i>Based on the properties of a system:</i></p>
<p class="normal"> » <b>Linearity:</b> $\implies$ <i><b>scaling</b> and <b>superposition</b> </i></p>
<p class="normal"> Lets assume, </p>
<div class="formula"> $$ f: x_i(t) \mapsto y_i(t) $$ </div>
<p class="normal"> The system is linear, if and only if,</p>
<div class="formula"> $$ f: \sum_{i}a_ix_i(t) \mapsto \sum_{i}a_iy_i(t) $$ </div>
<p class="normal"><i>Which of the following systems are linear?</i></p>
<div class="formula">(a) $y(t) = k_1x(t) + k_2x(t-2)$</div>
<div class="formula">(b) $y(t) = \int_{t-T}^{t}x(\tau)d\tau$</div>
<div class="formula">(c) $y(t) = 0.5x(t) + 1.5$</div>
<p class="normal"> » <b>Memory:</b> <i>a system whose output depends on past, present or future values of input is a <b>system with memory</b>, else the system is <b>memoryless</b></i></p>
<p class="normal">Memoryless system: </p>
<div class="formula"> $$ y(t) = 0.5x(t) $$ </div>
<p class="normal">System with memory: </p>
<div class="formula"> $$ y(t) = \int_{t-0.5}^{t}x(\tau)d\tau$$ </div>
End of explanation
%pylab inline
import seaborn as sb
# Functions to generate plots for the different sections.
def signal_examples():
    t = np.arange(-5, 5, 0.01)
    s1 = 1.23 * (t ** 2) - 5.11 * t + 41.5
    x, y = np.arange(-2.0, 2.0, 0.01), np.arange(-2.0, 2.0, 0.01)
    fig = figure(figsize=(17, 7))
    subplot(121)
    plot(t, s1)
    xlabel('Time', fontsize=20)
    xlim(-5, 5)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("$1.23t^2 - 5.11t + 41.5$", fontsize=30)
    subplot(122)
    s = np.array([[np.exp(-(_x**2 + _y**2 + 0.5*_x*_y)) for _y in y] for _x in x])
    X, Y = meshgrid(x, y)
    contourf(X, Y, s)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("$s(x, y) = e^{-(x^2 + y^2 + 0.5xy)}$", fontsize=30);
    savefig("img/signals.svg", format="svg")

def memory():
    dt = 0.01
    N = int(np.round(0.5/dt))  # window length in samples (0.5 s)
    t = np.arange(-1.0, 5.0, dt)
    x = 1.0 * np.array([t >= 1.0, t < 3.0]).all(0)
    # memoryless system
    y1 = 0.5 * x
    # system with memory.
    y2 = np.zeros(len(x))
    for i in xrange(len(y2)):
        y2[i] = np.sum(x[max(0, i-N):i]) * dt
    figure(figsize=(17,4))
    plot(t, x, lw=2, label="$x(t)$")
    plot(t, y1, lw=2, label="$0.5x(t)$")
    plot(t, y2, lw=2, label="$\int_{t-0.5}^{t}x(p)dp$")
    xlim(-1, 5)
    ylim(-0.1, 1.1)
    xlabel('Time', fontsize=15)
    legend(prop={'size':20});

from IPython.core.display import HTML
def css_styling():
    styles = open("../../styles/custom_aero.css", "r").read()
    return HTML(styles)
css_styling()
Explanation: <p class="normal"> » <b>Causality:</b> <i>a system whose output depends on past and present values of input.</i></p>
<p class="normal"> » <b>Time invariance:</b> <i>system remains the same with time.</i></p>
<p class="normal">If a system is time invariant, then if </p>
<div class="formula">$$ x(t) \mapsto y(t) \implies x(t-\tau) \mapsto y(t-\tau)$$</div>
<p class="normal"> » <b>Stability:</b> <i>bounded input produces bounded output</i></p>
<div class="formula">$$ \left|x(t)\right| < M_x < \infty \mapsto \left|y(t)\right| < M_y < \infty $$</div>
<p class="normal"> » <b>Invertibility:</b> <i>input can be recovered from the output</i></p>
End of explanation |
2,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Naive Bayes and Bayes Classifiers
Step1: The data seems like it comes from two normal distributions, with the cyan class being more prevalent than the magenta class. A natural way to model this data would be to create a normal distribution for the cyan data, and another for the magenta distribution.
Let's take a look at doing that. All we need to do is use the from_samples class method of the NormalDistribution class.
Step2: It looks like some aspects of the data are captured well by doing things this way-- specifically the mean and variance of the normal distributions. This allows us to easily calculate $P(D|M)$ as the probability of a sample under either the cyan or magenta distributions using the normal (or Gaussian) probability density equation
Step3: The prior $P(M)$ is a vector of probabilities over the classes that the model can predict, also known as components. In this case, if we draw a sample randomly from the data that we have, there is a ~83% chance that it will come from the cyan class and a ~17% chance that it will come from the magenta class.
Let's multiply the probability densities we got before by this imbalance.
Step4: This looks a lot more faithful to the original data, and actually corresponds to $P(M)P(D|M)$, the prior multiplied by the likelihood. However, these aren't actually probability distributions anymore, as they no longer integrate to 1. This is why the $P(M)P(D|M)$ term has to be normalized by the $P(D)$ term in Bayes' rule in order to get a probability distribution over the components. However, $P(D)$ is difficult to determine exactly-- what is the probability of the data? Well, we can sum over the classes to get that value, since $P(D) = \sum_{i=1}^{c} P(D|M)P(M)$ for a problem with c classes. This translates into $P(D) = P(M=Cyan)P(D|M=Cyan) + P(M=Magenta)P(D|M=Magenta)$ for this specific problem, and those values can just be pulled from the unnormalized plots above.
This gives us the full Bayes' rule, with the posterior $P(M|D)$ being the proportion of density of the above plot coming from each of the two distributions at any point on the line. Let's take a look at the posterior probabilities of the two classes on the same line.
Step5: The top plot shows the same densities as before, while the bottom plot shows the proportion of the density belonging to either class at that point. This proportion is known as the posterior $P(M|D)$, and can be interpreted as the probability of that point belonging to each class. This is one of the native benefits of probabilistic models, that instead of providing a hard class label for each sample, they can provide a soft label in the form of the probability of belonging to each class.
We can implement all of this simply in pomegranate using the NaiveBayes class.
Step6: Looks like we're getting the same plots for the posteriors just through fitting the naive Bayes model directly to data. The predictions made will come directly from the posteriors in this plot, with cyan predictions happening whenever the cyan posterior is greater than the magenta posterior, and vice-versa.
Naive Bayes
In the univariate setting, naive Bayes is identical to a general Bayes classifier. The divergence occurs in the multivariate setting: the naive Bayes model assumes independence of all features, while a Bayes classifier is more general and can support more complicated interactions or covariances between features. Let's take a look at what this means in terms of Bayes' rule.
\begin{align}
P(M|D) &= \frac{P(M)P(D|M)}{P(D)} \\
&= \frac{P(M)\prod_{i=1}^{d}P(D_{i}|M_{i})}{P(D)}
\end{align}
This looks fairly simple to compute, as we just need to pass each dimension into the appropriate distribution and then multiply the returned probabilities together. This simplicity is one of the reasons why naive Bayes is so widely used. Let's look closer at using this in pomegranate, starting off by generating two blobs of data that overlap a bit and inspecting them.
Step7: Now, let's fit our naive Bayes model to this data using pomegranate. We can use the from_samples class method, pass in the distribution that we want to model each dimension, and then the data. We choose to use NormalDistribution in this particular case, but any supported distribution would work equally well, such as BernoulliDistribution or ExponentialDistribution. To ensure we get the correct decision boundary, let's also plot the boundary recovered by sklearn.
Step8: Drawing the decision boundary helps to verify that we've produced a good result by cleanly splitting the two blobs from each other.
Bayes' rule provides a great deal of flexibility in terms of what the actually likelihood functions are. For example, when considering a multivariate distribution, there is no need for each dimension to be modeled by the same distribution. In fact, each dimension can be modeled by a different distribution, as long as we can multiply the $P(D|M)$ terms together.
Let's consider the example of some noisy signals that have been segmented. We know that they come from two underlying phenomena, the cyan phenomena and the magenta phenomena, and want to classify future segments. To do this, we have three features-- the mean signal of the segment, the standard deviation, and the duration.
Step9: We can start by modeling each variable as Gaussians, like before, and see what accuracy we get.
Step10: We get identical values for sklearn and for pomegranate, which is good. However, let's take a look at the data itself to see whether a Gaussian distribution is the appropriate distribution for the data.
Step11: So, unsurprisingly (since you can see that I used non-Gaussian distributions to generate the data originally), it looks like only the mean follows a normal distribution, whereas the standard deviation seems to follow either a gamma or a log-normal distribution. We can take advantage of that by explicitly using these distributions instead of approximating them as normal distributions. pomegranate is flexible enough to allow for this, whereas sklearn currently is not.
Step12: It looks like we're able to get a small improvement in accuracy just by using appropriate distributions for the features, without any type of data transformation or filtering. This certainly seems worthwhile if you can determine what the appropriate underlying distribution is.
Next, there's obviously the issue of speed. Let's compare the speed of the pomegranate implementation and the sklearn implementation.
Step13: Looks as if on this small dataset they're all taking approximately the same time. This is pretty much expected, as the fitting step is fairly simple and both implementations use C-level numerics for the calculations. We can give a more thorough treatment of the speed comparison on larger datasets. Let's look at the average time it takes to fit a model to data of increasing dimensionality across 25 runs.
Step14: It appears as if the two implementations are basically the same speed. This is unsurprising given the simplicity of the calculations, and as mentioned before, the low level implementation.
Bayes Classifiers
The natural generalization of the naive Bayes classifier is to allow any multivariate function take the place of $P(D|M)$ instead of it being the product of several univariate probability distributions. One immediate difference is that now instead of creating a Gaussian model with effectively a diagonal covariance matrix, you can now create one with a full covariance matrix. Let's see an example of that at work.
Step15: It looks like we are able to get a better boundary between the two blobs of data. The primary reason for this is that the data don't form spherical clusters, as you assume when you force a diagonal covariance matrix, but tilted ellipsoids, which can be better modeled by a full covariance matrix. We can quantify this quickly by looking at performance on the training data.
Step16: Looks like there is a significant boost. Naturally you'd want to evaluate the performance of the model on separate validation data, but for the purposes of demonstrating the effect of a full covariance matrix this should be sufficient.
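The code for these last two steps is not included in this excerpt, but a sketch of how they might look with pomegranate's BayesClassifier follows (assuming the pre-1.0 from_samples interface used elsewhere in this notebook, and that X, y hold the two-dimensional blob training data).
# Sketch: a Bayes classifier with a full-covariance Gaussian per class, and one with a
# 2-component Gaussian mixture per class. X, y are assumed to be the 2-D training blobs.
from pomegranate import BayesClassifier, MultivariateGaussianDistribution, GeneralMixtureModel

model_full = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y)
print((model_full.predict(X) == y).mean())

mixtures = [GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[y == k])
            for k in (0, 1)]
model_gmm = BayesClassifier(mixtures)
print((model_gmm.predict(X) == y).mean())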
While using a full covariance matrix is certainly more complicated than using only the diagonal, there is no reason that the $P(D|M)$ has to even be a single simple distribution versus a full probabilistic model. After all, all probabilistic models, including general mixtures, hidden Markov models, and Bayesian networks, can calculate $P(D|M)$. Let's take a look at an example of using a mixture model instead of a single gaussian distribution. | Python Code:
X = numpy.concatenate((numpy.random.normal(3, 1, 200), numpy.random.normal(10, 2, 1000)))
y = numpy.concatenate((numpy.zeros(200), numpy.ones(1000)))
x1 = X[:200]
x2 = X[200:]
plt.figure(figsize=(16, 5))
plt.hist(x1, bins=25, color='m', edgecolor='m', label="Class A")
plt.hist(x2, bins=25, color='c', edgecolor='c', label="Class B")
plt.xlabel("Value", fontsize=14)
plt.ylabel("Count", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: Naive Bayes and Bayes Classifiers: A Tutorial
author: Jacob Schreiber <br>
contact: [email protected]
Bayes classifiers are some of the simplest machine learning models that exist, due to their intuitive probabilistic interpretation and simple fitting step. Each class is modeled as a probability distribution, and the data is interpreted as samples drawn from these underlying distributions. Fitting the model to data is as simple as calculating maximum likelihood parameters for the data that falls under each class, and making predictions is as simple as using Bayes' rule to determine which class is most likely given the distributions. Bayes' Rule is the following:
\begin{equation}
P(M|D) = \frac{P(D|M)P(M)}{P(D)}
\end{equation}
where M stands for the model and D stands for the data. $P(M)$ is known as the <i>prior</i>, because it is the probability that a sample is of a certain class before you even know what the sample is. This is generally just the frequency of each class. Intuitively, it makes sense that you would want to model this, because if one class occurs 10x more than another class, it is more likely that a given sample will belong to that distribution. $P(D|M)$ is the likelihood, or the probability, of the data under a given model. Lastly, $P(M|D)$ is the posterior, which is the probability of each component of the model, or class, being the component which generated the data. It is called the posterior because the prior corresponds to probabilities before seeing data, and the posterior corresponds to probabilities after observing the data. In cases where the prior is uniform, the posterior is just equal to the normalized likelihoods. This equation forms the basis of most probabilistic modeling, with interesting priors allowing the user to inject sophisticated expert knowledge into the problem directly.
Let's take a look at some single dimensional data in order to introduce these concepts more thoroughly.
End of explanation
d1 = NormalDistribution.from_samples(x1)
d2 = NormalDistribution.from_samples(x2)
idxs = numpy.arange(0, 15, 0.1)
p1 = map(d1.probability, idxs)
p2 = map(d2.probability, idxs)
plt.figure(figsize=(16, 5))
plt.plot(idxs, p1, color='m'); plt.fill_between(idxs, 0, p1, facecolor='m', alpha=0.2)
plt.plot(idxs, p2, color='c'); plt.fill_between(idxs, 0, p2, facecolor='c', alpha=0.2)
plt.xlabel("Value", fontsize=14)
plt.ylabel("Probability", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: The data seems like it comes from two normal distributions, with the cyan class being more prevalent than the magenta class. A natural way to model this data would be to create a normal distribution for the cyan data, and another for the magenta distribution.
Let's take a look at doing that. All we need to do is use the from_samples class method of the NormalDistribution class.
End of explanation
magenta_prior = 1. * len(x1) / len(X)
cyan_prior = 1. * len(x2) / len(X)
plt.figure(figsize=(4, 6))
plt.title("Prior Probabilities P(M)", fontsize=14)
plt.bar(0, magenta_prior, facecolor='m', edgecolor='m')
plt.bar(1, cyan_prior, facecolor='c', edgecolor='c')
plt.xticks([0, 1], ['P(Magenta)', 'P(Cyan)'], fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: It looks like some aspects of the data are captured well by doing things this way-- specifically the mean and variance of the normal distributions. This allows us to easily calculate $P(D|M)$ as the probability of a sample under either the cyan or magenta distributions using the normal (or Gaussian) probability density equation:
\begin{align}
P(D|M) &= P(x|\mu, \sigma) \\
&= \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp \left(-\frac{(x-\mu)^{2}}{2\sigma^{2}} \right)
\end{align}
However, if we look at the original data, we see that the cyan distribution is both much wider than the magenta distribution and much taller, as there were more samples from that class in general. If we reduce that data down to these two distributions, we lose the class imbalance. We want our prior to model this class imbalance, with the reasoning being that if we randomly draw a sample from the samples observed thus far, it is far more likely to be a cyan than a magenta sample. Let's take a look at this class imbalance exactly.
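As a quick sanity check (a sketch; this assumes the usual pomegranate convention that d1.parameters holds [mean, std]), the density formula above can be evaluated by hand and compared with what the fitted distribution returns.
mu, sigma = d1.parameters    # parameters fit to the magenta class
x = 4.0
p_manual = 1.0 / numpy.sqrt(2 * numpy.pi * sigma ** 2) * numpy.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
print("%.6f %.6f" % (p_manual, d1.probability(x)))   # the two values should agree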
End of explanation
d1 = NormalDistribution.from_samples(x1)
d2 = NormalDistribution.from_samples(x2)
idxs = numpy.arange(0, 15, 0.1)
p_magenta = numpy.array(map(d1.probability, idxs)) * magenta_prior
p_cyan = numpy.array(map(d2.probability, idxs)) * cyan_prior
plt.figure(figsize=(16, 5))
plt.plot(idxs, p_magenta, color='m'); plt.fill_between(idxs, 0, p_magenta, facecolor='m', alpha=0.2)
plt.plot(idxs, p_cyan, color='c'); plt.fill_between(idxs, 0, p_cyan, facecolor='c', alpha=0.2)
plt.xlabel("Value", fontsize=14)
plt.ylabel("P(M)P(D|M)", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: The prior $P(M)$ is a vector of probabilities over the classes that the model can predict, also known as components. In this case, if we draw a sample randomly from the data that we have, there is a ~83% chance that it will come from the cyan class and a ~17% chance that it will come from the magenta class.
Let's multiply the probability densities we got before by this imbalance.
End of explanation
magenta_posterior = p_magenta / (p_magenta + p_cyan)
cyan_posterior = p_cyan / (p_magenta + p_cyan)
plt.figure(figsize=(16, 5))
plt.subplot(211)
plt.plot(idxs, p_magenta, color='m'); plt.fill_between(idxs, 0, p_magenta, facecolor='m', alpha=0.2)
plt.plot(idxs, p_cyan, color='c'); plt.fill_between(idxs, 0, p_cyan, facecolor='c', alpha=0.2)
plt.xlabel("Value", fontsize=14)
plt.ylabel("P(M)P(D|M)", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.subplot(212)
plt.plot(idxs, magenta_posterior, color='m')
plt.plot(idxs, cyan_posterior, color='c')
plt.xlabel("Value", fontsize=14)
plt.ylabel("P(M|D)", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: This looks a lot more faithful to the original data, and actually corresponds to $P(M)P(D|M)$, the prior multiplied by the likelihood. However, these aren't actually probability distributions anymore, as they no longer integrate to 1. This is why the $P(M)P(D|M)$ term has to be normalized by the $P(D)$ term in Bayes' rule in order to get a probability distribution over the components. However, $P(D)$ is difficult to determine exactly-- what is the probability of the data? Well, we can sum over the classes to get that value, since $P(D) = \sum_{i=1}^{c} P(D|M)P(M)$ for a problem with c classes. This translates into $P(D) = P(M=Cyan)P(D|M=Cyan) + P(M=Magenta)P(D|M=Magenta)$ for this specific problem, and those values can just be pulled from the unnormalized plots above.
This gives us the full Bayes' rule, with the posterior $P(M|D)$ being the proportion of density of the above plot coming from each of the two distributions at any point on the line. Let's take a look at the posterior probabilities of the two classes on the same line.
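Written out at a single point, the normalization looks like the following sketch (it reuses the distributions and priors defined above; the evaluation point is arbitrary).
x = 6.0
joint_magenta = magenta_prior * d1.probability(x)   # P(M=magenta) P(x | magenta)
joint_cyan = cyan_prior * d2.probability(x)         # P(M=cyan) P(x | cyan)
p_data = joint_magenta + joint_cyan                 # P(x), summing over the two classes
print("P(magenta | x) = %.3f" % (joint_magenta / p_data))
print("P(cyan | x) = %.3f" % (joint_cyan / p_data))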
End of explanation
idxs = idxs.reshape(idxs.shape[0], 1)
X = X.reshape(X.shape[0], 1)
model = NaiveBayes.from_samples(NormalDistribution, X, y)
posteriors = model.predict_proba(idxs)
plt.figure(figsize=(14, 4))
plt.plot(idxs, posteriors[:,0], color='m')
plt.plot(idxs, posteriors[:,1], color='c')
plt.xlabel("Value", fontsize=14)
plt.ylabel("P(M|D)", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: The top plot shows the same densities as before, while the bottom plot shows the proportion of the density belonging to either class at that point. This proportion is known as the posterior $P(M|D)$, and can be interpreted as the probability of that point belonging to each class. This is one of the native benefits of probabilistic models, that instead of providing a hard class label for each sample, they can provide a soft label in the form of the probability of belonging to each class.
We can implement all of this simply in pomegranate using the NaiveBayes class.
End of explanation
X = numpy.concatenate([numpy.random.normal(3, 2, size=(150, 2)), numpy.random.normal(7, 1, size=(250, 2))])
y = numpy.concatenate([numpy.zeros(150), numpy.ones(250)])
plt.figure(figsize=(8, 8))
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c')
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m')
plt.xlim(-2, 10)
plt.ylim(-4, 12)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: Looks like we're getting the same plots for the posteriors just through fitting the naive Bayes model directly to data. The predictions made will come directly from the posteriors in this plot, with cyan predictions happening whenever the cyan posterior is greater than the magenta posterior, and vice-versa.
Naive Bayes
In the univariate setting, naive Bayes is identical to a general Bayes classifier. The divergence occurs in the multivariate setting: the naive Bayes model assumes independence of all features, while a Bayes classifier is more general and can support more complicated interactions or covariances between features. Let's take a look at what this means in terms of Bayes' rule.
\begin{align}
P(M|D) &= \frac{P(M)P(D|M)}{P(D)} \\
&= \frac{P(M)\prod_{i=1}^{d}P(D_{i}|M_{i})}{P(D)}
\end{align}
This looks fairly simple to compute, as we just need to pass each dimension into the appropriate distribution and then multiply the returned probabilities together. This simplicity is one of the reasons why naive Bayes is so widely used. Let's look closer at using this in pomegranate, starting off by generating two blobs of data that overlap a bit and inspecting them.
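For one sample, the per-dimension product can be written out by hand as in the sketch below (it uses the blob data X, y generated above; up to numerical details this mirrors what NaiveBayes.from_samples does internally, so it should agree with the predict_proba output of the model fit in the next cell).
x = numpy.array([5.0, 6.0])   # an arbitrary query point
priors = numpy.array([(y == 0).mean(), (y == 1).mean()])
joint = []
for k in (0, 1):
    dists = [NormalDistribution.from_samples(X[y == k, j]) for j in range(X.shape[1])]
    likelihood = numpy.prod([d.probability(x[j]) for j, d in enumerate(dists)])
    joint.append(priors[k] * likelihood)
posterior = numpy.array(joint) / numpy.sum(joint)
print(posterior)   # should match the corresponding row of model.predict_proba([x])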
End of explanation
from sklearn.naive_bayes import GaussianNB
model = NaiveBayes.from_samples(NormalDistribution, X, y)
clf = GaussianNB().fit(X, y)
xx, yy = np.meshgrid(np.arange(-2, 10, 0.02), np.arange(-4, 12, 0.02))
Z1 = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
Z2 = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(16, 8))
plt.subplot(121)
plt.title("pomegranate naive Bayes", fontsize=16)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c')
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m')
plt.contour(xx, yy, Z1)
plt.xlim(-2, 10)
plt.ylim(-4, 12)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.subplot(122)
plt.title("sklearn naive Bayes", fontsize=16)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c')
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m')
plt.contour(xx, yy, Z2)
plt.xlim(-2, 10)
plt.ylim(-4, 12)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
Explanation: Now, let's fit our naive Bayes model to this data using pomegranate. We can use the from_samples class method, pass in the distribution that we want to model each dimension, and then the data. We choose to use NormalDistribution in this particular case, but any supported distribution would work equally well, such as BernoulliDistribution or ExponentialDistribution. To ensure we get the correct decision boundary, let's also plot the boundary recovered by sklearn.
End of explanation
def plot_signal(X, n):
    plt.figure(figsize=(16, 6))
    t_current = 0
    for i in range(n):
        mu, std, t = X[i]
        chunk = numpy.random.normal(mu, std, int(t))
        plt.plot(numpy.arange(t_current, t_current+t), chunk, c='cm'[i % 2])
        t_current += t
    plt.xticks(fontsize=14)
    plt.yticks(fontsize=14)
    plt.xlabel("Time (s)", fontsize=14)
    plt.ylabel("Signal", fontsize=14)
    plt.ylim(20, 40)
    plt.show()

def create_signal(n):
    X, y = [], []
    for i in range(n):
        mu = numpy.random.normal(30.0, 0.4)
        std = numpy.random.lognormal(-0.1, 0.4)
        t = int(numpy.random.exponential(50)) + 1
        X.append([mu, std, int(t)])
        y.append(0)
        mu = numpy.random.normal(30.5, 0.8)
        std = numpy.random.lognormal(-0.3, 0.6)
        t = int(numpy.random.exponential(200)) + 1
        X.append([mu, std, int(t)])
        y.append(1)
    return numpy.array(X), numpy.array(y)
X_train, y_train = create_signal(1000)
X_test, y_test = create_signal(250)
plot_signal(X_train, 20)
Explanation: Drawing the decision boundary helps to verify that we've produced a good result by cleanly splitting the two blobs from each other.
Bayes' rule provides a great deal of flexibility in terms of what the actually likelihood functions are. For example, when considering a multivariate distribution, there is no need for each dimension to be modeled by the same distribution. In fact, each dimension can be modeled by a different distribution, as long as we can multiply the $P(D|M)$ terms together.
Let's consider the example of some noisy signals that have been segmented. We know that they come from two underlying phenomena, the cyan phenomena and the magenta phenomena, and want to classify future segments. To do this, we have three features-- the mean signal of the segment, the standard deviation, and the duration.
End of explanation
model = NaiveBayes.from_samples(NormalDistribution, X_train, y_train)
print "Gaussian Naive Bayes: ", (model.predict(X_test) == y_test).mean()
clf = GaussianNB().fit(X_train, y_train)
print "sklearn Gaussian Naive Bayes: ", (clf.predict(X_test) == y_test).mean()
Explanation: We can start by modeling each variable as Gaussians, like before, and see what accuracy we get.
End of explanation
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.title("Mean")
plt.hist(X_train[y_train == 0, 0], color='c', alpha=0.5, bins=25)
plt.hist(X_train[y_train == 1, 0], color='m', alpha=0.5, bins=25)
plt.subplot(132)
plt.title("Standard Deviation")
plt.hist(X_train[y_train == 0, 1], color='c', alpha=0.5, bins=25)
plt.hist(X_train[y_train == 1, 1], color='m', alpha=0.5, bins=25)
plt.subplot(133)
plt.title("Duration")
plt.hist(X_train[y_train == 0, 2], color='c', alpha=0.5, bins=25)
plt.hist(X_train[y_train == 1, 2], color='m', alpha=0.5, bins=25)
plt.show()
Explanation: We get identical values for sklearn and for pomegranate, which is good. However, let's take a look at the data itself to see whether a Gaussian distribution is the appropriate distribution for the data.
End of explanation
model = NaiveBayes.from_samples(NormalDistribution, X_train, y_train)
print "Gaussian Naive Bayes: ", (model.predict(X_test) == y_test).mean()
clf = GaussianNB().fit(X_train, y_train)
print "sklearn Gaussian Naive Bayes: ", (clf.predict(X_test) == y_test).mean()
model = NaiveBayes.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], X_train, y_train)
print "Heterogeneous Naive Bayes: ", (model.predict(X_test) == y_test).mean()
Explanation: So, unsurprisingly (since you can see that I used non-Gaussian distributions to generate the data originally), it looks like only the mean follows a normal distribution, whereas the standard deviation seems to follow either a gamma or a log-normal distribution. We can take advantage of that by explicitly using these distributions instead of approximating them as normal distributions. pomegranate is flexible enough to allow for this, whereas sklearn currently is not.
End of explanation
%timeit GaussianNB().fit(X_train, y_train)
%timeit NaiveBayes.from_samples(NormalDistribution, X_train, y_train)
%timeit NaiveBayes.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], X_train, y_train)
Explanation: It looks like we're able to get a small improvement in accuracy just by using appropriate distributions for the features, without any type of data transformation or filtering. This certainly seems worthwhile if you can determine what the appropriate underlying distribution is.
Next, there's obviously the issue of speed. Let's compare the speed of the pomegranate implementation and the sklearn implementation.
End of explanation
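# Sketch (not part of the original notebook): one way to decide which distribution
# to use for a feature is to fit each candidate and compare the total log-likelihood
# it assigns to the training data (higher is better). Assumes X_train from above.
candidates = [NormalDistribution, LogNormalDistribution, ExponentialDistribution]
stds = X_train[:, 1]  # the standard deviation feature
for dist in candidates:
    fit = dist.from_samples(stds)
    print dist.__name__, numpy.sum(fit.log_probability(stds))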
pom_time, skl_time = [], []
n1, n2 = 15000, 60000
for d in range(1, 101, 5):
X = numpy.concatenate([numpy.random.normal(3, 2, size=(n1, d)), numpy.random.normal(7, 1, size=(n2, d))])
y = numpy.concatenate([numpy.zeros(n1), numpy.ones(n2)])
tic = time.time()
for i in range(25):
GaussianNB().fit(X, y)
skl_time.append((time.time() - tic) / 25)
tic = time.time()
for i in range(25):
NaiveBayes.from_samples(NormalDistribution, X, y)
pom_time.append((time.time() - tic) / 25)
plt.figure(figsize=(14, 6))
plt.plot(range(1, 101, 5), pom_time, color='c', label="pomegranate")
plt.plot(range(1, 101, 5), skl_time, color='m', label="sklearn")
plt.xticks(fontsize=14)
plt.xlabel("Number of Dimensions", fontsize=14)
plt.yticks(fontsize=14)
plt.ylabel("Time (s)")
plt.legend(fontsize=14)
plt.show()
Explanation: Looks as if on this small dataset they're all taking approximately the same time. This is pretty much expected, as the fitting step is fairly simple and both implementations use C-level numerics for the calculations. We can give a more thorough treatment of the speed comparison on larger datasets. Let's look at the average time it takes to fit a model to data of increasing dimensionality across 25 runs.
End of explanation
tilt_a = [[-2, 0.5], [5, 2]]
tilt_b = [[-1, 1.5], [3, 3]]
X = numpy.concatenate((numpy.random.normal(4, 1, size=(250, 2)).dot(tilt_a), numpy.random.normal(3, 1, size=(800, 2)).dot(tilt_b)))
y = numpy.concatenate((numpy.zeros(250), numpy.ones(800)))
model_a = NaiveBayes.from_samples(NormalDistribution, X, y)
model_b = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y)
xx, yy = np.meshgrid(np.arange(-5, 30, 0.02), np.arange(0, 25, 0.02))
Z1 = model_a.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
Z2 = model_b.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(18, 8))
plt.subplot(121)
plt.contour(xx, yy, Z1)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3)
plt.xlim(-5, 30)
plt.ylim(0, 25)
plt.subplot(122)
plt.contour(xx, yy, Z2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3)
plt.xlim(-5, 30)
plt.ylim(0, 25)
plt.show()
Explanation: It appears as if the two implementations are basically the same speed. This is unsurprising given the simplicity of the calculations, and as mentioned before, the low level implementation.
Bayes Classifiers
The natural generalization of the naive Bayes classifier is to allow any multivariate function to take the place of $P(D|M)$, instead of it being the product of several univariate probability distributions. One immediate difference is that instead of creating a Gaussian model with effectively a diagonal covariance matrix, you can now create one with a full covariance matrix. Let's see an example of that at work.
End of explanation
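# Sketch (not in the original notebook): the full-covariance classifier stores one
# covariance matrix per class, which is what captures the tilt of each blob.
# Assumes model_b from above; for MultivariateGaussianDistribution, .parameters is
# taken here to hold [mean, covariance] -- that layout is an assumption to verify.
for i, d in enumerate(model_b.distributions):
    print "class", i, "mean:", d.mu
    print "class", i, "covariance:"
    print numpy.array(d.parameters[1])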
print "naive training accuracy: {:4.4}".format((model_a.predict(X) == y).mean())
print "bayes classifier training accuracy: {:4.4}".format((model_b.predict(X) == y).mean())
Explanation: It looks like we are able to get a better boundary between the two blobs of data. The primary reason for this is that the data don't form the spherical clusters you assume when you force a diagonal covariance matrix, but rather tilted ellipsoids that can be better modeled by a full covariance matrix. We can quantify this quickly by looking at performance on the training data.
End of explanation
X = numpy.empty(shape=(0, 2))
X = numpy.concatenate((X, numpy.random.normal(4, 1, size=(200, 2)).dot([[-2, 0.5], [2, 0.5]])))
X = numpy.concatenate((X, numpy.random.normal(3, 1, size=(350, 2)).dot([[-1, 2], [1, 0.8]])))
X = numpy.concatenate((X, numpy.random.normal(7, 1, size=(700, 2)).dot([[-0.75, 0.8], [0.9, 1.5]])))
X = numpy.concatenate((X, numpy.random.normal(6, 1, size=(120, 2)).dot([[-1.5, 1.2], [0.6, 1.2]])))
y = numpy.concatenate((numpy.zeros(550), numpy.ones(820)))
model_a = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y)
gmm_a = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[y == 0])
gmm_b = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[y == 1])
model_b = BayesClassifier([gmm_a, gmm_b], weights=numpy.array([1-y.mean(), y.mean()]))
xx, yy = np.meshgrid(np.arange(-10, 10, 0.02), np.arange(0, 25, 0.02))
Z1 = model_a.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
Z2 = model_b.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
centroids1 = numpy.array([distribution.mu for distribution in model_a.distributions])
centroids2 = numpy.concatenate([[distribution.mu for distribution in component.distributions] for component in model_b.distributions])
plt.figure(figsize=(18, 8))
plt.subplot(121)
plt.contour(xx, yy, Z1)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3)
plt.scatter(centroids1[:,0], centroids1[:,1], color='k', s=100)
plt.subplot(122)
plt.contour(xx, yy, Z2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3)
plt.scatter(centroids2[:,0], centroids2[:,1], color='k', s=100)
plt.show()
Explanation: Looks like there is a significant boost. Naturally you'd want to evaluate the performance of the model on separate validation data, but for the purposes of demonstrating the effect of a full covariance matrix this should be sufficient.
While using a full covariance matrix is certainly more complicated than using only the diagonal, there is no reason that the $P(D|M)$ has to even be a single simple distribution versus a full probabilistic model. After all, all probabilistic models, including general mixtures, hidden Markov models, and Bayesian networks, can calculate $P(D|M)$. Let's take a look at an example of using a mixture model instead of a single gaussian distribution.
End of explanation |
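# Sketch (not in the original notebook): the text above notes that training accuracy
# is not a proper evaluation. A minimal held-out check, assuming the same X, y and
# model-building calls as in the previous cells (sklearn's splitter is used purely
# for convenience here):
from sklearn.model_selection import train_test_split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
heldout = BayesClassifier.from_samples(MultivariateGaussianDistribution, X_tr, y_tr)
print "held-out accuracy: {:4.4}".format((heldout.predict(X_te) == y_te).mean())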
2,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power using DICS beamformer
Compute a Dynamic Imaging of Coherent Sources (DICS)
Step1: Reading the raw data and creating epochs
Step2: We are interested in the beta band. Define a range of frequencies, using a
log scale, from 12 to 30 Hz.
Step3: Computing the cross-spectral density matrix for the beta frequency band, for
different time intervals. We use a decim value of 20 to speed up the
computation in this example at the loss of accuracy.
Step4: To compute the source power for a frequency band, rather than each frequency
separately, we average the CSD objects across frequencies.
Step5: Computing DICS spatial filters using the CSD that was computed on the entire
timecourse.
Step6: Applying DICS spatial filters separately to the CSD computed using the
baseline and the CSD computed during the ERS activity.
Step7: Visualizing source power during ERS activity relative to the baseline power. | Python Code:
# Author: Marijn van Vliet <[email protected]>
# Roman Goj <[email protected]>
# Denis Engemann <[email protected]>
# Stefan Appelhoff <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import somato
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
print(__doc__)
Explanation: Compute source power using DICS beamformer
Compute a Dynamic Imaging of Coherent Sources (DICS) :footcite:GrossEtAl2001
filter from single-trial activity to estimate source power across a frequency
band. This example demonstrates how to source localize the event-related
synchronization (ERS) of beta band activity in the
somato dataset <somato-dataset>.
End of explanation
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# Use a shorter segment of raw just for speed here
raw = mne.io.read_raw_fif(raw_fname)
raw.crop(0, 120) # one minute for speed (looks similar to using all ~800 sec)
# Read epochs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, preload=True)
del raw
# Paths to forward operator and FreeSurfer subject directory
fname_fwd = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
subjects_dir = op.join(data_path, 'derivatives', 'freesurfer', 'subjects')
Explanation: Reading the raw data and creating epochs:
End of explanation
freqs = np.logspace(np.log10(12), np.log10(30), 9)
Explanation: We are interested in the beta band. Define a range of frequencies, using a
log scale, from 12 to 30 Hz.
End of explanation
csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20)
csd_baseline = csd_morlet(epochs, freqs, tmin=-1, tmax=0, decim=20)
# ERS activity starts at 0.5 seconds after stimulus onset
csd_ers = csd_morlet(epochs, freqs, tmin=0.5, tmax=1.5, decim=20)
info = epochs.info
del epochs
Explanation: Computing the cross-spectral density matrix for the beta frequency band, for
different time intervals. We use a decim value of 20 to speed up the
computation in this example at the loss of accuracy.
End of explanation
csd = csd.mean()
csd_baseline = csd_baseline.mean()
csd_ers = csd_ers.mean()
Explanation: To compute the source power for a frequency band, rather than each frequency
separately, we average the CSD objects across frequencies.
End of explanation
fwd = mne.read_forward_solution(fname_fwd)
filters = make_dics(info, fwd, csd, noise_csd=csd_baseline,
pick_ori='max-power', reduce_rank=True, real_filter=True)
del fwd
Explanation: Computing DICS spatial filters using the CSD that was computed on the entire
timecourse.
End of explanation
baseline_source_power, freqs = apply_dics_csd(csd_baseline, filters)
beta_source_power, freqs = apply_dics_csd(csd_ers, filters)
Explanation: Applying DICS spatial filters separately to the CSD computed using the
baseline and the CSD computed during the ERS activity.
End of explanation
stc = beta_source_power / baseline_source_power
message = 'DICS source power in the 12-30 Hz frequency band'
brain = stc.plot(hemi='both', views='axial', subjects_dir=subjects_dir,
subject=subject, time_label=message)
Explanation: Visualizing source power during ERS activity relative to the baseline power.
End of explanation |
2,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Propriedades-da-Convolução" data-toc-modified-id="Propriedades-da-Convolução-1"><span class="toc-item-num">1 </span>Propriedades da Convolução</a></div><div class="lev2 toc-item"><a href="#Translação-por-um-impulso" data-toc-modified-id="Translação-por-um-impulso-11"><span class="toc-item-num">1.1 </span>Translação por um impulso</a></div><div class="lev2 toc-item"><a href="#Resposta-ao-impulso" data-toc-modified-id="Resposta-ao-impulso-12"><span class="toc-item-num">1.2 </span>Resposta ao impulso</a></div><div class="lev2 toc-item"><a href="#Decomposição" data-toc-modified-id="Decomposição-13"><span class="toc-item-num">1.3 </span>Decomposição</a></div><div class="lev3 toc-item"><a href="#Visualizando-as-imagens
Step1: Translação por um impulso
Quando o núcleo da composição é composto de apenas um único valor um e os demais zeros, a
imagem resultante será a translação da imagem original pelas coordenadas do valor não zero do núcleo.
No exemplo a seguir, o núcleo da convolução consiste do valor 1 na coordenada (19,59). Assim, a imagem
resultante ficara deslocada de 19 pixels para baixo e 59 para a direita. Observe que como estamos
tratando as imagens como infinitas com valores zeros fora do retângulo da imagem, esta translação faz
com que o retângulo da imagem aumente e vários valores iguais a zero sejam agora visíveis.
Step2: Resposta ao impulso
Quando a imagem é formada por um único pixel de valor 1, o resultado da convolução é o núcleo da convolução. Esta propriedade permite que se visualize o núcleo da convolução. Se você souber que existe algum software que possui um filtro linear invariante à translação e você não sabe qual é o seu núcleo, basta aplicá-lo numa imagem com um único pixel igual a 1. O resultado do filtro revelará o seu núcleo. Na ilustração a seguir, uma imagem com vários impulsos é criada. Após aplicar a convolução com um filtro qualquer, é possível visualizar o núcleo sendo repetido em cada lugar do impulso.
Step3: Decomposição
A propriedade da associatividade da convolução é dada por
Step4: Note a grande diferença no tempo de execução com o nucleo original e com o nucleo separado
Visualizando as imagens | Python Code:
# importing the function to be used in this tutorial
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Propriedades-da-Convolução" data-toc-modified-id="Propriedades-da-Convolução-1"><span class="toc-item-num">1 </span>Propriedades da Convolução</a></div><div class="lev2 toc-item"><a href="#Translação-por-um-impulso" data-toc-modified-id="Translação-por-um-impulso-11"><span class="toc-item-num">1.1 </span>Translação por um impulso</a></div><div class="lev2 toc-item"><a href="#Resposta-ao-impulso" data-toc-modified-id="Resposta-ao-impulso-12"><span class="toc-item-num">1.2 </span>Resposta ao impulso</a></div><div class="lev2 toc-item"><a href="#Decomposição" data-toc-modified-id="Decomposição-13"><span class="toc-item-num">1.3 </span>Decomposição</a></div><div class="lev3 toc-item"><a href="#Visualizando-as-imagens:" data-toc-modified-id="Visualizando-as-imagens:-131"><span class="toc-item-num">1.3.1 </span>Visualizando as imagens:</a></div>
# Properties of Convolution
Convolution has several properties that are useful both for a better understanding
of how it works and for practical use. Three properties are illustrated here: translation
by an impulse, impulse response, and decomposition of the convolution kernel.
End of explanation
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
os.chdir('../data')
f = mpimg.imread('cameraman.tif')
h = np.zeros((20,60))
h[19,59] = 1
nb = ia.nbshow(3)
nb.nbshow(f,'entrada')
g = ia.conv(f,h)
nb.nbshow(g.astype(np.uint8),'entrada translada de (20,60)')
nb.nbshow()
Explanation: Translation by an impulse
When the convolution kernel consists of a single value one, with zeros everywhere else, the
resulting image is the original image translated by the coordinates of the kernel's nonzero value.
In the example below, the convolution kernel has the value 1 at coordinate (19,59), so the
resulting image is shifted 19 pixels down and 59 pixels to the right. Note that, since we treat
images as infinite with zero values outside the image rectangle, this translation enlarges
the image rectangle and several zero values become visible.
End of explanation
import numpy as np
# generating an image with impulses
# 1 impulse every 4 rows and 4 columns
f = np.zeros((4,4))
f[3,3]= 1
f = np.tile(f,(2,2))
print('Matriz com impulsos:\n',f)
# generating the filter
h = np.array([ [1,2,3],[4,5,6],[7,8,9]])
print('\nNucleo do Filtro:\n',h)
g = ia.conv(f,h)
print('\nVisualização do núcleo após aplicar o fitro sobre a matriz com pulsos:\n',g)
import numpy as np
print('Aplicando a resposta ao impulso numa imagem real para ilustrar o seu comportamento')
# generating an image with impulses
f = np.zeros((40,40))
f[20,20]= 1
f = np.tile(f,(10,10))
f = ia.normalize(f)
nb.nbshow(f, 'imagem original')
# generating the filter - a disk of radius 20
r,c = np.indices( (40, 40) )
h = ((r-20)**2 + (c-20)**2 < 20**2)
h = ia.normalize(h)
nb.nbshow(h, 'nucleo')
g = ia.conv(f,h)
nb.nbshow(ia.normalize(g), 'resposta ao impulso')
nb.nbshow()
Explanation: Impulse response
When the image consists of a single pixel with value 1, the result of the convolution is the convolution kernel itself. This property lets you visualize the kernel: if some software applies a linear, translation-invariant filter and you do not know its kernel, just apply the filter to an image with a single pixel equal to 1 and the output will reveal the kernel. In the illustration below, an image with several impulses is created; after convolving it with an arbitrary filter, the kernel can be seen repeated at each impulse location.
End of explanation
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
os.chdir('../data')
f = mpimg.imread('cameraman.tif')
h1 = np.ones((1,10))
h2 = np.ones((10,1))
h = ia.conv(h1,h2)
print('Nucleo original h=\n',h)
print('\nTempo de processamento 10 x 10:')
%timeit ia.conv(f,h)
f2 = ia.conv(f,h1)
print('\nNucleo decomposto\nh1=\n',h1,'\nh2=\n',h2)
print('\nTempo de processamento 10 horizontal e 10 vertical:')
%timeit ia.conv(f,h1), ia.conv(f2,h2)
Explanation: Decomposition
The associativity property of convolution is given by:
\begin{align}
f \ast h_{eq} = f \ast (h_1 \ast h_2) = (f \ast h_1) \ast h_2
\end{align}
If we can decompose a kernel into the convolution of two simpler kernels, this property yields a computational gain when the convolution is applied with each kernel separately. Below, this is illustrated with the kernel that sums the pixels in a square window 10 pixels on a side: convolving with the 10 x 10 square takes 100 operations, whereas decomposing it into a 10-pixel row and a 10-pixel column takes 10 operations each, 20 in total. Note the difference in processing time between the two cases.
End of explanation
f1 = ia.conv(f,h)
f3= ia.conv(f2,h2)
nb.nbshow(ia.normalize(f1), 'filtragem pela soma na janela 10x10 (f1)')
nb.nbshow(ia.normalize(f3), 'filtragem pela soma na janela 10 horizontal e 10 vertical separadas (f3)')
nb.nbshow()
print('f1 é igual f3?\nMaxima diferença entre f1 e f3:', np.max(np.abs(f1-f3)) )
Explanation: Note the large difference in execution time between the original kernel and the separated kernels
Visualizing the images:
End of explanation |
2,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow IO Authors.
Step1: BigQuery TensorFlow 리더의 엔드 투 엔드 예제
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 인증합니다.
Step3: 프로젝트 ID를 설정합니다.
Step4: Python 라이브러리를 가져오고 상수를 정의합니다.
Step5: BigQuery로 인구 조사 데이터 가져오기
BigQuery에 데이터를 로드하는 도우미 메서드를 정의합니다.
Step6: BigQuery에서 인구 조사 데이터를 로드합니다.
Step7: 가져온 데이터를 확인합니다.
수행할 작업
Step8: BigQuery 리더를 사용하여 TensorFlow DataSet에 인구 조사 데이터 로드하기
BigQuery에서 인구 조사 데이터를 읽고 TensorFlow DataSet로 변환합니다.
Step9: 특성 열 정의하기
Step10: 모델 빌드 및 훈련하기
모델을 빌드합니다.
Step11: 모델을 훈련합니다.
Step12: 모델 평가하기
모델을 평가합니다.
Step13: 몇 가지 무작위 샘플을 평가합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow IO Authors.
End of explanation
try:
# Use the Colab's preinstalled TensorFlow 2.x
%tensorflow_version 2.x
except:
pass
!pip install fastavro
!pip install tensorflow-io==0.9.0
!pip install google-cloud-bigquery-storage
Explanation: End-to-end example for the BigQuery TensorFlow reader
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/bigquery"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/bigquery.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/bigquery.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/io/tutorials/bigquery.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
Overview
This guide shows how to train a neural network with the Keras sequential API using the BigQuery TensorFlow reader.
Dataset
This tutorial uses the United States Census Income dataset provided by the UC Irvine Machine Learning Repository. It contains information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year.
Setup
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
Enable the BigQuery Storage API.
Enter your project ID in the cell below, then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands and interpolates Python variables prefixed with $ into these commands.
필수 패키지를 설치하고 런타임을 다시 시작합니다.
End of explanation
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
Explanation: Authenticate.
End of explanation
PROJECT_ID = "<YOUR PROJECT>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
%env GCLOUD_PROJECT=$PROJECT_ID
Explanation: Set your project ID.
End of explanation
from __future__ import absolute_import, division, print_function, unicode_literals
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from google.api_core.exceptions import GoogleAPIError
LOCATION = 'us'
# Storage directory
DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')
# Download options.
DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ml-engine/census/data'
TRAINING_FILE = 'adult.data.csv'
EVAL_FILE = 'adult.test.csv'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)
DATASET_ID = 'census_dataset'
TRAINING_TABLE_ID = 'census_training_table'
EVAL_TABLE_ID = 'census_eval_table'
CSV_SCHEMA = [
bigquery.SchemaField("age", "FLOAT64"),
bigquery.SchemaField("workclass", "STRING"),
bigquery.SchemaField("fnlwgt", "FLOAT64"),
bigquery.SchemaField("education", "STRING"),
bigquery.SchemaField("education_num", "FLOAT64"),
bigquery.SchemaField("marital_status", "STRING"),
bigquery.SchemaField("occupation", "STRING"),
bigquery.SchemaField("relationship", "STRING"),
bigquery.SchemaField("race", "STRING"),
bigquery.SchemaField("gender", "STRING"),
bigquery.SchemaField("capital_gain", "FLOAT64"),
bigquery.SchemaField("capital_loss", "FLOAT64"),
bigquery.SchemaField("hours_per_week", "FLOAT64"),
bigquery.SchemaField("native_country", "STRING"),
bigquery.SchemaField("income_bracket", "STRING"),
]
UNUSED_COLUMNS = ["fnlwgt", "education_num"]
Explanation: Import Python libraries and define constants.
End of explanation
def create_bigquery_dataset_if_necessary(dataset_id):
# Construct a full Dataset object to send to the API.
client = bigquery.Client(project=PROJECT_ID)
dataset = bigquery.Dataset(bigquery.dataset.DatasetReference(PROJECT_ID, dataset_id))
dataset.location = LOCATION
try:
dataset = client.create_dataset(dataset) # API request
return True
except GoogleAPIError as err:
if err.code != 409: # http_client.CONFLICT
raise
return False
def load_data_into_bigquery(url, table_id):
create_bigquery_dataset_if_necessary(DATASET_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.source_format = bigquery.SourceFormat.CSV
job_config.schema = CSV_SCHEMA
load_job = client.load_table_from_uri(
url, table_ref, job_config=job_config
)
print("Starting job {}".format(load_job.job_id))
load_job.result() # Waits for table load to complete.
print("Job finished.")
destination_table = client.get_table(table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))
Explanation: Import census data into BigQuery
Define helper methods to load the data into BigQuery.
End of explanation
load_data_into_bigquery(TRAINING_URL, TRAINING_TABLE_ID)
load_data_into_bigquery(EVAL_URL, EVAL_TABLE_ID)
Explanation: Load the census data into BigQuery.
End of explanation
%%bigquery --use_bqstorage_api
SELECT * FROM `<YOUR PROJECT>.census_dataset.census_training_table` LIMIT 5
Explanation: Confirm the imported data
수행할 작업: <YOUR PROJECT>를 PROJECT_ID로 바꿉니다.
참고: --use_bqstorage_api는 BigQueryStorage API를 사용하여 데이터를 가져오고 사용 권한이 있는지 확인합니다. 프로젝트에 이 부분이 활성화되어 있는지 확인합니다(https://cloud.google.com/bigquery/docs/reference/storage/#enabling_the_api).
End of explanation
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def transofrom_row(row_dict):
# Trim all string tensors
trimmed_dict = { column:
(tf.strings.strip(tensor) if tensor.dtype == 'string' else tensor)
for (column,tensor) in row_dict.items()
}
# Extract feature column
income_bracket = trimmed_dict.pop('income_bracket')
# Convert feature column to 0.0/1.0
income_bracket_float = tf.cond(tf.equal(tf.strings.strip(income_bracket), '>50K'),
lambda: tf.constant(1.0),
lambda: tf.constant(0.0))
return (trimmed_dict, income_bracket_float)
def read_bigquery(table_name):
tensorflow_io_bigquery_client = BigQueryClient()
read_session = tensorflow_io_bigquery_client.read_session(
"projects/" + PROJECT_ID,
PROJECT_ID, table_name, DATASET_ID,
list(field.name for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
list(dtypes.double if field.field_type == 'FLOAT64'
else dtypes.string for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
requested_streams=2)
dataset = read_session.parallel_read_rows()
transformed_ds = dataset.map (transofrom_row)
return transformed_ds
BATCH_SIZE = 32
training_ds = read_bigquery(TRAINING_TABLE_ID).shuffle(10000).batch(BATCH_SIZE)
eval_ds = read_bigquery(EVAL_TABLE_ID).batch(BATCH_SIZE)
Explanation: Load census data into a TensorFlow Dataset using the BigQuery reader
Read the census data from BigQuery and convert it into a TensorFlow Dataset.
End of explanation
def get_categorical_feature_values(column):
query = 'SELECT DISTINCT TRIM({}) FROM `{}`.{}.{}'.format(column, PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
job_config = bigquery.QueryJobConfig()
query_job = client.query(query, job_config=job_config)
result = query_job.to_dataframe()
return result.values[:,0]
from tensorflow import feature_column
feature_columns = []
# numeric cols
for header in ['capital_gain', 'capital_loss', 'hours_per_week']:
feature_columns.append(feature_column.numeric_column(header))
# categorical cols
for header in ['workclass', 'marital_status', 'occupation', 'relationship',
'race', 'native_country', 'education']:
categorical_feature = feature_column.categorical_column_with_vocabulary_list(
header, get_categorical_feature_values(header))
categorical_feature_one_hot = feature_column.indicator_column(categorical_feature)
feature_columns.append(categorical_feature_one_hot)
# bucketized cols
age = feature_column.numeric_column('age')
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
Explanation: Define feature columns
End of explanation
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
[
feature_layer,
Dense(100, activation=tf.nn.relu, kernel_initializer='uniform'),
Dense(75, activation=tf.nn.relu),
Dense(50, activation=tf.nn.relu),
Dense(25, activation=tf.nn.relu),
Dense(1, activation=tf.nn.sigmoid)
])
# Compile Keras model
model.compile(
loss='binary_crossentropy',
metrics=['accuracy'])
Explanation: Build and train the model
Build the model.
End of explanation
model.fit(training_ds, epochs=5)
Explanation: Train the model.
End of explanation
loss, accuracy = model.evaluate(eval_ds)
print("Accuracy", accuracy)
Explanation: Evaluate the model
Evaluate the model.
End of explanation
sample_x = {
'age' : np.array([56, 36]),
'workclass': np.array(['Local-gov', 'Private']),
'education': np.array(['Bachelors', 'Bachelors']),
'marital_status': np.array(['Married-civ-spouse', 'Married-civ-spouse']),
'occupation': np.array(['Tech-support', 'Other-service']),
'relationship': np.array(['Husband', 'Husband']),
'race': np.array(['White', 'Black']),
'gender': np.array(['Male', 'Male']),
'capital_gain': np.array([0, 7298]),
'capital_loss': np.array([0, 0]),
'hours_per_week': np.array([40, 36]),
'native_country': np.array(['United-States', 'United-States'])
}
model.predict(sample_x)
Explanation: Evaluate a couple of random samples.
End of explanation |
2,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modules, Imports and Packages
Dr. Chris Gwilliams
[email protected]
Python Modules
We have seen that there are many things one can do using Python, but this barely touches the surface.
Python uses modules (a.ka. libraries) to extend the basic functionality and we have 3 ways of doing this
Step1: Exercise
Using the dir method (and the documentation), import the random module and generate a random number between 42 and 749.
Step2: Modules and Packages
A Python module is just a .py file, which you can import directly.
import config (relates to config.py somewhere on your system)
A package is a collection of Python modules that you can import all of, or just import the modules you want. For example
Step3: Installing packages
Ever used aptitude or yum on Linux? These are package managers that allow you to extend the functionality of the system you are using. Python has these, in the form of pip and easy_install.
Exercise
Which one of these should you use? (Cite your sources)
| pip | easy_install |
|----------------------------------------|--------------------------------------------------------------------------------|
| actively maintained | partially maintained |
| part of core python distribution | support for version control |
| packages downloaded and then installed | packages downloaded and installed asynchronously |
| allows uninstall | does not provide uninstall functionality |
| automated installing of requirements | if an install fails, it may not fail cleanly and leave your environment broken |
pip
The recommended tool for installing Python packages.
Stands for Pip Installs Packages. Packages can be found on PyPi
Usage | Python Code:
import random
dir(random)
Explanation: Modules, Imports and Packages
Dr. Chris Gwilliams
[email protected]
Python Modules
We have seen that there are many things one can do using Python, but this barely touches the surface.
Python uses modules (a.ka. libraries) to extend the basic functionality and we have 3 ways of doing this:
Standard Modules
These are modules built into the Python Standard Library, similar to the built in functions (type, len) that we have been using.
The majority of them are listed here
Exercise
Follow the link in the previous slide and find the documentation for the random module.
External Modules
These are libraries, written by developers (like you), that extend the functionality of Python. They do things like:
- Web Scraping
- Network Visualisation
- Neural Networks
- Gaming
We will cover these a bit later in the course.
Local Modules
These are .py within your file system and we will look at these later on in the session.
DO NOT EVER SAVE A SCRIPT WITH THE SAME NAME AS A MODULE YOU USE
import statements
A Python script is, typically, made up of three things at the high level:
import - modules you can use within your code
Executable code - the code you have written
Comments - ignored by the interpreter
End of explanation
import random #import section
print(random.randrange(42,749)) #code section
from random import randrange
print(randrange(10,300))
Explanation: Exercise
Using the dir method (and the documentation), import the random module and generate a random number between 42 and 749.
End of explanation
import city
print(city.name)
print("This city has {0} people".format(city.pop))
Explanation: Modules and Packages
A Python module is just a .py file, which you can import directly.
import config (relates to config.py somewhere on your system)
A package is a collection of Python modules that you can import all of, or just import the modules you want. For example:
import random (all modules in random package)
from random import randint (importing module from packages)
Import dos and don'ts
You can (and will) import many modules in one script, PEP asks that you follow this structure:
python
import standard_library_modules
import external_library_modules
import local_modules
More info on this and other styles can be found here
You will also see that some people will import multiple modules in one line:
import os, sys, csv, math, random
Do not do this, it makes your code hard to read and modularise
However, it is good to import multiple modules from the same package in one line:
from random import randrange, randint
Writing Your Own Local Modules
Exercise
Create a file and call it city.py.
Put some variables in there that describe a city of your choice (size, population, country etc)
Now, create a file and call it main.py.
Import your city.py file and print out the city information with formatted strings.
https://gitlab.cs.cf.ac.uk/scm7cg/cm6111_python_modules/tree/master
End of explanation
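# A possible city.py for the exercise above -- just a sketch, the values are invented:
name = "Cardiff"
country = "Wales"
pop = 360000
# main.py would then contain something like:
# import city
# print("{0}, {1}: {2} people".format(city.name, city.country, city.pop))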
import sys #system package to read command line args
print(len(sys.argv))
print(sys.argv)
Explanation: Installing packages
Ever used aptitude or yum on Linux? These are package managers that allow you to extend the functionality of the system you are using. Python has these, in the form of pip and easy_install.
Exercise
Which one of these should you use? (Cite your sources)
| pip | easy_install |
|----------------------------------------|--------------------------------------------------------------------------------|
| actively maintained | partially maintained |
| part of core python distribution | support for version control |
| packages downloaded and then installed | packages downloaded and installed asynchronously |
| allows uninstall | does not provide uninstall functionality |
| automated installing of requirements | if an install fails, it may not fail cleanly and leave your environment broken |
pip
The recommended tool for installing Python packages.
Stands for Pip Installs Packages. Packages can be found on PyPi
Usage:
pip install <package-name>
Note: Some packages will require administrator privilges to be installed.
Exercise
Install a package called blessings
Find the documentation on PyPi
Write a script that uses blessings
Make the script print Sup, World in bold
List 3 commands you can run with pip
Virtual Environments (virtual env)
Picture this: You are given a project to work on in a team. You install some packages, write some code and push it to git. Your team-mates say they cannot get it to run.
Why can they not get it to run?
Modules not installed
Modules won't install
Different operating system
Different Python version
Any number of these and more.
VirtualEnv
pip installs packages globally by default. This means your Python code is always affected by the current state of your system. If you upgrade a package to the latest version that breaks what you are working on, it will also break every other project that uses that system.
VirtualEnv aims to address this. This package creates an isolated Python environment in a directory with your name of choice.
From here you can, specify Python versions, install packages and run code.
virtualenv <environment_name>
That is how you get started. Do not type this yet!
What does virtualenv <env> do?
Installs an isolated Python environment in a directory named after your env variable. All scripts are put into the bin folder, like so:
Exercise
Create a virtual environment, called 'comp_thinking'
Activating your virtual environment
Unix: source bin/activate
Windows: \Scripts\activate
This adds the scripts in bin to your PATH, so they are executed when you run pip or python.
You can also call the scripts directly:
bin/python <script>.py
Exercise
Activate your environment!
Deactivating an environment
Guess...
deactivate
It is that simple. If you do not want to use the environment again, then you can simply delete the folder.
Exercise
You guessed it, deactivate your environment!
Exercise
Create a new virtual environment, call it test
Activate it (or just use the scripts)
Install the terminaltables package
Write a script to read details about the user (favourite movie/game, age, height etc)
Use the documentation to print an ASCII table of these data
Command Line Arguments
So, we can output with print and we can input with input. But...input relies on user interaction as the script runs. Here, we can use arguments in the command line to act as our input.
python <script>.py argument some_other_argument
NOTE: This is only a brief intro to using arguments and we will come back to these as the course progresses.
End of explanation |
2,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graph format
The EDeN library allows the vectorization of graphs, i.e. the transformation of graphs into sparse vectors.
The graphs that can be processed by the EDeN library have the following restrictions
Step1: Build graphs and then display them
Step2: Create a vector representation
Step3: Compute pairwise similarity matrix | Python Code:
%matplotlib inline
import pylab as plt
import networkx as nx
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
from eden.util import display
print display.serialize_graph(G)
from eden.util import display
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label=[0,0,.1])
G.add_node(1, label=[0,.1,0])
G.add_node(2, label=[.1,0,0])
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':2, 'C':3})
G.add_node(1, label={'A':1, 'B':2, 'D':3})
G.add_node(2, label={'A':1, 'D':2, 'E':3})
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_node(3, label='D')
G.add_node(4, label='E')
G.add_node(5, label='F')
G.add_edge(0,1, label='x')
G.add_edge(0,2, label='y')
G.add_edge(1,3, label='z', nesting=True, weight=.5)
G.add_edge(0,3, label='z', nesting=True, weight=.1)
G.add_edge(2,3, label='z', nesting=True, weight=.01)
G.add_edge(3,4, label='k')
G.add_edge(3,5, label='j')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3, prog='circo')
from eden.graph import Vectorizer
X=Vectorizer(2).transform_single(G)
from eden.util import describe
print describe(X)
print X
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_node(3, label='D')
G.add_node(4, label='E')
G.add_node(5, label='F')
G.add_edge(0,1, label='x')
G.add_edge(0,2, label='y')
G.add_edge(1,3, label='z', nesting=True)
G.add_edge(0,3, label='z', nesting=True)
G.add_edge(2,3, label='z', nesting=True)
G.add_edge(3,4, label='k')
G.add_edge(3,5, label='j')
from eden.graph import Vectorizer
X=Vectorizer(2).transform_single(G)
from eden.util import describe
print describe(X)
print X
Explanation: Graph format
The EDeN library allows the vectorization of graphs, i.e. the transformation of graphs into sparse vectors.
The graphs that can be processed by the EDeN library have the following restrictions:
- the graphs are implemented as networkx graphs
- nodes and edges have identifiers: the following identifiers are used as reserved words
1. label
2. weight
3. entity
4. nesting
nodes and edges must have the 'label' attribute
the 'label' attribute can be of one of the following types:
string
vector
dictionary
strings are used to represent categorical values;
dictionaries are used to represent sparse vectors: keys are of string type and values are of type float
- nodes and edges can have a 'weight' attribute of type float
- nodes can have a 'entity' attribute of type string
- nesting edges must have a 'nesting' attribute of type boolean set to True
End of explanation
import networkx as nx
graph_list = []
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='C', entity='CATEG')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='x', entity='CATEG_EDGE')
G.add_edge(1,2, label='x', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='X', entity='CATEG')
G.add_node(1, label='X', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='x', entity='CATEG_EDGE')
G.add_edge(1,2, label='x', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,0,0], entity='VEC')
G.add_node(1, label=[0,1,0], entity='VEC')
G.add_node(2, label=[0,0,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,1,0], entity='VEC')
G.add_node(1, label=[0,1,1], entity='VEC')
G.add_node(2, label=[0,0,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,0.1,0.2], entity='VEC')
G.add_node(1, label=[0.3,1,0.4], entity='VEC')
G.add_node(2, label=[0.5,0.6,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[0.1,0.2,0.3], entity='VEC')
G.add_node(1, label=[0.4,0.5,0.6], entity='VEC')
G.add_node(2, label=[0.7,0.8,0.9], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':1, 'C':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'B':1, 'C':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'b':1, 'C':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'B':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'b':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
from eden.util import display
for g in graph_list:
display.draw_graph(g, size=5, node_size=800, node_border=1, layout='shell', secondary_vertex_label = 'entity')
Explanation: Build graphs and then display them
End of explanation
%%time
from eden.graph import Vectorizer
vectorizer = Vectorizer(complexity=2, n=4)
vectorizer.fit(graph_list)
X = vectorizer.transform(graph_list)
y=[1]*4+[2]*4+[3]*4
print 'Instances: %d \nFeatures: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
opts={'knn': 3, 'metric': 'rbf', 'k_threshold': 0.7, 'gamma': 1e-2}
from eden.embedding import display_embedding, embedding_quality
print 'Embedding quality [adjusted Rand index]: %.2f data: %s #classes: %d' % (embedding_quality(X, y, opts), X.shape, len(set(y)))
display_embedding(X,y, opts)
Explanation: Create a vector representation
End of explanation
from ipy_table import *
def prep_table(K):
header = [' ']
header += [i for i in range(K.shape[0])]
mat = [header]
for id, row in enumerate(K):
new_row = [id]
new_row += list(row)
mat.append(new_row)
return mat
from sklearn import metrics
K=metrics.pairwise.pairwise_kernels(X, metric='linear')
mat=prep_table(K)
make_table(mat)
apply_theme('basic')
set_global_style(float_format = '%0.2f')
Explanation: Compute pairwise similarity matrix
End of explanation |
2,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Taylor series expansion for the trigonometric function $\sin{x}$ around the point $a=0$ (also known as the Maclaurin series in this case) is given by
Step1: The factorial generator function returns an iterable, and we can slice it to obain the factorials of the first 10 non-negative integers.
Step2: In like manner, we define a generator function which yields an infinite sequence of the terms in the Maclaurin series expansion of $\sin{x}$
Step3: We can inspect the first 10 terms
Step4: Note that it is trivial to generalize this to a Taylor series expansion, by supporting an optional keyword argument a=0. and subtracting it from x as the first operation in the function. We omit this here for the sake of simplicity.
Using islice and our generator, we can implement the sine function with an option to specify how many terms to use in the approximation
Step5: Now we can plot our Taylor polynomial approximation of increasing degrees against the NumPy implementation of $\sin{x}$. Before we do that, it will come in handy later if we first vectorize our function to accept array inputs
Step6: The Taylor polynomial of 23 degrees appears to be making a decent approximation. Where we fall short with our implementation is that it is difficult to know what the approximation error is, let alone know upfront how many terms are actually required to obtain a good approximation.
What we really want is to continue adding terms to our approximation until the absolute value of the next term falls below some tolerable error threshold, since the error in the approximation will be no greater than the value of that term.
For example, if we use 5 terms to approximate $\sin{x}$
Step7: We can use the higher-order function takewhile to continuously obtain more terms to use in our approximation until the value of the term falls below some threshold. Here we take terms from the generator until it falls below $1 \times 10^{-20}$
Step8: Now we can define a sine function with an option to specify the maximum error that we can tolerate
Step9: Again, we vectorize it to accept array inputs
Step10: As a sanity check, we can ensure that given an array of $x$ values around a neighborhood of 0 as input, our implementation produces outputs that are element-wise equal to that of the NumPy implementation, within a certain tolerance.
Step11: When we plot the error, it is of no surprise that it increases exponentially as we get further away from 0.
Step12: To wrap this up, we provide a recursive implementation of our generator function
Step13: This is still considered a bottom-up approach, as we are still computing the current terms using the results of the previous terms, and no memoization is necessary. It is interesting to see recursion used in a generator function, and the use of the new yield from expression, introduced in Python 3.3, which delegates part of its yield operations to another generator.
Now, it is trivial to adapt this implementation to $\cos{x}$. In fact, the body of the for-loop remains the same, the only change required is in the initial values of curr and n. | Python Code:
def factorial():
a = b = 1
while True:
yield a
a *= b
b += 1
Explanation: The Taylor series expansion for the trigonometric function $\sin{x}$ around the point $a=0$ (also known as the Maclaurin series in this case) is given by:
$$
\sin{x} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dotsb \text{ for all } x
$$
The $k$th term of the expansion is given by
$$
\frac{(-1)^k}{(2k+1)!} x^{2k+1}
$$
It is easy to evaluate this closed-form expression directly. However, it is more elegant and indeed more efficient to compute the terms bottom-up, by iteratively calculating the next term using the value of the previous term. This is just like computing factorials or a sequence of Fibonacci numbers using the bottom-up approach in dynamic programming.
<!-- TEASER_END -->
For example, the following generator function yields an infinite sequence of factorials. It uses the value of the previous number in the sequence to compute the subsequent numbers.
End of explanation
list(islice(factorial(), 10))
Explanation: The factorial generator function returns an iterable, and we can slice it to obain the factorials of the first 10 non-negative integers.
End of explanation
def sin_terms(x):
curr = x
for n in count(2, 2):
yield curr
curr *= -x**2
curr /= n*(n+1)
Explanation: In like manner, we define a generator function which yields an infinite sequence of the terms in the Maclaurin series expansion of $\sin{x}$:
End of explanation
list(islice(sin_terms(np.pi), 10))
Explanation: We can inspect the first 10 terms:
End of explanation
sin1 = lambda x, terms=50: sum(islice(sin_terms(x), terms))
sin1(.5*np.pi)
sin1(0)
sin1(np.pi)
sin1(-.5*np.pi)
Explanation: Note that it is trivial to generalize this to a Taylor series expansion, by supporting an optional keyword argument a=0. and subtracting it from x as the first operation in the function. We omit this here for the sake of simplicity.
Using islice and our generator, we can implement the sine function with an option to specify how many terms to use in the approximation:
End of explanation
sin1 = np.vectorize(sin1, excluded=['terms'])
x = np.linspace(-3*np.pi, 3*np.pi, 100)
fig, ax = plt.subplots(figsize=(8, 6))
ax.grid(True)
ax.set_ylim((-1.25, 1.25))
ax.plot(x, np.sin(x), label='$\sin{(x)}$')
for t in range(4, 20, 4):
ax.plot(x, sin1(x, terms=t),
label='$T_{{{degree}}}(x)$'.format(degree=2*t-1))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
Explanation: Now we can plot our Taylor polynomial approximation of increasing degrees against the NumPy implementation of $\sin{x}$. Before we do that, it will come in handy later if we first vectorize our function to accept array inputs:
End of explanation
fac = lambda n: next(islice(factorial(), n, n+1))
np.max(np.abs(np.linspace(-1, 1, 100))**11/fac(11))
Explanation: The Taylor polynomial of 23 degrees appears to be making a decent approximation. Where we fall short with our implementation is that it is difficult to know what the approximation error is, let alone know upfront how many terms are actually required to obtain a good approximation.
What we really want is to continue adding terms to our approximation until the absolute value of the next term falls below some tolerable error threshold, since the error in the approximation will be no greater than the value of that term.
For example, if we use 5 terms to approximate $\sin{x}$:
$$
\sin{x} \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!}
$$
The error in this approximation is no greater than $\frac{{\mid x \mid}^{11}}{11!}$. In fact, for $x \in (-1, 1)$, the error is no greater than $2.6 \times 10^{-8}$:
End of explanation
list(takewhile(lambda t: np.abs(t) > 1e-20, sin_terms(np.pi)))
Explanation: We can use the higher-order function takewhile to continuously obtain more terms to use in our approximation until the value of the term falls below some threshold. Here we take terms from the generator until it falls below $1 \times 10^{-20}$:
End of explanation
sin2 = lambda x, max_tol=1e-20: \
sum(takewhile(lambda t: np.abs(t) > max_tol, sin_terms(x)))
sin2(.5*np.pi, max_tol=1e-15)
Explanation: Now we can define a sine function with an option to specify the maximum error that we can tolerate:
End of explanation
sin2 = np.vectorize(sin2, excluded=['max_tol'])
Explanation: Again, we vectorize it to accept array inputs:
End of explanation
x = np.linspace(-5*np.pi, 5*np.pi, 100)
np.allclose(np.sin(x), sin2(x))
Explanation: As a sanity check, we can ensure that given an array of $x$ values around a neighborhood of 0 as input, our implementation produces outputs that are element-wise equal to that of the NumPy implementation, within a certain tolerance.
End of explanation
fig, ax = plt.subplots(figsize=(8, 6))
ax.bar(x, (np.sin(x)-sin2(x))**2, width=.1, log=True)
ax.set_xlabel('$x$')
ax.set_ylabel('error')
plt.show()
Explanation: When we plot the error, it is of no surprise that it increases exponentially as we get further away from 0.
End of explanation
def sin_terms(x, curr=None, n=2):
if curr is None:
curr = x
yield curr
yield from sin_terms(x, -curr*x**2/(n*(n+1)), n+2)
Explanation: To wrap this up, we provide a recursive implementation of our generator function:
End of explanation
def cos_terms(x):
curr = 1
for n in count(1, 2):
yield curr
curr *= -x**2
curr /= n*(n+1)
def cos_terms(x, curr=None, n=1):
if curr is None:
curr = 1
yield curr
yield from cos_terms(x, -curr*x**2/(n*(n+1)), n+2)
Explanation: This is still considered a bottom-up approach, as we are still computing the current terms using the results of the previous terms, and no memoization is necessary. It is interesting to see recursion used in a generator function, and the use of the new yield from expression, introduced in Python 3.3, which delegates part of its yield operations to another generator.
Now, it is trivial to adapt this implementation to $\cos{x}$. In fact, the body of the for-loop remains the same, the only change required is in the initial values of curr and n.
End of explanation |
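# Following the same pattern as sin2 above, a minimal sketch of a cosine
# function built on cos_terms (the name cos2 is introduced here purely for
# illustration -- it is not part of the original notebook):
cos2 = np.vectorize(lambda x, max_tol=1e-20:
                    sum(takewhile(lambda t: np.abs(t) > max_tol, cos_terms(x))),
                    excluded=['max_tol'])
np.allclose(np.cos(np.linspace(-np.pi, np.pi, 100)),
            cos2(np.linspace(-np.pi, np.pi, 100)))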
2,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ARCH and GARCH Models
By Delaney Granizo-Mackenzie and Andrei Kirilenko.
This notebook developed in collaboration with Prof. Andrei Kirilenko as part of the Masters of Finance curriculum at MIT Sloan.
Part of the Quantopian Lecture Series
Step1: Simulating a GARCH(1, 1) Case
We'll start by using Monte Carlo sampling to simulate a GARCH(1, 1) process. Our dynamics will be
$$\sigma_1 = \sqrt{\frac{a_0}{1-a_1-b_1}}$$
$$\sigma_t^2 = a_0 + a_1 x_{t-1}^2+b_1 \sigma_{t-1}^2$$
$$x_t = \sigma_t \epsilon_t$$
$$\epsilon \sim \mathcal{N}(0, 1)$$
Our parameters will be $a_0 = 1$, $a_1=0.1$, and $b_1=0.8$. We will drop the first 10% (burn-in) of our simulated values.
Step2: Now we'll compare the tails of the GARCH(1, 1) process with normally distributed values. We expect to see fatter tails, as the GARCH(1, 1) process will experience extreme values more often.
Step3: Sure enough, the tails of the GARCH(1, 1) process are fatter. We can also look at this graphically, although it's a little tricky to see.
Step4: What we're looking at here is the GARCH process in blue and the normal process in green. The 1 and 3 std bars are drawn on the plot. We can see that the blue GARCH process tends to cross the 3 std bar much more often than the green normal one.
Testing for ARCH Behavior
The first step is to test for ARCH conditions. To do this we run a regression on $x_t$ fitting the following model.
$$x_t^2 = a_0 + a_1 x_{t-1}^2 + \dots + a_p x_{t-p}^2$$
We use OLS to estimate $\hat\theta = (\hat a_0, \hat a_1, \dots, \hat a_p)$ and the covariance matrix $\hat\Omega$. We can then compute the test statistic
$$F = \hat\theta \hat\Omega^{-1} \hat\theta'$$
We will reject if $F$ is greater than the 95% confidence bars in the $\chi^2(p)$ distribution.
To test, we'll set $p=20$ and see what we get.
Step5: Fitting GARCH(1, 1) with MLE
Once we've decided that the data might have an underlying GARCH(1, 1) model, we would like to fit GARCH(1, 1) to the data by estimating parameters.
To do this we need the log-likelihood function
$$\mathcal{L}(\theta) = \sum_{t=1}^T - \ln \sqrt{2\pi} - \frac{x_t^2}{2\sigma_t^2} - \frac{1}{2}\ln(\sigma_t^2)$$
To evaluate this function we need $x_t$ and $\sigma_t$ for $1 \leq t \leq T$. We have $x_t$, but we need to compute $\sigma_t$. To do this we need to make a guess for $\sigma_1$. Our guess will be $\sigma_1^2 = \hat E[x_t^2]$. Once we have our initial guess we compute the rest of the $\sigma$'s using the equation
$$\sigma_t^2 = a_0 + a_1 x_{t-1}^2 + b_1\sigma_{t-1}^2$$
Step6: Let's look at the sigmas we just generated.
Step7: Now that we can compute the $\sigma_t$'s, we'll define the actual log likelihood function. This function will take as input our observations $x$ and $\theta$ and return $-\mathcal{L}(\theta)$. It is important to note that we return the negative log likelihood, as this way our numerical optimizer can minimize the function while maximizing the log likelihood.
Note that we are constantly re-computing the $\sigma_t$'s in this function.
Step8: Now we perform numerical optimization to find our estimate for
$$\hat\theta = \arg \max_{(a_0, a_1, b_1)}\mathcal{L}(\theta) = \arg \min_{(a_0, a_1, b_1)}-\mathcal{L}(\theta)$$
We have some constraints on this
$$a_1 \geq 0, b_1 \geq 0, a_1+b_1 < 1$$
Step9: Now we would like a way to check our estimate. We'll look at two things
Step10: GMM for Estimating GARCH(1, 1) Parameters
We've just computed an estimate using MLE, but we can also use Generalized Method of Moments (GMM) to estimate the GARCH(1, 1) parameters.
To do this we need to define our moments. We'll use 4.
1. The residual $\hat\epsilon_t = x_t / \hat\sigma_t$
2. The variance of the residual $\hat\epsilon_t^2$
3. The skew moment $\mu_3/\hat\sigma_t^3 = (\hat\epsilon_t - E[\hat\epsilon_t])^3 / \hat\sigma_t^3$
4. The kurtosis moment $\mu_4/\hat\sigma_t^4 = (\hat\epsilon_t - E[\hat\epsilon_t])^4 / \hat\sigma_t^4$
Step11: GMM now has three steps.
Start with $W$ as the identity matrix.
Estimate $\hat\theta_1$ by using numerical optimization to minimize
$$\min_{\theta \in \Theta} \left(\frac{1}{T} \sum_{t=1}^T g(x_t, \hat\theta)\right)' W \left(\frac{1}{T}\sum_{t=1}^T g(x_t, \hat\theta)\right)$$
Recompute $W$ based on the covariances of the estimated $\theta$. (Focus more on parameters with explanatory power)
$$\hat W_{i+1} = \left(\frac{1}{T}\sum_{t=1}^T g(x_t, \hat\theta_i)g(x_t, \hat\theta_i)'\right)^{-1}$$
Repeat until $|\hat\theta_{i+1} - \hat\theta_i| < \epsilon$ or we reach an iteration threshold.
Initialize $W$ and $T$ and define the objective function we need to minimize.
Step12: Now we're ready to the do the iterated minimization step.
Step13: Predicting the Future
Step14: Now we'll just sample values walking forward.
Step15: One should note that because we are moving foward using a random walk, this analysis is supposed to give us a sense of the magnitude of sigma and therefore the risk we could face. It is not supposed to accurately model future values of X. In practice you would probably want to use Monte Carlo sampling to generate thousands of future scenarios, and then look at the potential range of outputs. We'll try that now. Keep in mind that this is a fairly simplistic way of doing this analysis, and that better techniques, such as Bayesian cones, exist. | Python Code:
import cvxopt
from functools import partial
import math
import numpy as np
import scipy
from scipy import stats
import statsmodels as sm
from statsmodels.stats.stattools import jarque_bera
import matplotlib.pyplot as plt
Explanation: ARCH and GARCH Models
By Delaney Granizo-Mackenzie and Andrei Kirilenko.
This notebook developed in collaboration with Prof. Andrei Kirilenko as part of the Masters of Finance curriculum at MIT Sloan.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Autoregressive Conditional Heteroskedasticity (ARCH) occurs when the volatility of a time series is also autoregressive.
End of explanation
# Define parameters
a0 = 1.0
a1 = 0.1
b1 = 0.8
sigma1 = math.sqrt(a0 / (1 - a1 - b1))
def simulate_GARCH(T, a0, a1, b1, sigma1):
# Initialize our values
X = np.ndarray(T)
sigma = np.ndarray(T)
sigma[0] = sigma1
for t in range(1, T):
# Draw the next x_t
X[t - 1] = sigma[t - 1] * np.random.normal(0, 1)
# Draw the next sigma_t
sigma[t] = math.sqrt(a0 + b1 * sigma[t - 1]**2 + a1 * X[t - 1]**2)
X[T - 1] = sigma[T - 1] * np.random.normal(0, 1)
return X, sigma
Explanation: Simulating a GARCH(1, 1) Case
We'll start by using Monte Carlo sampling to simulate a GARCH(1, 1) process. Our dynamics will be
$$\sigma_1 = \sqrt{\frac{a_0}{1-a_1-b_1}}$$
$$\sigma_t^2 = a_0 + a_1 x_{t-1}^2+b_1 \sigma_{t-1}^2$$
$$x_t = \sigma_t \epsilon_t$$
$$\epsilon \sim \mathcal{N}(0, 1)$$
Our parameters will be $a_0 = 1$, $a_1=0.1$, and $b_1=0.8$. We will drop the first 10% (burn-in) of our simulated values.
End of explanation
X, _ = simulate_GARCH(10000, a0, a1, b1, sigma1)
X = X[1000:] # Drop burn in
X = X / np.std(X) # Normalize X
def compare_tails_to_normal(X):
# Define matrix to store comparisons
A = np.zeros((2,4))
for k in range(4):
A[0, k] = len(X[X > (k + 1)]) / float(len(X)) # Estimate tails of X
A[1, k] = 1 - stats.norm.cdf(k + 1) # Compare to Gaussian distribution
return A
compare_tails_to_normal(X)
Explanation: Now we'll compare the tails of the GARCH(1, 1) process with normally distributed values. We expect to see fatter tails, as the GARCH(1, 1) process will experience extreme values more often.
End of explanation
plt.hist(X, bins=50)
plt.xlabel('sigma')
plt.ylabel('observations');
# Sample values from a normal distribution
X2 = np.random.normal(0, 1, 9000)
both = np.matrix([X, X2])
# Plot both the GARCH and normal values
plt.plot(both.T, alpha=.7);
plt.axhline(X2.std(), color='yellow', linestyle='--')
plt.axhline(-X2.std(), color='yellow', linestyle='--')
plt.axhline(3*X2.std(), color='red', linestyle='--')
plt.axhline(-3*X2.std(), color='red', linestyle='--')
plt.xlabel('time')
plt.ylabel('sigma');
Explanation: Sure enough, the tails of the GARCH(1, 1) process are fatter. We can also look at this graphically, although it's a little tricky to see.
End of explanation
X, _ = simulate_GARCH(1100, a0, a1, b1, sigma1)
X = X[100:] # Drop burn in
p = 20
# Drop the first 20 so we have a lag of p's
Y2 = (X**2)[p:]
X2 = np.ndarray((980, p))
for i in range(p, 1000):
X2[i - p, :] = np.asarray((X**2)[i-p:i])[::-1]
model = sm.regression.linear_model.OLS(Y2, X2)
model = model.fit()
theta = np.matrix(model.params)
omega = np.matrix(model.cov_HC0)
F = np.asscalar(theta * np.linalg.inv(omega) * theta.T)
print np.asarray(theta.T).shape
plt.plot(range(20), np.asarray(theta.T))
plt.xlabel('Lag Amount')
plt.ylabel('Estimated Coefficient for Lagged Datapoint')
print 'F = ' + str(F)
chi2dist = scipy.stats.chi2(p)
pvalue = 1-chi2dist.cdf(F)
print 'p-value = ' + str(pvalue)
# Finally let's look at the significance of each a_p as measured by the standard deviations away from 0
print theta/np.diag(omega)
Explanation: What we're looking at here is the GARCH process in blue and the normal process in green. The 1 and 3 std bars are drawn on the plot. We can see that the blue GARCH process tends to cross the 3 std bar much more often than the green normal one.
Testing for ARCH Behavior
The first step is to test for ARCH conditions. To do this we run a regression on $x_t$ fitting the following model.
$$x_t^2 = a_0 + a_1 x_{t-1}^2 + \dots + a_p x_{t-p}^2$$
We use OLS to estimate $\hat\theta = (\hat a_0, \hat a_1, \dots, \hat a_p)$ and the covariance matrix $\hat\Omega$. We can then compute the test statistic
$$F = \hat\theta \hat\Omega^{-1} \hat\theta'$$
We will reject if $F$ is greater than the 95% confidence bars in the $\chi^2(p)$ distribution.
To test, we'll set $p=20$ and see what we get.
End of explanation
X, _ = simulate_GARCH(10000, a0, a1, b1, sigma1)
X = X[1000:] # Drop burn in
# Here's our function to compute the sigmas given the initial guess
def compute_squared_sigmas(X, initial_sigma, theta):
a0 = theta[0]
a1 = theta[1]
b1 = theta[2]
T = len(X)
sigma2 = np.ndarray(T)
sigma2[0] = initial_sigma ** 2
for t in range(1, T):
# Here's where we apply the equation
sigma2[t] = a0 + a1 * X[t-1]**2 + b1 * sigma2[t-1]
return sigma2
Explanation: Fitting GARCH(1, 1) with MLE
Once we've decided that the data might have an underlying GARCH(1, 1) model, we would like to fit GARCH(1, 1) to the data by estimating parameters.
To do this we need the log-likelihood function
$$\mathcal{L}(\theta) = \sum_{t=1}^T - \ln \sqrt{2\pi} - \frac{x_t^2}{2\sigma_t^2} - \frac{1}{2}\ln(\sigma_t^2)$$
To evaluate this function we need $x_t$ and $\sigma_t$ for $1 \leq t \leq T$. We have $x_t$, but we need to compute $\sigma_t$. To do this we need to make a guess for $\sigma_1$. Our guess will be $\sigma_1^2 = \hat E[x_t^2]$. Once we have our initial guess we compute the rest of the $\sigma$'s using the equation
$$\sigma_t^2 = a_0 + a_1 x_{t-1}^2 + b_1\sigma_{t-1}^2$$
End of explanation
plt.plot(range(len(X)), compute_squared_sigmas(X, np.sqrt(np.mean(X**2)), (1, 0.5, 0.5)))
plt.xlabel('Time')
plt.ylabel('Sigma');
Explanation: Let's look at the sigmas we just generated.
End of explanation
def negative_log_likelihood(X, theta):
T = len(X)
# Estimate initial sigma squared
initial_sigma = np.sqrt(np.mean(X ** 2))
# Generate the squared sigma values
sigma2 = compute_squared_sigmas(X, initial_sigma, theta)
# Now actually compute
return -sum(
[-np.log(np.sqrt(2.0 * np.pi)) -
(X[t] ** 2) / (2.0 * sigma2[t]) -
0.5 * np.log(sigma2[t]) for
t in range(T)]
)
Explanation: Now that we can compute the $\sigma_t$'s, we'll define the actual log likelihood function. This function will take as input our observations $x$ and $\theta$ and return $-\mathcal{L}(\theta)$. It is important to note that we return the negative log likelihood, as this way our numerical optimizer can minimize the function while maximizing the log likelihood.
Note that we are constantly re-computing the $\sigma_t$'s in this function.
End of explanation
# Make our objective function by plugging X into our log likelihood function
objective = partial(negative_log_likelihood, X)
# Define the constraints for our minimizer
def constraint1(theta):
return np.array([1 - (theta[1] + theta[2])])
def constraint2(theta):
return np.array([theta[1]])
def constraint3(theta):
return np.array([theta[2]])
cons = ({'type': 'ineq', 'fun': constraint1},
{'type': 'ineq', 'fun': constraint2},
{'type': 'ineq', 'fun': constraint3})
# Actually do the minimization
result = scipy.optimize.minimize(objective, (1, 0.5, 0.5),
method='SLSQP',
constraints = cons)
theta_mle = result.x
print 'theta MLE: ' + str(theta_mle)
Explanation: Now we perform numerical optimization to find our estimate for
$$\hat\theta = \arg \max_{(a_0, a_1, b_1)}\mathcal{L}(\theta) = \arg \min_{(a_0, a_1, b_1)}-\mathcal{L}(\theta)$$
We have some constraints on this
$$a_1 \geq 0, b_1 \geq 0, a_1+b_1 < 1$$
End of explanation
def check_theta_estimate(X, theta_estimate):
initial_sigma = np.sqrt(np.mean(X ** 2))
sigma = np.sqrt(compute_squared_sigmas(X, initial_sigma, theta_estimate))
epsilon = X / sigma
print 'Tails table'
print compare_tails_to_normal(epsilon / np.std(epsilon))
print ''
_, pvalue, _, _ = jarque_bera(epsilon)
print 'Jarque-Bera probability normal: ' + str(pvalue)
check_theta_estimate(X, theta_mle)
Explanation: Now we would like a way to check our estimate. We'll look at two things:
1. How fat are the tails of the residuals.
2. How normal are the residuals under the Jarque-Bera normality test.
We'll do both in our check_theta_estimate function.
End of explanation
# The n-th standardized moment
# skewness is 3, kurtosis is 4
def standardized_moment(x, mu, sigma, n):
return ((x - mu) ** n) / (sigma ** n)
Explanation: GMM for Estimating GARCH(1, 1) Parameters
We've just computed an estimate using MLE, but we can also use Generalized Method of Moments (GMM) to estimate the GARCH(1, 1) parameters.
To do this we need to define our moments. We'll use 4.
1. The residual $\hat\epsilon_t = x_t / \hat\sigma_t$
2. The variance of the residual $\hat\epsilon_t^2$
3. The skew moment $\mu_3/\hat\sigma_t^3 = (\hat\epsilon_t - E[\hat\epsilon_t])^3 / \hat\sigma_t^3$
4. The kurtosis moment $\mu_4/\hat\sigma_t^4 = (\hat\epsilon_t - E[\hat\epsilon_t])^4 / \hat\sigma_t^4$
End of explanation
def gmm_objective(X, W, theta):
# Compute the residuals for X and theta
initial_sigma = np.sqrt(np.mean(X ** 2))
sigma = np.sqrt(compute_squared_sigmas(X, initial_sigma, theta))
e = X / sigma
# Compute the mean moments
m1 = np.mean(e)
m2 = np.mean(e ** 2) - 1
m3 = np.mean(standardized_moment(e, np.mean(e), np.std(e), 3))
m4 = np.mean(standardized_moment(e, np.mean(e), np.std(e), 4) - 3)
G = np.matrix([m1, m2, m3, m4]).T
return np.asscalar(G.T * W * G)
def gmm_variance(X, theta):
# Compute the residuals for X and theta
initial_sigma = np.sqrt(np.mean(X ** 2))
sigma = np.sqrt(compute_squared_sigmas(X, initial_sigma, theta))
e = X / sigma
# Compute the squared moments
m1 = e ** 2
m2 = (e ** 2 - 1) ** 2
m3 = standardized_moment(e, np.mean(e), np.std(e), 3) ** 2
m4 = (standardized_moment(e, np.mean(e), np.std(e), 4) - 3) ** 2
# Compute the covariance matrix g * g'
T = len(X)
s = np.ndarray((4, 1))
for t in range(T):
G = np.matrix([m1[t], m2[t], m3[t], m4[t]]).T
s = s + G * G.T
return s / T
Explanation: GMM now has three steps.
Start with $W$ as the identity matrix.
Estimate $\hat\theta_1$ by using numerical optimization to minimize
$$\min_{\theta \in \Theta} \left(\frac{1}{T} \sum_{t=1}^T g(x_t, \hat\theta)\right)' W \left(\frac{1}{T}\sum_{t=1}^T g(x_t, \hat\theta)\right)$$
Recompute $W$ based on the covariances of the estimated $\theta$. (Focus more on parameters with explanatory power)
$$\hat W_{i+1} = \left(\frac{1}{T}\sum_{t=1}^T g(x_t, \hat\theta_i)g(x_t, \hat\theta_i)'\right)^{-1}$$
Repeat until $|\hat\theta_{i+1} - \hat\theta_i| < \epsilon$ or we reach an iteration threshold.
Initialize $W$ and $T$ and define the objective function we need to minimize.
End of explanation
# Initialize GMM parameters
W = np.identity(4)
gmm_iterations = 10
# First guess
theta_gmm_estimate = theta_mle
# Perform iterated GMM
for i in range(gmm_iterations):
# Estimate new theta
objective = partial(gmm_objective, X, W)
result = scipy.optimize.minimize(objective, theta_gmm_estimate, constraints=cons)
theta_gmm_estimate = result.x
print 'Iteration ' + str(i) + ' theta: ' + str(theta_gmm_estimate)
# Recompute W
W = np.linalg.inv(gmm_variance(X, theta_gmm_estimate))
check_theta_estimate(X, theta_gmm_estimate)
Explanation: Now we're ready to the do the iterated minimization step.
End of explanation
sigma_hats = np.sqrt(compute_squared_sigmas(X, np.sqrt(np.mean(X**2)), theta_mle))
initial_sigma = sigma_hats[-1]
initial_sigma
Explanation: Predicting the Future: How to actually use what we've done
Now that we've fitted a model to our observations, we'd like to be able to predict what the future volatility will look like. To do this, we can just simulate more values using our original GARCH dynamics and the estimated parameters.
The first thing we'll do is compute an initial $\sigma_t$. We'll compute our squared sigmas and take the last one.
End of explanation
a0_estimate = theta_gmm_estimate[0]
a1_estimate = theta_gmm_estimate[1]
b1_estimate = theta_gmm_estimate[2]
X_forecast, sigma_forecast = simulate_GARCH(100, a0_estimate, a1_estimate, b1_estimate, initial_sigma)
plt.plot(range(-100, 0), X[-100:], 'b-')
plt.plot(range(-100, 0), sigma_hats[-100:], 'r-')
plt.plot(range(0, 100), X_forecast, 'b--')
plt.plot(range(0, 100), sigma_forecast, 'r--')
plt.xlabel('Time')
plt.legend(['X', 'sigma']);
Explanation: Now we'll just sample values walking forward.
End of explanation
plt.plot(range(-100, 0), X[-100:], 'b-')
plt.plot(range(-100, 0), sigma_hats[-100:], 'r-')
plt.xlabel('Time')
plt.legend(['X', 'sigma'])
max_X = [-np.inf]
min_X = [np.inf]
for i in range(100):
X_forecast, sigma_forecast = simulate_GARCH(100, a0_estimate, a1_estimate, b1_estimate, initial_sigma)
if max(X_forecast) > max(max_X):
max_X = X_forecast
elif min(X_forecast) < min(max_X):
min_X = X_forecast
plt.plot(range(0, 100), X_forecast, 'b--', alpha=0.05)
plt.plot(range(0, 100), sigma_forecast, 'r--', alpha=0.05)
# Draw the most extreme X values specially
plt.plot(range(0, 100), max_X, 'g--', alpha=1.0)
plt.plot(range(0, 100), min_X, 'g--', alpha=1.0);
Explanation: One should note that because we are moving forward using a random walk, this analysis is supposed to give us a sense of the magnitude of sigma and therefore the risk we could face. It is not supposed to accurately model future values of X. In practice you would probably want to use Monte Carlo sampling to generate thousands of future scenarios, and then look at the potential range of outputs. We'll try that now. Keep in mind that this is a fairly simplistic way of doing this analysis, and that better techniques, such as Bayesian cones, exist.
End of explanation |
2,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Update BIOM file with data from STOQS
Given a .biom file and multiple STOQS databases, explore Next Generation Sequence and associated STOQS data
Executing this Notebook requires a personal STOQS server. Follow the steps to build your own development system — this will take a few hours and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands
Step1: Open a .biom file that contains sequence data from Net Tows conducted on these campaigns. (You will need to create the BIOM directory and copy the .biom file there as we generally do not keep data files in the STOQS git repository.)
Step2: Find all the VerticalNetTow Sample identifiers for all our SIMZ campaigns. (For hints on names to filter on use the STOQS REST api to explore Sample data for a campaign, e.g.
Step3: It looks as though the BIOM table ids (SIMZ1, SIMZ2, ...) correspond to the STOQS s.instantpoint.activity.names (simz2013c01_NetTow1, simz2013c02_NetTow1, ...). Let's loop through the STOQS sample names and BIOM file sample ids, extract relevant data from STOQS and populate a dictionary formatted for adding metadata back to the BIOM file.
Step4: Create a copy of the original table, add the new metadata to the samples and save to a new file name.
Step5: Compare the first two metadata records from the original table and the new table.
Step6: To test the results, upload the new file to http | Python Code:
from campaigns import campaigns
dbs = [c for c in campaigns if 'simz' in c]
print dbs
Explanation: Update BIOM file with data from STOQS
Given a .biom file and multiple STOQS databases, explore Next Generation Sequence and associated STOQS data
Executing this Notebook requires a personal STOQS server. Follow the steps to build your own development system — this will take a few hours and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands:
vagrant ssh -- -X
cd ~/dev/stoqsgit
source venv-stoqs/bin/activate
Then load all of the SIMZ databases with the commands below. In order to have all the subsample analysis data (Sampled Parameters) loaded it's necessary to have SIMZ<month><year> directories containing those .csv files. (See the subsample_csv_files attribute setting in the load script for the campaign.)
cd stoqs
ln -s mbari_campaigns.py campaigns.py
export DATABASE_URL=postgis://stoqsadm:[email protected]:5432/stoqs
loaders/load.py --db stoqs_simz_aug2013 stoqs_simz_oct2013 \
stoqs_simz_spring2014 stoqs_simz_jul2014 stoqs_simz_oct2014
loaders/load.py --db stoqs_simz_aug2013 stoqs_simz_oct2013 \
stoqs_simz_spring2014 stoqs_simz_jul2014 stoqs_simz_oct2014 --updateprovenance
Loading these databases will take a few hours. Once it's finished you can interact with the data quite efficiently, as this Notebook demonstrates. Launch Jupyter Notebook with:
cd contrib/notebooks
../../manage.py shell_plus --notebook
navigate to this file and open it. You will then be able to execute the cells and experiment.
Make a Python list of all SIMZ databases from the campaigns on our system.
End of explanation
biom_file = '../../loaders/MolecularEcology/BIOM/otu_table_newsiernounclass_wmetadata.biom'
from biom import load_table
table = load_table(biom_file)
print table.ids(axis='sample')
print table.ids(axis='observation')[:5]
Explanation: Open a .biom file that contains sequence data from Net Tows conducted on these campaigns. (You will need to create the BIOM directory and copy the .biom file there as we generally do not keep data files in the STOQS git repository.)
End of explanation
nettows = {}
for db in dbs:
for s in Sample.objects.using(db).filter(sampletype__name='VerticalNetTow'
).order_by('instantpoint__activity__name'):
print s.instantpoint.activity.name, db
nettows[s.instantpoint.activity.name] = db
Explanation: Find all the VerticalNetTow Sample identifiers for all our SIMZ campaigns. (For hints on names to filter on use the STOQS REST api to explore Sample data for a campaign, e.g.: http://localhost:8000/stoqs_simz_aug2013/api/sample.html.) These will be our links to the environmental and other sample data.
End of explanation
stoqs_sample_data = {}
for s, b in [('simz2013c{:02d}_NetTow1'.format(int(n[4:])), n) for n in table.ids()]:
sps = SampledParameter.objects.using(nettows[s]
).filter(sample__instantpoint__activity__name=s)
# Values of BIOM metadata must be strings, even if they are numbers
stoqs_sample_data[b] = {sp.parameter.name: str(float(sp.datavalue)) for sp in sps}
Explanation: It looks as though the BIOM table ids (SIMZ1, SIMZ2, ...) correspond to the STOQS s.instantpoint.activity.names (simz2013c01_NetTow1, simz2013c02_NetTow1, ...). Let's loop through the STOQS sample names and BIOM file sample ids, extract relevant data from STOQS and populate a dictionary formatted for adding metadata back to the BIOM file.
End of explanation
new_table = table.copy()
new_table.add_metadata(stoqs_sample_data)
with open(biom_file.replace('.biom', '_stoqs.biom'), 'w') as f:
new_table.to_json('explore_BIOM_data_for_SIMZ.ipynb', f)
Explanation: Create a copy of the original table, add the new metadata to the samples and save to a new file name.
End of explanation
import pprint
pp = pprint.PrettyPrinter(indent=4)
print 'Original: ' + biom_file
print '-' * len('Original: ' + biom_file)
pp.pprint(table.metadata()[:2])
print
print 'New: ' + biom_file.replace('.biom', '_stoqs.biom')
print '-' * len('New: ' + biom_file.replace('.biom', '_stoqs.biom'))
pp.pprint(new_table.metadata()[:2])
Explanation: Compare the first two metadata records from the original table and the new table.
End of explanation
from IPython.display import Image
Image('../../../doc/Screenshots/Screen_Shot_2015-10-24_at_10.37.52_PM.png')
Explanation: To test the results, upload the new file to http://phinch.org/ and you should see something like this:
End of explanation |
2,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
The goal of this Artificial Neural Network (ANN) 101 session is twofold
Step1: Get the data
Step2: Build the artificial neural-network
Step3: Train the artificial neural-network model
Step4: Evaluate the model
Step5: Predict new output data | Python Code:
# To enable Tensorflow 2 instead of TensorFlow 1.15, uncomment the next 4 lines
#try:
# %tensorflow_version 2.x
#except Exception:
# pass
# library to store and manipulate neural-network input and output data
import numpy as np
# library to graphically display any data
import matplotlib.pyplot as plt
# library to manipulate neural-network models
import tensorflow as tf
from tensorflow import keras
# the code is compatible with Tensorflow v1.15 and v2, but interesting info anyway
print("Tensorlow version:", tf.__version__)
# Versions needs to be 1.15.1 or greater (e.g. this code won't work with 1.13.1)
# To check whether you code will use a GPU or not, uncomment the following two
# lines of code. You should either see:
# * an "XLA_GPU",
# * or better a "K80" GPU
# * or even better a "T100" GPU
#from tensorflow.python.client import device_lib
#device_lib.list_local_devices()
import time
# trivial "debug" function to display the duration between time_1 and time_2
def get_duration(time_1, time_2):
duration_time = time_2 - time_1
m, s = divmod(duration_time, 60)
h, m = divmod(m, 60)
s,m,h = int(round(s, 0)), int(round(m, 0)), int(round(h, 0))
duration = "duration: " + "{0:02d}:{1:02d}:{2:02d}".format(h, m, s)
return duration
Explanation: Introduction
The goal of this Artificial Neural Network (ANN) 101 session is twofold:
To build an ANN model that will be able to predict y value according to x value.
In other words, we want our ANN model to perform a regression analysis.
To observe three important KPI when dealing with ANN:
The size of the network (called trainable_params in our code)
The duration of the training step (called training_ duration: in our code)
The efficiency of the ANN model (called evaluated_loss in our code)
The data used here are exceptionally simple:
X represents the interesting feature (i.e. will serve as input X for our ANN).
Here, each x sample is a one-dimension single scalar value.
Y represents the target (i.e. will serve as the exected output Y of our ANN).
Here, each x sample is also a one-dimension single scalar value.
Note that in real life:
You will never have such godsent clean, un-noisy and simple data.
You will have more samples, i.e. bigger data (better for statiscally meaningful results).
You may have more dimensions in your feature and/or target (e.g. space data, temporal data...).
You may also have more multiple features and even multiple targets.
Hence your ANN model will be more complex that the one studied here
Work to be done:
For exercices A to E, the only lines of code that need to be added or modified are in the create_model() Python function.
Exercice A
Run the whole code, Jupyter cell by Jupyter cell, without modifiying any line of code.
Write down the values for:
trainable_params:
training_ duration:
evaluated_loss:
In the last Jupyter cell, what is the relationship between the predicted x samples and y samples? Try to explain it base on the ANN model?
Exercice B
Add a first hidden layer called "hidden_layer_1" containing 8 units in the model of the ANN.
Restart and execute everything again.
Write down the obtained values for:
trainable_params:
training_ duration:
evaluated_loss:
How better is it with regard to Exercice A?
Worse? Not better? Better? Strongly better?
Exercice C
Modify the hidden layer called "hidden_layer_1" so that it contains 128 units instead of 8.
Restart and execute everything again.
Write down the obtained values for:
trainable_params:
training_ duration:
evaluated_loss:
How better is it with regard to Exercice B?
Worse? Not better? Better? Strongly better?
Exercice D
Add a second hidden layer called "hidden_layer_2" containing 32 units in the model of the ANN.
Write down the obtained values for:
trainable_params:
training_ duration:
evaluated_loss:
How better is it with regard to Exercice C?
Worse? Not better? Better? Strongly better?
Exercice E
Add a third hidden layer called "hidden_layer_3" containing 4 units in the model of the ANN.
Restart and execute everything again.
Look at the graph in the last Jupyter cell. Is it better?
Write down the obtained values for:
trainable_params:
training_ duration:
evaluated_loss:
How better is it with regard to Exercice D?
Worse? Not better? Better? Strongly better?
Exercice F
If you still have time, you can also play with the training epochs parameter, the number of training samples (or just exchange the training datasets with the test datasets), the type of runtime hardware (GPU orTPU), and so on...
Python Code
Import the tools
End of explanation
# DO NOT MODIFY THIS CODE
# IT HAS JUST BEEN WRITTEN TO GENERATE THE DATA
# library fr generating random number
#import random
# secret relationship between X data and Y data
#def generate_random_output_data_correlated_from_input_data(nb_samples):
# generate nb_samples random x between 0 and 1
# X = np.array( [random.random() for i in range(nb_samples)] )
# generate nb_samples y correlated with x
# Y = np.tan(np.sin(X) + np.cos(X))
# return X, Y
#def get_new_X_Y(nb_samples, debug=False):
# X, Y = generate_random_output_data_correlated_from_input_data(nb_samples)
# if debug:
# print("generate %d X and Y samples:" % nb_samples)
# X_Y = zip(X, Y)
# for i, x_y in enumerate(X_Y):
# print("data sample %d: x=%.3f, y=%.3f" % (i, x_y[0], x_y[1]))
# return X, Y
# Number of samples for the training dataset and the test dateset
#nb_samples=50
# Get some data for training the futture neural-network model
#X_train, Y_train = get_new_X_Y(nb_samples)
# Get some other data for evaluating the futture neural-network model
#X_test, Y_test = get_new_X_Y(nb_samples)
# In most cases, it will be necessary to normalize X and Y data with code like:
# X_centered -= X.mean(axis=0)
# X_normalized /= X_centered.std(axis=0)
#def mstr(X):
# my_str ='['
# for x in X:
# my_str += str(float(int(x*1000)/1000)) + ','
# my_str += ']'
# return my_str
## Call get_data to have an idead of what is returned by call data
#generate_data = False
#if generate_data:
# nb_samples = 50
# X_train, Y_train = get_new_X_Y(nb_samples)
# print('X_train = np.array(%s)' % mstr(X_train))
# print('Y_train = np.array(%s)' % mstr(Y_train))
# X_test, Y_test = get_new_X_Y(nb_samples)
# print('X_test = np.array(%s)' % mstr(X_test))
# print('Y_test = np.array(%s)' % mstr(Y_test))
X_train = np.array([0.765,0.838,0.329,0.277,0.45,0.833,0.44,0.634,0.351,0.784,0.589,0.816,0.352,0.591,0.04,0.38,0.816,0.732,0.32,0.597,0.908,0.146,0.691,0.75,0.568,0.866,0.705,0.027,0.607,0.793,0.864,0.057,0.877,0.164,0.729,0.291,0.324,0.745,0.158,0.098,0.113,0.794,0.452,0.765,0.983,0.001,0.474,0.773,0.155,0.875,])
Y_train = np.array([6.322,6.254,3.224,2.87,4.177,6.267,4.088,5.737,3.379,6.334,5.381,6.306,3.389,5.4,1.704,3.602,6.306,6.254,3.157,5.446,5.918,2.147,6.088,6.298,5.204,6.147,6.153,1.653,5.527,6.332,6.156,1.766,6.098,2.236,6.244,2.96,3.183,6.287,2.205,1.934,1.996,6.331,4.188,6.322,5.368,1.561,4.383,6.33,2.192,6.108,])
X_test = np.array([0.329,0.528,0.323,0.952,0.868,0.931,0.69,0.112,0.574,0.421,0.972,0.715,0.7,0.58,0.69,0.163,0.093,0.695,0.493,0.243,0.928,0.409,0.619,0.011,0.218,0.647,0.499,0.354,0.064,0.571,0.836,0.068,0.451,0.074,0.158,0.571,0.754,0.259,0.035,0.595,0.245,0.929,0.546,0.901,0.822,0.797,0.089,0.924,0.903,0.334,])
Y_test = np.array([3.221,4.858,3.176,5.617,6.141,5.769,6.081,1.995,5.259,3.932,5.458,6.193,6.129,5.305,6.081,2.228,1.912,6.106,4.547,2.665,5.791,3.829,5.619,1.598,2.518,5.826,4.603,3.405,1.794,5.23,6.26,1.81,4.18,1.832,2.208,5.234,6.306,2.759,1.684,5.432,2.673,5.781,5.019,5.965,6.295,6.329,1.894,5.816,5.951,3.258,])
print('X_train contains %d samples' % X_train.shape)
print('Y_train contains %d samples' % Y_train.shape)
print('')
print('X_test contains %d samples' % X_test.shape)
print('Y_test contains %d samples' % Y_test.shape)
# Graphically display our training data
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
plt.title('Scatter plot of the training data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Graphically display our test data
plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
plt.title('Scatter plot of the testing data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Get the data
End of explanation
# THIS IS THE ONLY CELL WHERE YOU HAVE TO ADD AND/OR MODIFY CODE
def create_model():
# This returns a tensor
model = keras.Sequential([
keras.layers.Input(shape=(1,), name='input_layer'),
keras.layers.Dense(128, activation=tf.nn.relu, name='hidden_layer_1'),
keras.layers.Dense(32, activation=tf.nn.relu, name='hidden_layer_2'),
keras.layers.Dense(4, activation=tf.nn.relu, name='hidden_layer_3'),
keras.layers.Dense(1, name='output_layer')
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
loss='mean_squared_error',
metrics=['mean_absolute_error', 'mean_squared_error'])
return model
# Same model but for Keras 1.13.1
#inputs_data = keras.layers.Input(shape=(1, ), name='input_layer')
#hl_1_out_data = keras.layers.Dense(units=128, activation=tf.nn.relu, name='hidden_layer_1')(inputs_data)
#hl_2_out_data = keras.layers.Dense(units=32, activation=tf.nn.relu, name='hidden_layer_2')(hl_1_out_data)
#hl_3_out_data = keras.layers.Dense(units=4, activation=tf.nn.relu, name='hidden_layer_3')(hl_2_out_data)
#outputs_data = keras.layers.Dense(units=1)(hl_3_out_data)
#model = keras.models.Model(inputs=inputs_data, outputs=outputs_data)
ann_model = create_model()
# Display a textual summary of the newly created model
# Pay attention to size (a.k.a. total parameters) of the network
ann_model.summary()
print('trainable_params:', ann_model.count_params())
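# A quick cross-check on where count_params() comes from, assuming the
# 128/32/4-unit model defined in create_model() above: a Dense layer with
# k units fed by m inputs holds k*(m+1) trainable parameters (the +1 is the bias)
params_by_hand = 128*(1 + 1) + 32*(128 + 1) + 4*(32 + 1) + 1*(4 + 1)
print('hand-computed trainable params:', params_by_hand)  # 4521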
%%html
As a reminder for understanding, the following ANN unit contains <b>m + 1</b> trainable parameters:<br>
<img src='https://www.degruyter.com/view/j/nanoph.2017.6.issue-3/nanoph-2016-0139/graphic/j_nanoph-2016-0139_fig_002.jpg' alt="perceptron" width="400" />
Explanation: Build the artificial neural-network
End of explanation
# Train the model with the input data and the output_values
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation test and that "only" 80% will be used for training
t0 = time.time()
results = ann_model.fit(X_train, Y_train, verbose=False,
batch_size=1, epochs=500, validation_split=0.2)
t1 = time.time()
print('training_%s' % get_duration(t0, t1))
#plt.plot(r.history['mean_squared_error'], label = 'mean_squared_error')
plt.plot(results.history['loss'], label = 'train_loss')
plt.plot(results.history['val_loss'], label = 'validation_loss')
plt.legend()
plt.show()
# If you can write a file locally (i.e. If Google Drive available on Colab environnement)
# then, you can save your model in a file for future reuse.
# (c.f. https://www.tensorflow.org/guide/keras/save_and_serialize)
# Only uncomment the following file if you can write a file
# model.save('ann_101.h5')
Explanation: Train the artificial neural-network model
End of explanation
loss, mean_absolute_error, mean_squared_error = ann_model.evaluate(X_test, Y_test, verbose=True)
Explanation: Evaluate the model
End of explanation
X_new_values = [0., 0.2, 0.4, 0.6, 0.8, 1.0]
Y_predicted_values = ann_model.predict(X_new_values)
# Display training data and predicted data graphically
plt.title('Training data (green color) + Predicted data (red color)')
# training data in green color
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
# training data in green color
#plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
# predicted data in blue color
plt.scatter(X_new_values, Y_predicted_values, color='red', alpha=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Predict new output data
End of explanation |
2,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Verifying the MLOps environment on GCP
This notebook verifies the MLOps environment provisioned on GCP
1. Test using the local MLflow server in the AI Notebooks instance to log entries to the Cloud SQL
2. Test deploying and running an Airflow workflow on Composer that uses MLflow server on GKE to log entries to the Cloud SQL
1. Running a local MLflow experiment
We implement a simple Scikit-learn model training routine, and examine the logged entries in Cloud SQL and produced artifacts in Cloud Storage through MLflow tracking.
Step1: 1.1. Training a simple Scikit-learn model from Notebook environment
Step2: 1.2. Query the Mlfow entries from Cloud SQL
Step3: List tables
You should see a list of table names like 'experiments','metrics','model_versions','runs'
Step4: Retrieve experiment
Step5: Query runs
Step6: Query metrics
Step7: 1.3. List the artifacts in Cloud Storage
Step8: 2. Submitting a workflow to Composer
We implement a one-step Airflow workflow that trains a Scikit-learn model, and examine the logged entries in Cloud SQL and produced artifacts in Cloud Storage through MLflow tracking.
Step9: 2.1. Writing the Airflow workflow
Step10: 2.2. Uploading the Airflow workflow
Step11: 2.3. Triggering the workflow
Please wait for 30-60 seconds before triggering the workflow at the first Airflow Dag import
Step12: 2.4. Query the MLfow entries from Cloud SQL
Step13: Retrieve experiment
Step14: Query runs
Step15: Query metrics
Step16: 2.5. List the artifacts in Cloud Storage | Python Code:
import os
import re
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression
import pymysql
from IPython.core.display import display, HTML
mlflow_tracking_uri = mlflow.get_tracking_uri()
MLFLOW_EXPERIMENTS_URI = os.environ['MLFLOW_EXPERIMENTS_URI']
print("MLflow tracking server URI: {}".format(mlflow_tracking_uri))
print("MLflow artifacts store root: {}".format(MLFLOW_EXPERIMENTS_URI))
print("MLflow SQL connction name: {}".format(os.environ['MLFLOW_SQL_CONNECTION_NAME']))
print("MLflow SQL connction string: {}".format(os.environ['MLFLOW_SQL_CONNECTION_STR']))
print("Cloud Composer name: {}".format(os.environ['MLOPS_COMPOSER_NAME']))
print("Cloud Composer instance region: {}".format(os.environ['MLOPS_REGION']))
display(HTML('<hr>You can check results of this test in MLflow and GCS folder:'))
display(HTML('<h4><a href="{}" rel="noopener noreferrer" target="_blank">Click to open MLflow UI</a></h4>'.format(os.environ['MLFLOW_TRACKING_EXTERNAL_URI'])))
display(HTML('<h4><a href="https://console.cloud.google.com/storage/browser/{}" rel="noopener noreferrer" target="_blank">Click to open GCS folder</a></h4>'.format(MLFLOW_EXPERIMENTS_URI.replace('gs://',''))))
Explanation: Verifying the MLOps environment on GCP
This notebook verifies the MLOps environment provisioned on GCP
1. Test using the local MLflow server in the AI Notebooks instance to log entries to the Cloud SQL
2. Test deploying and running an Airflow workflow on Composer that uses MLflow server on GKE to log entries to the Cloud SQL
1. Running a local MLflow experiment
We implement a simple Scikit-learn model training routine, and examine the logged entries in Cloud SQL and produced artifacts in Cloud Storage through MLflow tracking.
End of explanation
experiment_name = "notebooks-test"
mlflow.set_experiment(experiment_name)
with mlflow.start_run(nested=True):
X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
y = np.array([0, 0, 1, 1, 1, 0])
lr = LogisticRegression()
lr.fit(X, y)
score = lr.score(X, y)
print("Score: %s" % score)
mlflow.log_metric("score", score)
mlflow.sklearn.log_model(lr, "model")
print("Model saved in run %s" % mlflow.active_run().info.run_uuid)
current_model=mlflow.get_artifact_uri('model')
Explanation: 1.1. Training a simple Scikit-learn model from Notebook environment
End of explanation
sqlauth=re.search('mysql\\+pymysql://(?P<user>.*):(?P<psw>.*)@127.0.0.1:3306/mlflow', os.environ['MLFLOW_SQL_CONNECTION_STR'],re.DOTALL)
connection = pymysql.connect(
host='127.0.0.1',
port=3306,
database='mlflow',
user=sqlauth.group('user'),
passwd=sqlauth.group('psw')
)
Explanation: 1.2. Query the MLflow entries from Cloud SQL
End of explanation
cursor = connection.cursor()
cursor.execute("SHOW TABLES")
for entry in cursor:
print(entry[0])
Explanation: List tables
You should see a list of table names like 'experiments','metrics','model_versions','runs'
End of explanation
cursor.execute("SELECT * FROM experiments where name='{}' ORDER BY experiment_id desc LIMIT 1".format(experiment_name))
if cursor.rowcount == 0:
print("Experiment not found")
else:
experiment_id = list(cursor)[0][0]
print("'{}' experiment ID: {}".format(experiment_name, experiment_id))
Explanation: Retrieve experiment
End of explanation
cursor.execute("SELECT * FROM runs where experiment_id={} ORDER BY start_time desc LIMIT 1".format(experiment_id))
if cursor.rowcount == 0:
print("No runs found")
else:
entity=list(cursor)[0]
run_uuid = entity[0]
print("Last run id of '{}' experiment is: {}\n".format(experiment_name, run_uuid))
print(entity)
Explanation: Query runs
End of explanation
cursor.execute("SELECT * FROM metrics where run_uuid = '{}'".format(run_uuid))
for entry in cursor:
print(entry)
Explanation: Query metrics
End of explanation
!gsutil ls {current_model}
Explanation: 1.3. List the artifacts in Cloud Storage
End of explanation
COMPOSER_NAME=os.environ['MLOPS_COMPOSER_NAME']
REGION=os.environ['MLOPS_REGION']
Explanation: 2. Submitting a workflow to Composer
We implement a one-step Airflow workflow that trains a Scikit-learn model, and examine the logged entries in Cloud SQL and produced artifacts in Cloud Storage through MLflow tracking.
End of explanation
%%writefile test-sklearn-mlflow.py
import airflow
import mlflow
import mlflow.sklearn
import numpy as np
from datetime import timedelta
from sklearn.linear_model import LogisticRegression
from airflow.operators import PythonOperator
def train_model(**kwargs):
print("Train lr model step started...")
print("MLflow tracking uri: {}".format(mlflow.get_tracking_uri()))
mlflow.set_experiment("airflow-test")
with mlflow.start_run(nested=True):
X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
y = np.array([0, 0, 1, 1, 1, 0])
lr = LogisticRegression()
lr.fit(X, y)
score = lr.score(X, y)
print("Score: %s" % score)
mlflow.log_metric("score", score)
mlflow.sklearn.log_model(lr, "model")
print("Model saved in run %s" % mlflow.active_run().info.run_uuid)
print("Train lr model step finished.")
default_args = {
'retries': 1,
'start_date': airflow.utils.dates.days_ago(0)
}
with airflow.DAG(
'test_sklearn_mlflow',
default_args=default_args,
schedule_interval=None,
dagrun_timeout=timedelta(minutes=20)) as dag:
train_model_op = PythonOperator(
task_id='train_sklearn_model',
provide_context=True,
python_callable=train_model
)
Explanation: 2.1. Writing the Airflow workflow
End of explanation
!gcloud composer environments storage dags import \
--environment {COMPOSER_NAME} --location {REGION} \
--source test-sklearn-mlflow.py
!gcloud composer environments storage dags list \
--environment {COMPOSER_NAME} --location {REGION}
Explanation: 2.2. Uploading the Airflow workflow
End of explanation
!gcloud composer environments run {COMPOSER_NAME} \
--location {REGION} unpause -- test_sklearn_mlflow
!gcloud composer environments run {COMPOSER_NAME} \
--location {REGION} trigger_dag -- test_sklearn_mlflow
Explanation: 2.3. Triggering the workflow
Please wait for 30-60 seconds before triggering the workflow at the first Airflow Dag import
End of explanation
cursor = connection.cursor()
Explanation: 2.4. Query the MLflow entries from Cloud SQL
End of explanation
experiment_name = "airflow-test"
cursor.execute("SELECT * FROM experiments where name='{}' ORDER BY experiment_id desc LIMIT 1".format(experiment_name))
if cursor.rowcount == 0:
print("Experiment not found")
else:
experiment_id = list(cursor)[0][0]
print("'{}' experiment ID: {}".format(experiment_name, experiment_id))
Explanation: Retrieve experiment
End of explanation
cursor.execute("SELECT * FROM runs where experiment_id={} ORDER BY start_time desc LIMIT 1".format(experiment_id))
if cursor.rowcount == 0:
print("No runs found")
else:
entity=list(cursor)[0]
run_uuid = entity[0]
print("Last run id of '{}' experiment is: {}\n".format(experiment_name, run_uuid))
print(entity)
Explanation: Query runs
End of explanation
cursor.execute("SELECT * FROM metrics where run_uuid = '{}'".format(run_uuid))
if cursor.rowcount == 0:
print("No metrics found")
else:
for entry in cursor:
print(entry)
Explanation: Query metrics
End of explanation
!gsutil ls {MLFLOW_EXPERIMENTS_URI}/{experiment_id}/{run_uuid}/artifacts/model
Explanation: 2.5. List the artifacts in Cloud Storage
End of explanation |
2,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PYT-DS SAISOFT
Overview 1
Overview 3
<a data-flickr-embed="true" href="https
Step1: People needing to divide a fiscal year starting in July, into quarters, are in luck with pandas. I've been looking for lunar year and other periodic progressions. The whole timeline thing still seems difficult, even with a proleptic Gregorian plus UTC timezones.
Step2: As usual, I'm recommending telling yourself a story, in this case about an exclusive party you've been hosting ever since 2000, all the way up to 2018. Once you get the interactive version of this Notebook, you'll be able to extend this record by as many more years as you want.
Step3: DBAs who know SQL / noSQL, will find pandas, especially its inner outer left and right merge possibilities somewhat familiar. We learn about the set type through maths, through Python, and understand about unions and intersections, differences.
We did a fair amount of practicing with merge, appreciating that pandas pays a lot of attention to the DataFrame labels, synchronizing along indexes and columns, creating NaN empty cells where needed.
We're spared a lot of programming, and yet even so though these patchings- together can become messy and disorganized. At least the steps are chronicled. That's why spreadsheets are not a good idea. You lose your audit trail. There's no good way to find and debug your mistakes.
Keep the whole pipeline in view, from raw data sources, through numerous cleaning and filtering steps. The linked Youtube is a good example
Step4: What's the average number of party-goers over this nine-year period?
Step5: Might you also want the median and mode? Do you remember what those are?
Step6: Now that seems strange. Isn't the mode of a column of numbers, a number?
We're looking at the numbers that appear most often, the top six in the ranking. Surely there must be some tie breaking rule. | Python Code:
import pandas as pd
import numpy as np
rng_years = pd.period_range('1/1/2000', '1/1/2018', freq='Y')
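# An aside on the fiscal-year case mentioned in this notebook: pandas can cut
# a July-June fiscal year into quarters directly with an anchored quarterly
# frequency ('Q-JUN' means quarters of a year that ends in June). A minimal
# sketch -- the variable name is just for illustration:
fiscal_quarters = pd.period_range('1/1/2000', '1/1/2018', freq='Q-JUN')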
Explanation: PYT-DS SAISOFT
Overview 1
Overview 3
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/27963484878/in/album-72157693427665102/" title="Barry at Large"><img src="https://farm1.staticflickr.com/969/27963484878_b38f0db42a_m.jpg" width="240" height="180" alt="Barry at Large"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
DATA SCIENCE WITH PYTHON
Where Have We Been, What Have We Seen?
Data Science includes Data Management. This means we might call a DBA (Database Administrator) a kind of data scientist? Why not? Their speciality is efficiently warehousing data, meaning the same information is not redundantly scattered.
In terms of rackspace and data center security, of course we want redundancy, but in databases the potential for data corruption increases exponentially with the number of places the same information must be kept up to date. If a person changes their legal name, you don't want to have to break your primary key, which should be based on something less mutable.
Concepts of mutability versus immutability are important in data science. In consulting, I would often advertise spreadsheets as ideal for "what if" scenarios, but if the goal is to chronicle "what was" then the mutability of a spreedsheet becomes a liability. The bookkeeping community always encourages databases over spreadsheets when it comes to keeping a company or agency's books.
DBAs also concern themselves with missing data. If the data is increasingly full of holes, that's a sign the database may no longer be loved. DBAs engage in load balancing, meaning they must give priority to services most in demand. However "what's in demand" may be a changing vista.
End of explanation
head_count = np.random.randint(10,35, size=19)
Explanation: People needing to divide a fiscal year starting in July, into quarters, are in luck with pandas. I've been looking for lunar year and other periodic progressions. The whole timeline thing still seems difficult, even with a proleptic Gregorian plus UTC timezones.
End of explanation
new_years_party = pd.DataFrame(head_count, index = rng_years,
columns=["Attenders"])
Explanation: As usual, I'm recommending telling yourself a story, in this case about an exclusive party you've been hosting ever since 2000, all the way up to 2018. Once you get the interactive version of this Notebook, you'll be able to extend this record by as many more years as you want.
End of explanation
new_years_party
Explanation: DBAs who know SQL / noSQL, will find pandas, especially its inner outer left and right merge possibilities somewhat familiar. We learn about the set type through maths, through Python, and understand about unions and intersections, differences.
We did a fair amount of practicing with merge, appreciating that pandas pays a lot of attention to the DataFrame labels, synchronizing along indexes and columns, creating NaN empty cells where needed.
We're spared a lot of programming, and yet even so these patchings-together can become messy and disorganized. At least the steps are chronicled. That's why spreadsheets are not a good idea. You lose your audit trail. There's no good way to find and debug your mistakes.
Keep the whole pipeline in view, from raw data sources, through numerous cleaning and filtering steps. The linked Youtube is a good example: the data scientist vastly shrinks the data needed, by weeding out what's irrelevant. Data science is all about dismissing the irrelevant, which takes work, real energy.
End of explanation
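# A tiny, hypothetical sketch of the merge behavior described above -- the
# guest/dish frames are made up purely for illustration:
left = pd.DataFrame({'guest': ['Ann', 'Bob'], 'year': [2000, 2001]})
right = pd.DataFrame({'guest': ['Bob', 'Cam'], 'dish': ['pie', 'soup']})
pd.merge(left, right, on='guest', how='outer')  # unmatched rows are filled with NaN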
np.round(new_years_party.Attenders.mean())
Explanation: What's the average number of party-goers over this nine-year period?
End of explanation
new_years_party.Attenders.mode()
Explanation: Might you also want the median and mode? Do you remember what those are?
End of explanation
new_years_party.Attenders.median()
Explanation: Now that seems strange. Isn't the mode of a column of numbers, a number?
We're looking at the numbers that appear most often, the top six in the ranking. Surely there must be some tie breaking rule.
End of explanation |
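# For the record, pandas does not break the tie at all: Series.mode() returns
# every value that shares the top frequency. A tiny sketch:
pd.Series([21, 21, 30, 30, 17]).mode()  # -> both 21 and 30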
2,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interacting with web APIs
Overview. We introduce the basics of interacting with web APIs using the requests package. We discuss the basics of how web APIs are usually constructed and show how to interact with the BEA and as illustrations of the concepts.
Outline
Web APIs
Step1: Web API basics <a id=apis></a>
Many websites make data available through the use of their API (examples
Step2: Notice that we have used a new syntax **kwargs in that function. What this does is at the time the function is called, all extra parameters set by name are added to a dict called kwargs. Here's a more simple example that illustrates the point
Step3: Exercise (2 min)
Step4: The actual data returned from the BEA website is contained in datasets.content. This will be a JSON object (remember the plotly notebook), but can be converted into a python dict by calling the json method
Step5: Notice that this dict has one item. The key is BEAAPI. The value is another dict. Let's take a look inside this one
Step6: The value here is another dict, this time with two keys
Step7: What we have here is a mapping from a DatasetName to a description of that dataset. This is helpful as we'll use it later on when we actually want to get our data.
Exercise (4 min)
Step8: The ParameterName column above tells us the name of all additional parameters we can send to GetData.
The ParameterIsRequiredFlag has a 1 if that parameter is required and a 0 if it is optional
Finally, the ParameterDataType tells us what type the value of each parameter should be.
I did a of digging and found that the GDP data we are after lives in table 6. Let's get quarterly data for 1990 to 2016
Step9: The important columns for us are going to be DataValue, SeriesCode, and TimePeriod. I did a bit more digging and found that the series codes map into our variables as follows
Step10: Let's insert the names we know into the SeriesCode column using the replace method
Step11: Exercise (10 min) WARNING
Step12: Exercise
Step13: Plot Chicago crime over time
Recall, we only have the first 25000 elements of the dataset, so the results are likely to be nonsense. We do it anyways because it gives us a chance to use the timeseries tools we talked about previously. | Python Code:
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics
import datetime as dt # date tools, used to note current date
import sys
# these are new
import requests
%matplotlib inline
print('\nPython version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Requests version: ', requests.__version__)
print("Today's date:", dt.date.today())
Explanation: Interacting with web APIs
Overview. We introduce the basics of interacting with web APIs using the requests package. We discuss the basics of how web APIs are usually constructed and show how to interact with the BEA and as illustrations of the concepts.
Outline
Web APIs: We describe how APIs are usually accessed via urls with special a special format
BEA: We us the Bureau of Economic Analysis (BEA)'s API as an in-depth example of how this works
Open Data Network: We use the Open Data Network API as another, simpler example of getting data from the web
Note: requires internet access to run.
This Jupyter notebook was created by Chase Coleman and Spencer Lyon for the NYU Stern course Data Bootcamp.
Preliminaries
Import the usual suspects
End of explanation
import requests
def bea_request(method, **kwargs):
# this is the UserID they gave me
BEA_ID = "2A629F24-EF8D-4043-BC1F-8CB6A331A2F3"
# root url for bea API
API_URL = "https://bea.gov/api/data"
# start constructing params dict
params = dict(UserID=BEA_ID, method=method)
# bring in any additional keyword arguments to the dict
params.update(kwargs)
# Make request
r = requests.get(API_URL, params=params)
return r
Explanation: Web API basics <a id=apis></a>
Many websites make data available through the use of their API (examples: Airbnb, quandl, FRED, BEA, ESPN, and many others)
Most of the time you interact with the API by making http (or https) requests. To do this you direct your browser to a special URL for the website. Usually this URL takes the following form:
<pre><font color="red">https://my_website.com/api</font><font color="blue">?</font><font color="green">FirstParam=first_value</font><font color="blue">&</font><font color="green">SecondParam=second_value</font></pre>
Notice that I have broken the URL into pieces using different colors of text:
The red part (https://my_website.com/api) is called the root url for the API. This is the starting point for all API interactions with this website
Next is the blue question mark <font color="blue">?</font>. This separates the root url from a list of parameters
Finally, in green we have a list of parameters that take the form key=value. Each key, value pair is separated by a &.
Because we are lazy and use Python, instead of directing our browser to these special urls, we will use the function requests.get (that is, the get function from the requests package). Here's how the example above looks when using that function
python
root_url = "https://my_website.com/api"
params = {"FirstParam": "first_value", "SecondParam": "second_value"}
requests.get(root_url, params=params)
BEA API <a id=bea></a>
In this section we will look at how to use the requests package to interact with the API provided by the Bureau of Economic Analysis (BEA).
The API itself is documented on their website at this link.
Some key takeaways from that document:
The root url is https://bea.gov/api/data
There are two required parameters to every API call:
UserID: This is a special "password" you obtain when you register to use the API. I registered with the email address [email protected]. The UserID they gave me is in the next code cell
Method: This is one of 5 possible methods the BEA has defined: GetDataSetList, GetParameterList, GetParameterValues, GetParameterValuesFiltered, GetData.
Any additional parameters will depend on the Method that is used
Let's use what we know already and prepare some tools for interacting with their API
End of explanation
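# A quick sketch (not in the original notebook): any extra keyword arguments passed to
# bea_request end up in the request URL as key=value parameters. GetParameterList with
# DataSetName="NIPA" is a call we make again further below, so nothing new is requested here.
r_sketch = bea_request("GetParameterList", DataSetName="NIPA")
print(r_sketch.url)  # ...?UserID=...&method=GetParameterList&DataSetName=NIPA (order may vary)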
# NOTE: the name kwargs wasn't special -- here I use the name some_params instead
def my_func(**some_params):
return some_params
my_func(b=10)
my_func(a=1, b=2)
Explanation: Notice that we have used a new syntax **kwargs in that function. What this does is at the time the function is called, all extra parameters set by name are added to a dict called kwargs. Here's a simpler example that illustrates the point:
End of explanation
datasets_raw = bea_request("GetDataSetList")
type(datasets_raw)
# did the request succeed?
datasets_raw.ok
# status code 200 means success!
datasets_raw.status_code
datasets_raw.content
Explanation: Exercise (2 min): Experiment with my_func to make sure you understand how it works. You might try these things out:
Why doesn't my_func(1) work?
What is the type of x in x = my_func(a=1, b=2)?
What are the type and len of x in x = my_func()?
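For reference, one possible way to check these answers is sketched below (an illustration, not the only approach).
x = my_func(a=1, b=2)
print(type(x), x)                # <class 'dict'> {'a': 1, 'b': 2}
empty = my_func()
print(type(empty), len(empty))   # <class 'dict'> 0
# my_func(1) raises a TypeError because positional arguments are not captured by **some_params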
Let's test out our bea_request function by calling the GetDataSetList method.
First, we need to check the methods page of the documentation to make sure we don't need any additional parameters. Looks like this one doesn't. Let's call it and see what we get
End of explanation
datasets_raw_dict = datasets_raw.json()
print("length of datasets_raw_dict:", len(datasets_raw_dict))
datasets_raw_dict
Explanation: The actual data returned from the BEA website is contained in datasets_raw.content. This will be a JSON object (remember the plotly notebook), but can be converted into a python dict by calling the json method:
End of explanation
datasets_dict = datasets_raw_dict["BEAAPI"]
print("length of datasets_dict:", len(datasets_dict))
datasets_dict
Explanation: Notice that this dict has one item. The key is BEAAPI. The value is another dict. Let's take a look inside this one
End of explanation
datasets = pd.DataFrame(datasets_dict["Results"]["Dataset"])
datasets
Explanation: The value here is another dict, this time with two keys:
Request: gives details regarding the API request we made -- we'll throw this one away
Results: The actual data.
Let's pull the data into a dataframe so we can see what we are working with
End of explanation
nipa_params_raw = bea_request("GetParameterList", DataSetName="NIPA")
nipa_params = pd.DataFrame(nipa_params_raw.json()["BEAAPI"]["Results"]["Parameter"])
nipa_params
Explanation: What we have here is a mapping from a DatasetName to a description of that dataset. This is helpful as we'll use it later on when we actually want to get our data.
Exercise (4 min): Read the documentation for the GetData API method (here) and determine the following:
What are the required parameters?
What are optional parameters?
How can we determine what optional parameters are available? (Hint 1: it varies by dataset. Hint 2: check out the GetParameterList method)
Let's put this to practice and actually get some data.
Suppose I wanted to get data on the expenditure formula for GDP. You might remember from econ 101 that this is:
$$GDP = C + G + I + NX$$
where $GDP$ is GDP, $C$ is personal consumption, $G$ is government spending, $I$ is investment, and $NX$ is net exports.
All of these variables are available from the BEA in the national income and product accounts (NIPA) table. Let's see what parameters are required to use the GetData method when DataSetName=NIPA (NOTE, I'm not walking us through what the response looks like this time -- I'll just write the code that gets us to the result)
End of explanation
gdp_data = bea_request("GetData", DataSetName="NIPA",
TableId=6,
Frequency="Q",
Year=list(range(1990, 2017)))
# check to make sure we have a 200, meaning success
gdp_data.status_code
# extract the results and read into a DataFrame
gdp = pd.DataFrame(gdp_data.json()["BEAAPI"]["Results"]["Data"])
print("The shape of gdp is", gdp.shape)
gdp.head()
Explanation: The ParameterName column above tells us the name of all additional parameters we can send to GetData.
The ParameterIsRequiredFlag has a 1 if that parameter is required and a 0 if it is optional
Finally, the ParameterDataType tells us what type the value of each parameter should be.
I did a bit of digging and found that the GDP data we are after lives in table 6. Let's get quarterly data for 1990 to 2016
End of explanation
gdp_names = {"DPCERX": "C",
"A191RX": "GDP",
"A019RX": "NX",
"A006RX": "I",
"A822RX": "G"}
Explanation: The important columns for us are going to be DataValue, SeriesCode, and TimePeriod. I did a bit more digging and found that the series codes map into our variables as follows
End of explanation
gdp.iloc[[0, 107, 498, 1102, 1672], :]
gdp["SeriesCode"] = gdp["SeriesCode"].replace(gdp_names)
gdp.iloc[[0, 107, 498, 1102, 1672], :]
Explanation: Let's insert the names we know into the SeriesCode column using the replace method:
End of explanation
chi_apie = "https://data.cityofchicago.org/"
chi_crime_url = chi_apie + "resource/6zsd-86xi.json?$limit=25000"
chi_df = pd.read_json(chi_crime_url)
chi_df.head()[["arrest", "case_number", "community_area", "date"]]
Explanation: Exercise (10 min) WARNING: this is a long exercise, but should make you use tools from almost every lecture of the last 6 weeks.
Our want is:
A DataFrame with one column for each of those 5 variables
The index should be the time period and should have type DatetimeIndex
The dtype for all columns should be float64
Here's an outline of how I would do this:
Remove all rows where Series code isn't one of our 5 variables (now named GDP, C, G, etc.)
drop all columns we don't need
Convert the TimePeriod column to a datetime (HINT: use pd.to_datetime)
convert the DataValue column to have the correct dtype (HINT: you'll need to use the .str methods here)
At this point you have 3 columns, all with the right dtype. Now use some combination of set_index and unstack to get the correct row and column labels (HINT: You might have ended up with 2 levels on your column index (I did) -- drop the one for DataValue if necessary)
Test out how well this went by plotting the DataFrame (one possible solution is sketched just below)
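A hedged sketch of one possible solution follows. It assumes there are no duplicate SeriesCode/TimePeriod pairs in the table (if there are, an extra drop_duplicates step would be needed); the names wanted, gdp_small and gdp_wide are just illustrative.
wanted = ["GDP", "C", "G", "I", "NX"]
gdp_small = gdp[gdp["SeriesCode"].isin(wanted)]                      # keep only our 5 series
gdp_small = gdp_small[["SeriesCode", "TimePeriod", "DataValue"]].copy()
gdp_small["TimePeriod"] = pd.to_datetime(gdp_small["TimePeriod"])    # e.g. "1990Q1" -> 1990-01-01
gdp_small["DataValue"] = gdp_small["DataValue"].str.replace(",", "").astype(float)
gdp_wide = gdp_small.set_index(["TimePeriod", "SeriesCode"]).unstack()
gdp_wide.columns = gdp_wide.columns.droplevel(0)                     # drop the DataValue level
gdp_wide.plot()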
Open Data Network API <a id=open_data></a>
The Open Data Network is a collection of cities, states, and Federal Government agencies that have all opened access to their data using the same tools. If you follow the link to the Open Data Network, there is a list of all cities that participate at the bottom. It includes New York City, Chicago, Boston, Houston, and many more.
The tool all of these cities are using to open source their data is called Socrata. One of the benefits of using the same tool is that it leads to being able to access various datasets using the same API.
The general API documentation can be found here. Let's open this up and see whether we can extract some of the important pieces of information that we'd like. We need to find two things:
A "root url" that we put at the beginning of all of our requests
The set of parameters that we want to define for any request (information like what dataset, how many observations, or what time frame).
This API has some nice features that you won't necessarily get on other APIs. One of these is that it will return a type of file called a json file. Lucky for us, pandas knows how to read this type of file, so when we interact with the Open Data Network (or any other Socrata based dataset) we can just use pd.read_json instead of what we showed in our previous example.
Root URL
The documentation starts by discussing "API Endpoints." An API endpoint is just the thing that we are referring to as the root url -- The website that we use to make our requests. Each dataset will have a different API endpoint because they are hosted by different organizations (different cities/states/agencies).
One example of an API endpoint is https://data.cityofchicago.org/. We could find this by going to the Open Data Network site and searching "Chicago crime."
Parameters
The types of parameters that we need to pass will depend on the dataset that we will be using. The only way you'll understand all of these parameters is by carefully reading the docs -- If you ask too many questions without having read the documentation, some people online may tell you RTFD. I will describe a few of them here though.
Socrata has created a system that allows you to use parameters to limit the type of data you return. Many of these act like SQL queries and, in a nod to this, they called this functionality SoQL queries (a small example query is sketched after the Chicago example below). It allows you to do things like:
Choose a specific subset of columns from the data
Choose how many observations you want (useful if you are just playing with data for the first time and don't need the full dataset -- much like using df.head())
Choose observations based on some type of a requirement
You also have access to some more parameters that give authorization like an app_token.
Example
We read in the data on all crimes in Chicago since 2001.
End of explanation
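# A small sketch of the SoQL parameters described above, reusing column names we saw in chi_df:
# $select limits the columns returned and $limit caps the number of rows.
soql_url = (chi_apie + "resource/6zsd-86xi.json"
            "?$select=case_number,date,arrest,community_area"
            "&$limit=1000")
chi_small = pd.read_json(soql_url)
chi_small.head()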
bos_df.dtypes
chi_df.dtypes
Explanation: Exercise: Find the API endpoint for Boston crime (use the Crime Incident Reports July 2012-August 2015 data).
Exercise: Read in the first 50 observations of the Boston crime dataset into a dataframe named bos_df
We can now look at what types everything in these two datasets are and look at what information is contained in them.
End of explanation
chi_df = chi_df.set_index("date")
cases_per_month = chi_df.resample("M").count()["case_number"]
cases_per_month.plot()
Explanation: Plot Chicago crime over time
Recall, we only have the first 25000 elements of the dataset, so the results are likely to be nonsense. We do it anyways because it gives us a chance to use the timeseries tools we talked about previously.
End of explanation |
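# One more of the timeseries tools (a sketch, still on the truncated sample):
# a 3-month rolling mean smooths the noisy monthly counts.
cases_per_month.rolling(3).mean().plot()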
2,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rejecting bad data (channels and segments)
Step1: Marking bad channels
Sometimes some MEG or EEG channels are not functioning properly
for various reasons. These channels should be excluded from
analysis by marking them as bad. This is done by setting the 'bads'
in the measurement info of a data container object (e.g. Raw, Epochs,
Evoked). The info['bads'] value is a Python list. Here is an
example
Step2: Why setting a channel bad?
Step3: Let's now interpolate the bad channels (displayed in red above)
Step4: Let's plot the cleaned data
Step5: <div class="alert alert-info"><h4>Note</h4><p>Interpolation is a linear operation that can be performed also on
Raw and Epochs objects.</p></div>
For more details on interpolation see the page channel_interpolation.
Marking bad raw segments with annotations
MNE provides an
Step6: It is also possible to draw bad segments interactively using
Step7: <div class="alert alert-info"><h4>Note</h4><p>The rejection values can be highly data dependent. You should be careful
when adjusting these values. Make sure not too many epochs are rejected
and look into the cause of the rejections. Maybe it's just a matter
of marking a single channel as bad and you'll be able to save a lot
of data.</p></div>
We then construct the epochs
Step8: We then drop/reject the bad epochs
Step9: And plot the so-called drop log that details the reason for which some
epochs have been dropped. | Python Code:
# sphinx_gallery_thumbnail_number = 3
import numpy as np
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname) # already has an EEG ref
Explanation: Rejecting bad data (channels and segments)
End of explanation
raw.info['bads'] = ['MEG 2443']
Explanation: Marking bad channels
Sometimes some MEG or EEG channels are not functioning properly
for various reasons. These channels should be excluded from
analysis by marking them as bad. This is done by setting the 'bads'
in the measurement info of a data container object (e.g. Raw, Epochs,
Evoked). The info['bads'] value is a Python list. Here is an
example:
End of explanation
# Reading data with a bad channel marked as bad:
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# restrict the evoked to EEG and MEG channels
evoked.pick_types(meg=True, eeg=True, exclude=[])
# plot with bads
evoked.plot(exclude=[], time_unit='s')
print(evoked.info['bads'])
Explanation: Why setting a channel bad?: If a channel does not show
a signal at all (flat) it is important to exclude it from the
analysis. If a channel has a noise level significantly higher than the
other channels it should be marked as bad. The presence of bad channels
can have terrible consequences on downstream analysis. For a flat channel
the noise estimate will be unrealistically low, and
thus the current estimates will give a strong weight
to the zero signal on the flat channels and will essentially vanish.
Noisy channels can also affect others when signal-space projections
or EEG average electrode reference is employed. Noisy bad channels can
also adversely affect averaging and noise-covariance matrix estimation by
causing unnecessary rejections of epochs.
Recommended ways to identify bad channels are:
Observe the quality of data during data
acquisition and make notes of observed malfunctioning channels to
your measurement protocol sheet.
View the on-line averages and check the condition of the channels.
Compute preliminary off-line averages with artifact rejection,
SSP/ICA, and EEG average electrode reference computation
off and check the condition of the channels.
View raw data with :func:mne.io.Raw.plot without SSP/ICA
enabled and identify bad channels.
<div class="alert alert-info"><h4>Note</h4><p>Setting the bad channels should be done as early as possible in the
analysis pipeline. That's why it's recommended to set bad channels in
the raw objects/files. If present in the raw data
files, the bad channel selections will be automatically transferred
to averaged files, noise-covariance matrices, forward solution
files, and inverse operator decompositions.</p></div>
The actual removal happens using :func:pick_types <mne.pick_types> with
exclude='bads' option (see picking_channels).
Instead of removing the bad channels, you can also try to repair them.
This is done by interpolation of the data from other channels.
To illustrate how to use channel interpolation let us load some data.
End of explanation
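# A minimal sketch (not part of the original tutorial): instead of repairing, bad channels can
# simply be excluded via the exclude='bads' option mentioned above. Here we only count the
# channels that would be kept.
good_picks = mne.pick_types(evoked.info, meg=True, eeg=True, exclude='bads')
print(len(good_picks), 'good channels out of', len(evoked.ch_names))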
evoked.interpolate_bads(reset_bads=False, verbose=False)
Explanation: Let's now interpolate the bad channels (displayed in red above)
End of explanation
evoked.plot(exclude=[], time_unit='s')
Explanation: Let's plot the cleaned data
End of explanation
eog_events = mne.preprocessing.find_eog_events(raw)
n_blinks = len(eog_events)
# Center to cover the whole blink with full duration of 0.5s:
onset = eog_events[:, 0] / raw.info['sfreq'] - 0.25
duration = np.repeat(0.5, n_blinks)
raw.annotations = mne.Annotations(onset, duration, ['bad blink'] * n_blinks,
orig_time=raw.info['meas_date'])
print(raw.annotations) # to get information about what annotations we have
raw.plot(events=eog_events) # To see the annotated segments.
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Interpolation is a linear operation that can be performed also on
Raw and Epochs objects.</p></div>
For more details on interpolation see the page channel_interpolation.
Marking bad raw segments with annotations
MNE provides an :class:mne.Annotations class that can be used to mark
segments of raw data and to reject epochs that overlap with bad segments
of data. The annotations are automatically synchronized with raw data as
long as the timestamps of raw data and annotations are in sync.
See sphx_glr_auto_tutorials_plot_brainstorm_auditory.py
for a long example exploiting the annotations for artifact removal.
The instances of annotations are created by providing a list of onsets and
offsets with descriptions for each segment. The onsets and offsets are marked
as seconds. onset refers to time from start of the data. offset is
the duration of the annotation. The instance of :class:mne.Annotations
can be added as an attribute of :class:mne.io.Raw.
End of explanation
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
Explanation: It is also possible to draw bad segments interactively using
:meth:raw.plot <mne.io.Raw.plot> (see
sphx_glr_auto_tutorials_plot_visualize_raw.py).
As the data is epoched, all the epochs overlapping with segments whose
description starts with 'bad' are rejected by default. To turn rejection off,
use keyword argument reject_by_annotation=False when constructing
:class:mne.Epochs. When working with neuromag data, the first_samp
offset of raw acquisition is also taken into account the same way as with
event lists. For more see :class:mne.Epochs and :class:mne.Annotations.
Rejecting bad epochs
When working with segmented data (Epochs) MNE offers a quite simple approach
to automatically reject/ignore bad epochs. This is done by defining
thresholds for peak-to-peak amplitude and flat signal detection.
In the following code we build Epochs from the Raw object. One of the provided
parameters is named reject. It is a dictionary where every key is a
channel type as a string and the corresponding values are peak-to-peak
rejection parameters (amplitude ranges as floats). Below we define
the peak-to-peak rejection values for gradiometers,
magnetometers and EOG:
End of explanation
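# Sketch only (not used below): flat-signal detection works through the same dictionary
# mechanism, via the `flat` argument of mne.Epochs. Values are *minimum* acceptable
# peak-to-peak amplitudes per channel type; the numbers here are illustrative, not recommendations.
flat = dict(grad=1e-13, mag=1e-15)
# It could be passed alongside `reject`, e.g. mne.Epochs(..., reject=reject, flat=flat)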
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {"auditory/left": 1}
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
baseline = (None, 0) # means from the first instant to t = 0
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks_meg, baseline=baseline, reject=reject,
reject_by_annotation=True)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The rejection values can be highly data dependent. You should be careful
when adjusting these values. Make sure not too many epochs are rejected
and look into the cause of the rejections. Maybe it's just a matter
of marking a single channel as bad and you'll be able to save a lot
of data.</p></div>
We then construct the epochs
End of explanation
epochs.drop_bad()
Explanation: We then drop/reject the bad epochs
End of explanation
print(epochs.drop_log[40:45]) # only a subset
epochs.plot_drop_log()
Explanation: And plot the so-called drop log that details the reason for which some
epochs have been dropped.
End of explanation |
2,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally, use your count_letters function to solve the original question.
def round_down(n):
s = str(n)
if n <= 20:
return n
elif n < 100:
return int(s[0] + '0'), int(s[1])
elif n<1000:
return int(s[0] + '00'),int(s[1]),int(s[2])
assert round_down(5) == 5
assert round_down(55) == (50,5)
assert round_down(222) == (200,2,2)
def number_to_words(n):
    """Given a number n between 1 and 1000 inclusive, return a list of words for the numbers 1 to n."""
lst = []
dic = {
0: 'zero',
1: 'one',
2: 'two',
3: 'three',
4: 'four',
5: 'five',
6: 'six',
7: 'seven',
8: 'eight',
9: 'nine',
10: 'ten',
11: 'eleven',
12: 'twelve',
13: 'thirteen',
14: 'fourteen',
15: 'fifteen',
16: 'sixteen',
17: 'seventeen',
18: 'eighteen',
19: 'nineteen',
20: 'twenty',
30: 'thirty',
40: 'forty',
50: 'fifty',
60: 'sixty',
70: 'seventy',
80: 'eighty',
90: 'ninety',
100: 'one hundred',
200: 'two hundred',
300: 'three hundred',
400: 'four hundred',
500: 'five hundred',
600: 'six hundred',
700: 'seven hundred',
800: 'eight hundred',
900: 'nine hundred'}
for i in range(1,n+1):
if i <= 20:
for entry in dic:
if i == entry:
lst.append(dic[i])
elif i < 100:
first,second = round_down(i)
for entry in dic:
if first == entry:
if second == 0:
lst.append(dic[first])
else:
lst.append(dic[first] + '-' + dic[second])
elif i <1000:
first,second,third = round_down(i)
for entry in dic:
if first == entry:
if second == 0 and third == 0:
lst.append(dic[first])
elif second == 0:
lst.append(dic[first] + ' and ' + dic[third])
elif second == 1:
#For handling the teen case
lst.append(dic[first] + ' and ' + dic[int(str(second)+str(third))])
elif third == 0:
#Here I multiply by 10 because round_down removes the 0 for my second digit
lst.append(dic[first] + ' and ' + dic[second*10])
else:
lst.append(dic[first] + ' and ' + dic[second*10] + '-' + dic[third])
elif i == 1000:
lst.append('one thousand')
return lst
number_to_words(5)
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
assert len(number_to_words(5))==5
assert len(number_to_words(900))==900
assert number_to_words(50)[-1]=='fifty'
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
    """Count the number of letters used to write out the words for 1-n inclusive."""
lst2 = []
for entry in number_to_words(n):
count = 0
for char in entry:
if char != ' ' and char != '-':
count = count + 1
lst2.append(count)
return lst2
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
assert count_letters(1) == [3]
assert len(count_letters(342)) == 342
assert count_letters(5) == [3,3,5,4,4]
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
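# Extra sanity checks (a sketch) against the worked examples in the problem statement:
# 342 -> "three hundred and forty-two" (23 letters), 115 -> "one hundred and fifteen" (20 letters).
letter_counts = count_letters(342)
assert letter_counts[-1] == 23    # last entry corresponds to the number 342
assert letter_counts[114] == 20   # index 114 corresponds to the number 115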
print(sum(count_letters(1000)))
assert True # use this for grading the answer to the original question.
Explanation: Finally, use your count_letters function to solve the original question.
End of explanation |
2,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MPI-M
Source ID: MPI-ESM-1-2-LR
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
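# Purely illustrative (hypothetical name and email address), showing the expected call signature;
# real author details must be supplied by the model group:
# DOC.set_author("Jane Doe", "jane.doe@example.org")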
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
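As a purely illustrative sketch (the value is an assumption), a BOOLEAN property such as this one is completed in the same way:
# Hypothetical example of a filled-in BOOLEAN cell (value chosen purely for illustration)
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
DOC.set_value(True)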
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
2,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Support Vector Machines
This notebook discusses <em style="color:blue;">support vector machines</em>. In order to understand why we need them, we first demonstrate that classifiers constructed with logistic regression sometimes behave unintuitively.
Step1: We construct a small data set containing just three points.
Step2: To proceed, we will plot the data points using a scatter plot. Furthermore, we plot a green line that intuitively marks the best decision boundary.
Step3: If we want to separate the two red crosses at $(1,2)$ and $(2,1)$ from the blue bullet at $(3.5, 3.5)$, then the decision boundary that would create the
widest margin between these points would be given by the green line. The road separating these points would have a width of $4/\sqrt{2} = 2\sqrt{2}$.
Let us classify these data using logistic regression and see what we get. We will plot the <b style="color:blue;">decision boundary</b>.
Step4: The function $\texttt{train_and_plot}(X, Y)$ takes a design matrix $X$ and a vector $Y$ containing zeros and ones. It builds a regression model and plots the data together with the decision boundary.
Step5: The decision boundary is closer to the blue data point than to the red data points. This is not optimal.
The function $\texttt{gen_X_Y}(n)$ takes a natural number $n$ and generates additional data. The number $n$ is the number of blue data points.
Concretely, it will add $n-1$ data points to the right of the blue dot shown above. This should not really change the decision boundary as the data
do not provide any new information. After all, these data are to the right of the first blue dot and hence should share the class of this data point.
Step6: When we test logistic regression with this data set, we see that the slope of the decision boundary is much steeper now and the separation of the blue dots from the red crosses is far worse than it needs to be, had the optimal decision boundary been computed.
Let us see how <em style="color:blue;">support vector machines</em> deal with these data.
Step7: First, we construct a support vector machine with a linear kernel and next to no regularization and train it with the data.
Step8: The following function is used for plotting.
Step9: The decision boundary separates the data perfectly because it maximizes the distance of the data from the boundary.
Step10: Let's load some strange data that I have found somewhere. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.linear_model as lm
Explanation: Support Vector Machines
This notebook discusses <em style="color:blue;">support vector machines</em>. In order to understand why we need support vector machines (abbreviated as SVMs), we
will first demonstrate that classifiers constructed with <em style="color:blue;">logistic regression</em> sometimes behave unintuitively.
The Problem with Logistic Regression
In this section of the notebook we discuss an example that demonstrates that logistic regression is not necessarily the best classifier we can get.
End of explanation
X = np.array([[1.00, 2.00],
[2.00, 1.00],
[3.50, 3.50]])
Y = np.array([0,
0,
1])
Explanation: We construct a small data set containing just three points.
End of explanation
plt.figure(figsize=(12, 12))
Corner = np.array([[0.0, 5.0], [5.0, 0.0]])
X_pass = X[Y == 1]
X_fail = X[Y == 0]
sns.set(style='darkgrid')
plt.title('A Simple Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x1')
plt.ylabel('x2')
plt.xticks(np.arange(0.0, 5.1, step=0.5))
plt.yticks(np.arange(0.0, 5.1, step=0.5))
X1 = np.arange(0, 5.05, 0.05)
X2 = 5 - X1
plt.plot(X1, X2, color='green', linestyle='-')
X1 = np.arange(0, 3.05, 0.05)
X2 = 3 - X1
plt.plot(X1, X2, color='cyan', linestyle=':')
X1 = np.arange(2.0, 5.05, 0.05)
X2 = 7 - X1
plt.plot(X1, X2, color='cyan', linestyle=':')
plt.scatter(Corner[:,0], Corner[:,1], color='white', marker='.')
plt.scatter(X_pass[:,0], X_pass[:,1], color='b', marker='o') # class 1 is blue
plt.scatter(X_fail[:,0], X_fail[:,1], color='r', marker='x') # class 2 is red
Explanation: To proceed, we will plot the data points using a scatter plot. Furthermore, we plot a green line that intuitively marks the best decision boundary.
End of explanation
def plot_data_and_boundary(X, Y, ϑ0, ϑ1, ϑ2):
Corner = np.array([[0.0, 5.0], [5.0, 0.0]])
X_pass = X[Y == 1]
X_fail = X[Y == 0]
plt.figure(figsize=(12, 12))
sns.set(style='darkgrid')
plt.title('A Simple Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x1')
plt.ylabel('x2')
plt.xticks(np.arange(0.0, 5.1, step=0.5))
plt.yticks(np.arange(0.0, 5.1, step=0.5))
plt.scatter(Corner[:,0], Corner[:,1], color='white', marker='.')
plt.scatter(X_pass[:,0], X_pass[:,1], color='blue' , marker='o')
plt.scatter(X_fail[:,0], X_fail[:,1], color='red' , marker='x')
a = max(- (ϑ0 + ϑ2 * 5)/ϑ1, 0.0)
b = min(- ϑ0/ϑ1 , 5.0)
a, b = min(a, b), max(a, b)
X1 = np.arange(a-0.1, b+0.02, 0.05)
X2 = -(ϑ0 + ϑ1 * X1)/ϑ2
print('slope of decision boundary', -ϑ1/ϑ2)
plt.plot(X1, X2, color='green')
Explanation: If we want to separate the two red crosses at $(1,2)$ and $(2,1)$ from the blue bullet at $(3.5, 3.5)$, then the decision boundary that would create the
widest margin between these points would be given by the green line. The road separating these points would have a width of $4/\sqrt{2} = 2\sqrt{2}$.
Let us classify these data using logistic regression and see what we get. We will plot the <b style="color:blue;">decision boundary</b>. If $\vartheta_0$, $\vartheta_1$, and $\vartheta_2$ are the parameters of the logistic model, then the decision boundary is given by the linear equation
$$ \vartheta_0 + \vartheta_1 \cdot x_1 + \vartheta_2 \cdot x_2 = 0. $$
This can be rewritten as
$$ x_2 = - \frac{\vartheta_0 + \vartheta_1 \cdot x_1}{\vartheta_2}. $$
The function $\texttt{plot_data_and_boundary}(X, Y, \vartheta_0, \vartheta_1, \vartheta_2)$ takes the data $X$, their classes $Y$ and the parameters
$\vartheta_0$, $\vartheta_1$, and $\vartheta_2$ of the logistic model as inputs and plots the data and the decision boundary.
End of explanation
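As a quick numerical sanity check of the margin claim above (this snippet is illustrative and not part of the original notebook; it only assumes the numpy import from the top of the notebook), we can compute the distances directly:
# Illustrative check: distances of the three points to the line x1 + x2 = 5,
# and the width of the "road" between the parallel lines x1 + x2 = 3 and x1 + x2 = 7.
P = np.array([[1.0, 2.0], [2.0, 1.0], [3.5, 3.5]])
dists = np.abs(P @ np.array([1.0, 1.0]) - 5.0) / np.sqrt(2.0)
print(dists)                              # all three equal sqrt(2) ~= 1.414
print(np.abs(7.0 - 3.0) / np.sqrt(2.0))   # road width 4/sqrt(2) = 2*sqrt(2) ~= 2.83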
def train_and_plot(X, Y):
M = lm.LogisticRegression(C=1, solver='lbfgs')
M.fit(X, Y)
ϑ0 = M.intercept_[0]
ϑ1, ϑ2 = M.coef_[0]
plot_data_and_boundary(X, Y, ϑ0, ϑ1, ϑ2)
train_and_plot(X, Y)
Explanation: The function $\texttt{train_and_plot}(X, Y)$ takes a design matrix $X$ and a vector $Y$ containing zeros and ones. It builds a regression model and plots the data together with the decision boundary.
End of explanation
def gen_X_Y(n):
X = np.array([[1.0, 2.0], [2.0, 1.0]] +
[[3.5 + k*0.0015, 3.5] for k in range(n)])
Y = np.array([0, 0] + [1] * n)
return X, Y
X, Y = gen_X_Y(1000)
train_and_plot(X, Y)
Explanation: The decision boundary is closer to the blue data point than to the red data points. This is not optimal.
The function $\texttt{gen_X_Y}(n)$ takes a natural number $n$ and generates additional data. The number $n$ is the number of blue data points.
Concretely, it will add $n-1$ data points to the right of the blue dot shown above. This should not really change the decision boundary as the data
do not provide any new information. After all, these data are to the right of the first blue dot and hence should share the class of this data point.
End of explanation
import sklearn.svm as svm
Explanation: When we test logistic regression with this data set, we see that the slope of the decision boundary is much steeper now and the separation of the blue dots from the red crosses is far worse than it needs to be, had the optimal decision boundary been computed.
Let us see how <em style="color:blue;">support vector machines</em> deal with these data.
End of explanation
M = svm.SVC(kernel='linear', C=10000)
M.fit(X, Y)
M.score(X, Y)
Explanation: First, we construct a support vector machine with a linear kernel and next to no regularization and train it with the data.
End of explanation
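As a small illustrative aside (not in the original notebook), the fitted model exposes the points that determine the maximum-margin boundary, which can be inspected directly:
# Illustrative only: inspect the learned maximum-margin boundary
print(M.support_vectors_)        # the training points that act as support vectors
print(M.coef_, M.intercept_)     # parameters of the learned linear decision boundary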
def plot_data_and_boundary(X, Y, M, title):
Corner = np.array([[0.0, 5.0], [5.0, 0.0]])
X0, X1 = X[:, 0], X[:, 1]
XX, YY = np.meshgrid(np.arange(0, 5, 0.005), np.arange(0, 5, 0.005))
Z = M.predict(np.c_[XX.ravel(), YY.ravel()])
Z = Z.reshape(XX.shape)
plt.figure(figsize=(10, 10))
sns.set(style='darkgrid')
plt.contour(XX, YY, Z)
plt.scatter(Corner[:,0], Corner[:,1], color='black', marker='.')
plt.scatter(X0, X1, c=Y, edgecolors='k')
plt.xlim(XX.min(), XX.max())
plt.ylim(YY.min(), YY.max())
plt.xlabel('x1')
plt.ylabel('x2')
plt.xticks()
plt.yticks()
plt.title(title)
plot_data_and_boundary(X, Y, M, 'some data')
Explanation: The following function is used for plotting.
End of explanation
X = np.array([[1.00, 2.00],
[2.00, 1.00],
[3.50, 3.50]])
Y = np.array([0,
0,
1])
plot_data_and_boundary(X, Y, M, 'three points')
import pandas as pd
Explanation: The decision boundary separates the data perfectly because it maximizes the distance of the data from the boundary.
End of explanation
DF = pd.read_csv('strange-data.csv')
DF.head()
X = np.array(DF[['x1', 'x2']])
Y = np.array(DF['y'])
Red = X[Y == 1]
Blue = X[Y == 0]
M = svm.SVC(kernel='rbf', gamma=400.0, C=10000)
M.fit(X, Y)
M.score(X, Y)
X0, X1 = X[:, 0], X[:, 1]
XX, YY = np.meshgrid(np.arange(0.0, 1.1, 0.001), np.arange(0.3, 1.0, 0.001))
Z = M.predict(np.c_[XX.ravel(), YY.ravel()])
Z = Z.reshape(XX.shape)
plt.figure(figsize=(12, 12))
plt.contour(XX, YY, Z, colors='green')
plt.scatter(Blue[:, 0], Blue[:, 1], color='blue')
plt.scatter(Red [:, 0], Red [:, 1], color='red')
plt.xlabel('x1')
plt.ylabel('x2')
plt.title('Strange Data')
Explanation: Let's load some strange data that I have found somewhere.
End of explanation |
2,719 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have two numpy arrays x and y, and I need to find the indices of the elements where x equals some value a and y simultaneously equals some value b. | Problem:
import numpy as np
x = np.array([0, 1, 1, 1, 3, 1, 5, 5, 5])
y = np.array([0, 2, 3, 4, 2, 4, 3, 4, 5])
a = 1
b = 4
idx_list = ((x == a) & (y == b))
result = idx_list.nonzero()[0] |
2,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques)
[1] [2]. The benefit of this approach is that it is spatio-temporal (source properties can vary over time), activations are localized in space, time and frequency in one step, and the solver solves a convex optimization problem.
Step1: Run solver
Step2: Plot dipole activations
Step3: Show the evoked response and the residual for gradiometers
Step4: Generate stc from dipoles
Step5: View in 2D and 3D ("glass" brain like 3D plot) | Python Code:
# Author: Alexandre Gramfort <[email protected]>
# Daniel Strohmeier <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left visual'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked = mne.pick_channels_evoked(evoked)
# We make the window slightly larger than what you'll eventually be interested
# in ([-0.05, 0.3]) to avoid edge effects.
evoked.crop(tmin=-0.1, tmax=0.4)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
Explanation: Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques)
[1] [2]. The benefit of this approach is that:
- it is spatio-temporal without assuming stationarity (source properties can vary over time)
- activations are localized in space, time and frequency in one step.
- with a built-in filtering process based on a short-time Fourier transform (STFT), data does not need to be low passed (just high passed to make the signals zero mean).
- the solver solves a convex optimization problem, hence cannot be trapped in local minima.
References
.. [1] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski
"Time-Frequency Mixed-Norm Estimates: Sparse M/EEG imaging with
non-stationary source activations",
Neuroimage, Volume 70, pp. 410-422, 15 April 2013.
DOI: 10.1016/j.neuroimage.2012.12.051
.. [2] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski
"Functional Brain Imaging with M/EEG Using Structured Sparsity in
Time-Frequency Dictionaries",
Proceedings Information Processing in Medical Imaging
Lecture Notes in Computer Science, Volume 6801/2011, pp. 600-611, 2011.
DOI: 10.1007/978-3-642-22092-0_49
End of explanation
# alpha parameter is between 0 and 100 (100 gives 0 active source)
alpha = 40. # general regularization parameter
# l1_ratio parameter between 0 and 1 promotes temporal smoothness
# (0 means no temporal regularization)
l1_ratio = 0.03 # temporal regularization parameter
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=loose, depth=depth)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, loose=loose,
depth=depth, maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8.,
debias=True, wsize=16, tstep=4, window=0.05, return_as_dipoles=True,
return_residual=True)
# Crop to remove edges
for dip in dipoles:
dip.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
Explanation: Run solver
End of explanation
plot_dipole_amplitudes(dipoles)
# Plot dipole location of the strongest dipole with MRI slices
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# # Plot dipole locations of all dipoles with MRI slices
# for dip in dipoles:
# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
# subjects_dir=subjects_dir, mode='orthoview',
# idx='amplitude')
Explanation: Plot dipole activations
End of explanation
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
Explanation: Show the evoked response and the residual for gradiometers
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
Explanation: Generate stc from dipoles
End of explanation
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="TF-MxNE (cond %s)"
% condition, modes=['sphere'], scale_factors=[1.])
time_label = 'TF-MxNE time=%0.2f ms'
clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])
brain = stc.plot('sample', 'inflated', 'rh', views='medial',
clim=clim, time_label=time_label, smoothing_steps=5,
subjects_dir=subjects_dir, initial_time=150, time_unit='ms')
brain.add_label("V1", color="yellow", scalar_thresh=.5, borders=True)
brain.add_label("V2", color="red", scalar_thresh=.5, borders=True)
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation |
2,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Speech Recognition using Graphs
Team members
Step1: Recompute
WARNING If you set recompute to True this will re-extract all features, which will take approximately a day, so we do not recommend it. It is here for completeness so you can see how this step was done during the project. We've already computed it and saved the results into a pickle file. Our entire used data, as well as the pickle files, can be found here.<br>
Step2: Feature Extraction
The dataset taken from the Kaggle competition was initially separated into 2 sets: a training set containing labeled words from different speakers with various background noises, and a test set containing unlabeled data, which will not be used in this project as it does not allow the calculation of the accuracy. In the following we will only use the original training set as our main dataset. The words are provided in the form of a .wav sound file of 1 second. <br>
Feature Extraction Pipeline
Step3: Pipeline for a small number of audio files
Step4: After selecting 2 words we normalize their values to their maximum.
Step7: Next we define two auxiliary functions that allow us to select the main lobes of the signal and to keep only those lobes.
Step8: For the selection of the lobes we use the RMSE transformation. We next display the shape of those signals after this transformation
Step9: Next we apply our auxiliary function cut_signal to the 2 audio samples. As we can see, it efficiently removes the silence surrounding the main lobes.
Step10: From the cut audio file we now want to compute our features, the Mel-Frequency Cepstral Coefficients (MFCCs). For this, no matter the length of the audio file, we compute 20 MFCC vectors of dimension 10. This means we compute a short-time Fourier transform at 20 equidistant time points inside the cut audio file and keep the lower 10 MFCCs of the spectrum. Since the audio files are of different length after the cutting, we adjust the hop length (length between two short-time Fourier analyses) for every audio file accordingly. This makes the resulting feature vectors comparable and adds a "time warping" effect which should make the features more robust to slower/faster spoken words.
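A rough librosa sketch of the feature computation described in Step10 (illustrative only; the variable signal_cut and the exact parameter values are assumptions, the project's actual implementation lives in cut_audio.py and main_pipeline.py):
# Illustrative MFCC extraction: ~20 frames of 10 coefficients per cut audio file
import numpy as np
import librosa
n_frames, n_mfcc, sr = 20, 10, 16000
hop = max(1, int(np.ceil(len(signal_cut) / n_frames)))          # hop length adapted to the cut length
mfcc = librosa.feature.mfcc(y=signal_cut, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
feature_vec = mfcc[:, :n_frames].flatten()                      # fixed-size feature vector (10 x 20)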
Step11: As we have already computed the features for the whole dataset, we load them directly from the following pickle file.
Step12: Classification Methods
Now that we have extracted some meaningful features from the raw data, we want to build a model that uses some training set $S_t$ of cardinality $|S_t|=N$, which can be used to classify the rest of the data (validation set). We've mainly analyzed two methods, "Spectral Clustering" and "Semi-Supervised Clustering", which we will describe in detail in this section.<br>
<br>
We found using a training set of cardinality $|S_t| = N = 4800$ and a validation batch size of $|\mathbb{v}|= K = 200$ to be both computationally reasonable and to yield good results. This means that we use the same $N$ datapoints (feature vectors of audio files), of which we know the labels, to classify all other audio files (validation set), of which we pretend not to know the labels. The classification of the validation set (size $V$) is done batch-wise, i.e. $K$ files are classified simultaneously and we iterate through the entire validation set, i.e. $V/K$ iterations are performed. Which $N$ datapoints are chosen to form the training set is determined randomly, with the restriction that every word (or class) is represented equally. Thus, for $N = 4800$ we choose $160$ audio samples of every one of the 30 classes/words at random.<br>
<br>
In this section we will classify one batch of size $K = 200$ and while doing that explain both classification methods using graphs in detail.
Create training set, validation set and Data Matrix
In a first step, we create the label vector $\mathbf{y}\in{1,2,3,...,30}^{64'720}$ for all datapoints. In addition we plot the labels and the distribution of the classes.
Step13: In the above histogram we can see that the classes are not balanced inside the test set. However, for our testing we will choose a balanced training set, as well as a balanced validation set. This corresponds to having an equal prior probability of occurrence between the different words we want to classify. Thus, in the next cell we choose at random $160$ datapoints per class to form our training set $S_t$ ($30\cdot 160=4800$) and $1553$ datapoints per class to form the validation set $S_v$ ($1553\cdot 30 = 46590$), which is the maximum amount of datapoints we can put into the validation set for it to still be balanced.
Step14: We will define the batch size, which defines how many validation samples are classified simultaneously. Then we choose at random 200 datapoints of the validation set $S_v$ to build said batch. Remark
Step15: Now we build our feature matrix $\mathbf{X}^{(N+K)\times D}$ by concatenating the feature vectors of all datapoints inside the training set $S_t$ and the batch datapoints. The features are then normalized by subtracting their mean, as well as dividing by the standard deviation. The feature normalization step was found to have a very significant effect on the resulting classification accuracy.
Step16: Build Graph from Data Matrix
We now want to build a graph from the earlier obtained data matrix. Every node in our graph will correspond to one datapoint (feature vector of one audio file). We use a weighted, undirected graph. The weight is very important for our application, since it gives us a measure of how similar the feature vectors of two datapoints are. The undirectedness is a logical consequence of our edges being similarity measures, which are inherently undirected.<br>
<br>
To build the weight matrix $W\in \mathbb{R}^{(N+K)\times (N+K)}$ we compute the cosine distance between each pair of datapoints $\mathbf{x_i},\mathbf{x_j}$ in the data matrix $\mathbf{X}$, which is defined as
$$d(\mathbf{x_i},\mathbf{x_j}) = \frac{\mathbf{x_i}^T\mathbf{x_j}}{||\mathbf{x_i}||_2||\mathbf{x_j}||_2}$$
and then build a similarity graph using
$$\mathbf{W_{i,j}} = \exp\left(\frac{-d(\mathbf{x_i},\mathbf{x_j})^2}{\sigma^2}\right).$$
Other distance functions were tested, but the cosine distance was found to be the most effective. We used the mean overall distance as $\sigma$.
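A minimal sketch of this graph construction (illustrative; the feature matrix name X_feat is an assumption and the project's actual implementation in main_pipeline.py may differ in detail):
# Illustrative construction of the similarity graph described above
import numpy as np
from scipy.spatial.distance import pdist, squareform
dist = squareform(pdist(X_feat, metric='cosine'))   # pairwise cosine distances, shape (N+K, N+K)
sigma = dist.mean()                                 # mean overall distance as kernel width
W = np.exp(-dist**2 / sigma**2)                     # Gaussian similarity weights
np.fill_diagonal(W, 0)                              # no self-loops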
Step17: We can already see that there is a distinct square pattern inside the weight matrix. This points to a clustering inside a graph, achieved by good feature extraction (rows and columns are sorted by labels, except last 200). At this point we are ready to present the first classification method that was analyzed
Step18: We can now calculate the eigenvectors of the Laplacian matrix. These eigenvectors will be used as feature vectors for our classifier.
Step19: In a next step we split the eigenvectors of the graph into two parts, one containing the nodes representing the training datapoints, one containing the nodes representing the validation datapoints.
Step20: A wide range of classifiers were tested on our input features. Remarkably, a very simple classifier such as the Gaussian Naive Bayes classifier produced far better results than more advanced techniques. This is mainly because the graph datapoints were generated using a gaussian kernel, and is therefore sensible to assume that our feature distribution will be gaussian as well. However, the best results were obtained using a Quadratic Discriminant Analysis classifier.
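A rough sketch of the three steps above, from Laplacian eigenvectors to the classifier (illustrative; the number of kept eigenvectors and the variable names L_graph and y_train are assumptions):
# Illustrative spectral-clustering classification step
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
n_eig = 60                                              # number of eigenvectors kept (assumption)
eigvals, eigvecs = eigsh(L_graph, k=n_eig, which='SM')  # eigenvectors of the graph Laplacian
feat_train, feat_val = eigvecs[:N], eigvecs[N:]         # first N rows correspond to training nodes
clf = QuadraticDiscriminantAnalysis().fit(feat_train, y_train)
y_val_pred = clf.predict(feat_val)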
Step21: Once our test set has been classified we can visualize the effectiveness of our classification using a confusion matrix.
Step22: Finally we can focus on the core words that need to be classified and label the rest as 'unknown'.
Step23: In conclusion, we can say that, using spectral clustering, we were able to leverage the properties of graph theory to find relevant features in speech recognition. However, the accuracy achieved with our model is far too low for any practical application. Moreover, this model does not benefit from sparsity, meaning that it will not be able to scale with large datasets.
Semi-Supervised classification
Now that we have seen the spectral clustering method, we want to present the semi-supervised classification method. For this we start by using the same training set $S_t$, validation set $S_v$, batch and Laplacian, as we used to explain the spectral clustering method.<br>
<br>
Unlike for spectral clustering, for this method sparsifying the graph is very important. We noticed a significant increase in classification accuracy using quite sparse graphs. Thus we now sparsify the graph, to obtain a more significant clustering. We use a k-nearest-neighbors approach for this. For the purpose of explaining the method we will keep the $120$ strongest neighbors of each node.
Step24: We can see that the sparsified weight matrix is very focused on its diagonal. We will now build the normalized Laplacian, since it is the core graph feature we will use for semi-supervised classification. The normalized Laplacian is defined as
$$L = \mathbf{I}-\mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2},$$
where $\mathbf{I}$ is the $(N+K)\times (N+K)$ identity matrix and $\mathbf{D}\in \mathbb{N}^{(N+K)\times (N+K)}$ is the degree matrix of the graph.
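A minimal sketch of this computation, assuming a dense weight matrix W as built above (illustrative only):
# Illustrative computation of the normalized graph Laplacian
import numpy as np
degrees = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(degrees, 1e-12)))  # guard against isolated nodes
L_norm = np.eye(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt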
Step25: For the semi-supervised classification approach, we now want to transform the label vector of our training data $\mathbf{y_t} \in {1,2,...,30}^{N}$ into a matrix $\mathbf{Y_t}\in {0,1}^{30\times N}$. Each row $i$ of the matrix $\mathbf{Y_t}$ contains an indicator vector $\mathbf{y_{t,i}}\in{0,1}^N$ for class $i$, i.e. a vector which specifies for each training node in the graph whether it belongs to class $i$ or not.
Step26: In the next cell we extend our label matrix $\mathbf{Y_t}$, such that there are labels (not known yet) for the validation datapoints we want to classify. Thus we extend the rows of $\mathbf{Y}$ by $K$ zeros, since the last $K$ nodes in the weight matrix of the used graph correspond to the validation points. We also create the masking matrix $\mathbf{M}\in{0,1}^{30\times (N+K)}$, which specifies which of the entries in $\mathbf{Y}$ are known (training) and which are unknown (validation).
Step28: Now comes the main part of semi-supervised classification. The method relies on the fact that we have a clustered graph, which gives us similarity measures between all the considered datapoints. The above mentioned class indicator vectors $\mathbf{y_i}$ (rows of $\mathbf{Y}$) are considered to be smooth signals on the graph, which is why achieving a clustered graph with good feature extraction was important.<br>
<br>
We try to fill in the gaps left in the label vector $\mathbf{y}$, i.e. estimating a $\mathbf{\hat{y}}\in {1,2,...,30}$, which should ideally be equal to the original label vector, i.e. containing the correctly classified labels for the validation datapoints. To achieve this we try to learn indicator vectors $\mathbf{\hat{y_i}} \in \mathbb{R}^{N+K}$ for each class $i$, which also contain labels for the validation points (unlike the afore mentioned $\mathbf{y_i}$). The higher the value $\mathbf{\hat{y_{i,j}}}$, the higher the probability that node $j$ belongs to class $i$. For this purpose, we solve the following optimization problem for each of the 30 classes, specified by $i\in {1,2,...,30}$.
$$ \underset{\mathbf{\hat{y_i}} \in \mathbb{R}^{N}}{argmin} \quad \frac{1}{2}||\mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}})||^2_2 + \frac{\alpha}{2} \mathbf{\hat{y_i}}^T\mathbf{L}\mathbf{\hat{y_i}} + \frac{\beta}{2}||\mathbf{\hat{y_i}}||_2^2$$
The matrix $\mathbf{M_i}$ is defined as the diagonal matrix containing the $i^{th}$ row of $\mathbf{M}$ on its diagonal.
The first term of the above depicted cost function is the fidelity term, which makes sure that the estimated vector $\mathbf{\hat{y_i}}$ is sufficiently close to the known entries of $\mathbf{y_i}$ (i.e. the labels of the training data points). The second term makes sure that the learned vector $\mathbf{\hat{y_i}}$ is smooth on the graph. The last term is there to make sure that we solve for a low-energy vector and to avoid that the optimization problem is ill-posed. The two factors $\alpha, \beta >0$ are hyperparameters which give weight to their respective term or criterion.<br>
<br>
For the above described optimization problem we can find an explicit solution. For this, we first compute the gradient of the cost function with respect to $\mathbf{\hat{y_i}}$.
$$\nabla f(\mathbf{\hat{y_i}}) = -\mathbf{M_i}^T\mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}}) + \frac{\alpha}{2} (\mathbf{L}^T +\mathbf{L})\mathbf{\hat{y_i}} + \beta \mathbf{\hat{y_i}}$$
Using the fact that $\mathbf{M_i}$ is a diagonal, symmetric matrix containing only '1' and '0', as well as the fact that $\mathbf{L}$ is symmetric, we can simplify $\nabla \mathbf{f}$ to
$$\nabla f(\mathbf{\hat{y_i}}) = -\mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}}) + \alpha \mathbf{L} \mathbf{\hat{y_i}} + \beta \mathbf{\hat{y_i}}.$$
To find the solution $\mathbf{\hat{y_i}^*}$ to the optimization problem we set the gradient to 0 to obtain
$$\nabla f(\mathbf{\hat{y_i}^*}) = 0 = \mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}^*}) - \alpha \mathbf{L} \mathbf{\hat{y_i}^*} - \beta \mathbf{\hat{y_i}^*},$$
and thus
$$\mathbf{M_i y_i} = (\mathbf{M_i}+\alpha \mathbf{L} + \beta \mathbf{I}_{(N+K)(N+K)}) \mathbf{\hat{y_i}^*}.$$
$\mathbf{I}_{(N+K)(N+K)}$ is the identity matrix of size $(N+K) \times (N+K)$. Introducing $\mathbf{y_{i,compr}} = \mathbf{M_i y_i}$ we can write
$$\mathbf{y_{i,compr}} = (\mathbf{M_i}+\alpha \mathbf{L} + \beta \mathbf{I}_{(N+K)(N+K)}) \mathbf{\hat{y_i}^*}.$$
We define the matrix $\mathbf{A} = (\mathbf{M_i}+\alpha \mathbf{L} + \beta \mathbf{I}_{(N+K)(N+K)})$ and now want to analyse its invertibility.<br>
<br>
We know that the Laplacian $\mathbf{L}$ is positive semi-definite (PSD), which means that all its eigenvalues are $\geq 0$. $M_i$ simply adds '1' to some of these eigenvalues, unfortunately not to all of them, and thus it is not a sufficient criterion to render $\mathbf{A}$ full-rank and thus invertible. For this purpose we introduce the $l_2$-prior which adds $\beta >0$ to each eigenvalue, which makes $\mathbf{A}$ positive definite and thus invertible. I.e. by controlling $\beta$ our problem is well-posed and a unique solution $\mathbf{\hat{y_i}^*}$ can be found.
$$\mathbf{\hat{y_i}^*} = \mathbf{A^{-1}}\mathbf{y_{i,compr}}$$
Having found a $\mathbf{\hat{y_i}^*}$ for every class $i$, we then build a matrix $\mathbf{\hat{Y}}\in \mathbb{R}^{30\times (N+K)}$ containing the learned vectors $\mathbf{\hat{y_i}^*}$ as its rows. The final label vector $\mathbf{y_{pred}}\in {1,2,...,30}^{N+K}$ is obtained by finding, for each column $j$ of $\mathbf{\hat{Y}}$, the row $i$ with the maximal value; this index $i$ is the predicted class of the datapoint (node) corresponding to column $j$.
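A compact sketch of this closed-form solve (illustrative; L_norm, Y and the mask M_mask are assumed to be available as described above, and the values of alpha and beta are placeholders, not the project's tuned hyperparameters):
# Illustrative semi-supervised label propagation via the closed-form solution above
import numpy as np
alpha, beta = 1.0, 1e-3                      # placeholder hyperparameters
n_nodes = L_norm.shape[0]                    # N + K
Y_hat = np.zeros_like(Y, dtype=float)
for i in range(Y.shape[0]):                  # one solve per class
    M_i = np.diag(M_mask[i].astype(float))
    A = M_i + alpha * L_norm + beta * np.eye(n_nodes)
    Y_hat[i] = np.linalg.solve(A, M_i @ Y[i])
y_pred = np.argmax(Y_hat, axis=0) + 1        # classes numbered 1..30 as in the text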
Step29: Method Validation
In the previous section we introduced two ways of doing speech classification using graphs. For this purpose one single validation batch of size 200 was classified. To have a better idea of how well the two methods classify data, we will now perform 100 iterations of the above explained code (from feature extraction to prediction). The used code can be found in its entirety in main_pipeline.py. This means that for every iteration an entirely new training set and validation set is created, such that the variance due to good or bad training sets is included in the obtained results.<br>
<br>
In the boxplot shown below we can see the resulting mean accuracy, as well as the variance for both classification methods. | Python Code:
import os
from os.path import isdir, join
from pathlib import Path
import pandas as pd
from tqdm import tqdm
# Math
import numpy as np
import scipy.stats
from scipy.fftpack import fft
from scipy import signal
from scipy.io import wavfile
import librosa
import librosa.display
from scipy import sparse, stats, spatial
import scipy.sparse.linalg
# Machine learning
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
import IPython.display as ipd
# Self_made functions
from main_pipeline import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (17, 5)
Explanation: Speech Recognition using Graphs
Team members: Adrian Löwenstein, Kiran Bacsa, Manuel Vonlanthen<br>
Date: 22.01.2018
Important Prior Information
To be able to rerun this notebook the function files "cut_audio.py" and "main_pipeline.py" (provided in the uploaded .zip file) have to be located inside the working path. In addition, the entire used data set, as well as a pickle file containing all extracted features from it, can be downloaded here. For recomputation purposes the pickle file has to be located in a folder called "Features Data", which in turn has to be located in the working path. The dataset containing all audio files (circa 2 GB) has to be located at ..\Data.
Problem Description
In this section we will describe in detail the problem we studied during the final project of the course "Network Tour of Data Science". We wanted to do speech recognition using the Graph/Network theory learned during the course. We were inspired by the kaggle competition "Tensor Flow Speech Recognition Challenge: Can you build an algorithm that understands simple speech commands" put up by Google. In said competition the goal was to classify 20 distinct words. Words that do not belong to any of the 20 classes should be classified as "unknown". For the kaggle competition the TensorFlow library had to be used. For this purpose Google provided a large training data set (64'720 audio files) with known labels and an even larger test set (150'000+ audio files) with unknown labels for them to evaluate the built algorithms. We, however, decided to only work with the provided training data, because the data set is large enough to perform statistically valid model evaluation and in this way we weren't dependent on the kaggle competition.<br>
<br>
The provided data set consists of 64'720 .wav files of length 1s, sampled with a sampling rate of $16$ kHz. Each audio file contains one of 30 possible spoken words. The files were created using crowd-sourcing, which means that the conditioning of the audio signal is not equal for all audio files. This led to very different noise levels, amplitudes, etc. Also the same speaker might have recorded different audio files. The 20 core words which have to be classified correctly are:
- up, down
- zero, one, two, three, four, five ,six ,seven, eight, nine
- go, stop
- left, right
- no, yes
- off, on
In addition to these 20 words, 10 other words were provided inside the training set to train the algorithm to classify words which it should not react to as "unknown". The following "unknown" words are contained in the training data:
- bed
- bird, cat, dog, mouse
- tree
- happy
- marvin, sheila
- wow
<br>
At this point we want to emphasize that we did not study the problem suggested for the kaggle competition, but a slightly simplified one. First of all we did not restrict ourselves to using TensorFlow, in fact we did not use it at all. However, we did restrict ourselves to use Graph theory as the central part of our classification algorithm. Our goal was, using only the provided training set, to build a classifier which classifies the above listed core words as accurately as possible. In addition, all other words (the 10 additional words) should be classified as "unknown". This means we wanted to build a "word"-classifier for 21 different classes using graph theory.<br>
<br>
Mathematically, we can define the task as follows. We split up our training data set into a training set $S_t$ of some size N (we will later comment on its size) and a validation set $S_v$ of size $V \leq 64'720-N$, used to check how well the classifier works.
Using $\mathbf{x_n} \in S_t$, which is some training audio file $\mathbf{x_n} \in \mathbb{R}^D$, where $D = 1s\cdot 16kHz = 16000$ is the number of samples per audio file, we can build our training data matrix $\mathbf{X}^{\mathbf{N\times D}}$. Using $\mathbf{X}$ we want to learn a function $f(\mathbf{v_v}, \mathbf{X}): \mathbb{R}^{(N+1)\times D} \to {1,2,3,...,21}$, where $\mathbf{v_v}\in S_v$ is a validation audio file $\mathbf{v_v}\in\mathbb{R}^D$, such that the resulting estimated label $\hat{y_v} = f(\mathbf{v_v}, \mathbf{X})$ is equal to the correct label $y_v \in {1,2,3,...,21}$ for as many validation samples as possible. Hence, we use the accuracy measure defined as
$$acc = \frac{1}{K}\sum_{k=1}^{K}\left(1-\min\left(|y_k-\hat{y}_k|,\,1\right)\right),$$
where $K$ is the number of tested samples $v_k$. We want to remark that the model could also work with a subset $\mathbb{v}\subseteq S_v$ (batch) instead of a single validation file $v_v$. In this case we define the cardinality of said subset $|\mathbb{v}|=K$ and the model would correspond to $f(\mathbb{v}, \mathbf{X}): \mathbb{R}^{(N+K)\times D} \to {1,2,3,...,21}^K$.
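In code, this accuracy measure reduces to a very small computation; the sketch below is only illustrative, with y_true and y_pred standing for hypothetical arrays of true and predicted class labels:
# Illustrative sketch of the accuracy measure defined above (hypothetical labels)
import numpy as np
y_true = np.array([3, 7, 21, 5])   # assumed ground-truth labels
y_pred = np.array([3, 7, 20, 5])   # assumed predictions
acc = np.mean(y_true == y_pred)    # fraction of correctly classified samples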
Imports
End of explanation
recompute = False
Explanation: Recompute
WARNING: If you set recompute to True this will re-extract all features, which will take approximately a day, so we do not recommend it. It is here for completeness so you can see how this step was done during the project. We've already computed it and saved the results into a pickle file. All the data we used, as well as the pickle files, can be found here.<br>
End of explanation
# Conditional recomputing : (WARNING : THIS TAKES MORE THAN 24H)
if recompute == True :
# Extracts and cuts the audio files from the folder to store it inside a set of pickles
main_train_audio_extraction()
# Computes the features from the previously extracted audio files. Save them into a single pickle.
main_train_audio_features()
Explanation: Feature Extraction
The dataset taken from the Kaggle competition was initially separated into 2 sets: a training set containing labeled spoken words from different speakers with various background noises, and a test set containing unlabeled data, which will not be used in this project as it does not allow the calculation of the accuracy. In the following we only use the original training set as our main dataset. Each word is provided in the form of a 1-second .wav sound file. <br>
Feature Extraction Pipeline :
1. We first analyse the dataset and save, for each word, the path, the label and the id of the speaker into a dataframe.
2. We then import the audio content using Librosa and store it into this dataframe.
3. Next we proceed to the cleaning of the data by cutting the silences before and after the words are pronounced.
4. Finally we compute the main features: the MFCCs, which are then stored into a pickle.
As for the features chosen, we have kept only the MFCCs of the cut audio files. Other features we considered were the MFCCs of the whole audio file (instead of the cut version) and statistics computed on the MFCCs.
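As a rough orientation, the per-file processing hidden inside main_train_audio_extraction and main_train_audio_features boils down to the following sketch; the function body below is an assumed outline only (cut_signal is defined further down in this notebook, and the exact parameters we actually used are shown in the MFCC cell below):
# Assumed outline of the per-file feature pipeline (not the verbatim project code)
def extract_features_for_file(filepath, n_mfcc=10, n_frames=20):
    audio, sr = librosa.load(filepath, sr=None, mono=True)    # import the audio content
    audio = audio / np.max(np.abs(audio))                     # normalise the amplitude
    audio_cut = cut_signal(audio)                             # remove leading/trailing silence
    hop = int(np.floor(len(audio_cut) / n_frames))            # fixed number of MFCC frames
    mfccs = librosa.feature.mfcc(y=audio_cut, sr=sr, n_mfcc=n_mfcc,
                                 n_fft=1024, hop_length=hop)
    return mfccs[:, :n_frames]                                # n_mfcc x n_frames feature block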
Recomputing the whole Feature Extraction and Computation :
End of explanation
N = 2
train_audio_path = '../Data/train/audio'
dirs = [f for f in os.listdir(train_audio_path) if isdir(join(train_audio_path, f))]
dirs.sort()
path = []
word = []
speaker = []
iteration = []
for direct in dirs:
if not direct.startswith('_'):
# Random selection of N files per folder
list_files = os.listdir(join(train_audio_path, direct))
wave_selected = list(np.random.choice([ f for f in list_files if f.endswith('.wav')],N,replace=False))
# Extraction of file informations for dataframe
word.extend(list(np.repeat(direct,N,axis=0)))
speaker.extend([wave_selected[f].split('.')[0].split('_')[0] for f in range(N) ])
iteration.extend([wave_selected[f].split('.')[0].split('_')[-1] for f in range(N) ])
path.extend([train_audio_path + '/' + direct + '/' + wave_selected[f] for f in range(N)])
# Creation of the Main Dataframe :
features_og = pd.DataFrame({('info','word',''): word,
('info','speaker',''): speaker,
('info','iteration',''): iteration,
('info','path',''): path})
index_og = [('info','word',''),('info','speaker',''),('info','iteration','')]
features_og.head()
Explanation: Pipeline for a small number of audio files :
End of explanation
word_1 = 1
word_2 = 59
def get_audio(filepath):
audio, sampling_rate = librosa.load(filepath, sr=None, mono=True)
return audio, sampling_rate
audio_1, sampling_rate_1 = get_audio(features_og[('info','path')].iloc[word_1])
audio_2, sampling_rate_2 = get_audio(features_og[('info','path')].iloc[word_2])
# normalize audio signals
audio_1 = audio_1/np.max(audio_1)
audio_2 = audio_2/np.max(audio_2)
# Look at the signal in the time domain
plt.plot(audio_1)
plt.hold
plt.plot(audio_2)
# Listen to the first word
ipd.Audio(data=audio_1, rate=sampling_rate_1)
# Listen to the first word
ipd.Audio(data=audio_2, rate=sampling_rate_1)
Explanation: After selecting 2 words we normalize their values to their maximum.
End of explanation
def find_lobes(Thresh, audio, shift = int(2048/16)):
    """Finds all energy lobes in an audio signal and returns their start and end indices.
    The parameter Thresh defines the sensitivity of the algorithm."""
# Compute rmse
audio = audio/np.max(audio)
rmse_audio = librosa.feature.rmse(audio, hop_length = 1, frame_length=int(shift*2)).reshape(-1,)
rmse_audio -= np.min(rmse_audio)
rmse_audio /= np.max(rmse_audio)
i_start = np.array([])
i_end = np.array([])
for i in range(len(rmse_audio)-1):
if (int(rmse_audio[i]>Thresh)-int(rmse_audio[i+1]>Thresh)) == -1:
i_start = np.append(i_start,i)
elif (int(rmse_audio[i]>Thresh)-int(rmse_audio[i+1]>Thresh)) == 1:
i_end = np.append(i_end,i)
if len(i_start) == 0:
i_start = np.append(i_start,0)
if len(i_end) == 0:
i_end = np.append(i_end,i)
if i_start[0]>i_end[0]:
i_start = np.append(np.array(0), i_start)
if i_start[-1]>i_end[-1]:
i_end = np.append(i_end,i)
return i_start, i_end, rmse_audio, shift
def cut_signal( audio, Thresh = 0.1, mode = 'proxy',reach = 2000, number_lobes = 2):
    """Extracts relevant parts of an audio signal.
    The Thresh input value defines the sensitivity of the cut; its value has to be positive.
    Two modes can be chosen:
    - proxy (default): finds the main energy lobe of the signal and also adds lobes that are within reach.
      The reach parameter can be adjusted and has to be a positive value (default is 2000).
    - num_lobes: finds the highest-energy lobes of the signal. The parameter number_lobes (default value 2)
      defines how many of the largest lobes are being considered."""
i_start, i_end, rmse_audio, shift = find_lobes(Thresh, audio)
energy = np.array([])
for i in range(len(i_start)):
energy = np.append(energy,sum(rmse_audio[int(i_start[i]):int(i_end[i])]))
    if mode == 'num_lobes':
lobes = np.argsort(energy)[-number_lobes:]
start = np.min(i_start[lobes])
end = np.max(i_end[lobes])
    elif mode == 'proxy':
main_lobe = np.argsort(energy)[-1]
start = i_start[main_lobe]
end = i_end[main_lobe]
for i in range(main_lobe):
if (i_start[main_lobe]-i_end[i])<reach:
start = np.min((i_start[i],start))
for i in range(main_lobe,len(i_start)):
if (i_start[i]-i_end[main_lobe])<reach:
end = i_end[i]
else:
print('ERROR: mode not implemented.')
audio_cut = audio[int(np.max((0,int(start-shift-300)))):int(np.min((int(end)+300,len(audio))))]
return audio_cut
Explanation: Next we define two auxiliary functions that allow us to find the main energy lobes of the signal and to keep only those lobes.
End of explanation
rmse_audio_1 = librosa.feature.rmse(audio_1, hop_length = 1, frame_length=int(2048/8)).reshape(-1,)
rmse_audio_1 -= np.min(rmse_audio_1)
rmse_audio_1 /= np.max(rmse_audio_1)
plt.plot(rmse_audio_1)
plt.grid()
plt.title('RMSE of Audio signal')
plt.xlabel('mffc sample')
plt.ylabel('rmse')
plt.hold
rmse_audio_2 = librosa.feature.rmse(audio_2, hop_length = 1, frame_length=int(2048/8)).reshape(-1,)
rmse_audio_2 -= np.min(rmse_audio_2)
rmse_audio_2 /= np.max(rmse_audio_2)
plt.plot(rmse_audio_2)
Explanation: For the selection of the lobes we use the RMSE transformation. Next we display the two signals after this transformation:
End of explanation
# Cutting above the threshold and keeping the main lobes :
audio_1_cut = cut_signal(audio_1)
audio_2_cut = cut_signal(audio_2)
# Display cut time signal
plt.plot(audio_1_cut)
plt.hold
plt.plot(audio_2_cut)
print('Cut Version 1 :')
ipd.Audio(data=audio_1_cut, rate=sampling_rate_1)
print('Cut Version 2 :')
ipd.Audio(data=audio_2_cut, rate=sampling_rate_2)
Explanation: Next we apply our auxiliary function cut_signal to the 2 audio samples. As we can see, it efficiently removes the silence surrounding the main lobes.
End of explanation
N_MFCCS = 10
#n_fft, hop_length
mfccs_1 = librosa.feature.mfcc(y=audio_1_cut,sr=sampling_rate_1, n_mfcc=N_MFCCS, n_fft = int(2048/2), hop_length = int(np.floor(len(audio_1_cut)/20)))
mfccs_2 = librosa.feature.mfcc(y=audio_2_cut,sr=sampling_rate_2, n_mfcc=N_MFCCS, n_fft = int(2048/2), hop_length = int(np.floor(len(audio_2_cut)/20)))
mfccs_1 = mfccs_1[:,:-1]
mfccs_2 = mfccs_2[:,:-1]
print(np.shape(mfccs_1))
print(np.shape(mfccs_2))
plt.figure(figsize=(10, 4))
librosa.display.specshow(mfccs_1, x_axis='time')
plt.colorbar()
plt.title('MFCC 1st Word')
plt.tight_layout()
plt.figure(figsize=(10, 4))
librosa.display.specshow(mfccs_2, x_axis='time')
plt.colorbar()
plt.title('MFCC 2nd Word')
plt.tight_layout()
Explanation: From the cut audio files we now want to compute our features, the Mel-Frequency Cepstral Coefficients (MFCCs). No matter the length of the audio file, we compute 20 MFCC vectors of dimension 10. This means we compute a short-time Fourier transform at 20 equidistant time points inside the cut audio file and keep the lower 10 MFCCs of the spectrum. Since the audio files are of different lengths after the cutting, we adjust the hop length (the distance between two short-time Fourier analyses) for every audio file accordingly. This makes the resulting feature vectors comparable and adds a "time warping" effect which should make the features more robust to slower/faster spoken words.
End of explanation
# Load features
features_og = pd.read_pickle('./Features Data/cut_mfccs_all_raw_10_1028_20.pickle')
features_og.head()
Explanation: As we have already computed the features for the whole dataset, we load them directly from the following pickle file.
End of explanation
# Build Label vector
# Define class name vector, the index will correspond to the class label
class_names = features_og['info']['word'].unique()
y = np.ones(len(features_og))
for i in range(0,len(class_names)):
y +=(features_og['info','word'] == class_names[i]) * i
# Plot the label vector
print('We have {} datapoints over the entire dataset.'.format(len(y)))
fix, axes = plt.subplots(1, 2, figsize=(17, 5))
axes[0].plot(y)
axes[0].grid()
axes[0].set_xlabel('datapoint n')
axes[0].set_ylabel('label yn')
# Plot distribution of classe
axes[1].hist(y,30)
axes[1].set_xlabel('class')
axes[1].set_ylabel('number of datapoints')
Explanation: Classification Methods
Now that we have extracted some meaningful features from the raw data, we want to build a model that uses some training set $S_t$ of cardinality $|S_t|=N$, which can be used to classify the rest of the data (validation set). We've mainly analyzed two methods, "Spectral Clustering" and "Semi-Supervised Clustering", which we will describe in detail in this section.<br>
<br>
We found a training set of cardinality $|S_t| = N = 4800$ and a validation batch size of $|\mathbb{v}|= K = 200$ to be both computationally reasonable and to yield good results. This means that we use the same $N$ datapoints (feature vectors of audio files), of which we know the labels, to classify all other audio files (the validation set), of which we pretend not to know the labels. The classification of the validation set (size $V$) is done batch-wise, i.e. $K$ files are classified simultaneously and we iterate through the entire validation set, i.e. $V/K$ iterations are performed. Which $N$ datapoints are chosen to form the training set is determined randomly, with the restriction that every word (or class) is represented equally. Thus, for $N = 4800$ we choose $160$ audio samples of each of the 30 classes/words at random.<br>
<br>
In this section we will classify one batch of size $K = 200$ and while doing that explain both classification methods using graphs in detail.
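For orientation, the batch-wise evaluation described above has roughly the following shape; this is a simplified sketch only, it relies on the train/validation variables constructed in the next cells, and classify_batch is a dummy stand-in for either of the two methods presented later:
# Simplified sketch of the batch-wise evaluation loop (illustrative only)
def classify_batch(train_idx, train_labels, batch_idx):
    return np.zeros(len(batch_idx))        # dummy stand-in for the real classifiers below
n_correct, n_total = 0, 0
for start in range(0, len(valid_x), batch_size):
    batch_idx = valid_x[start:start + batch_size]              # K validation files
    y_hat = classify_batch(train_x, train_y, batch_idx)
    n_correct += np.sum(y_hat == y[batch_idx])
    n_total += len(batch_idx)
print('overall accuracy:', n_correct / n_total)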
Create training set, validation set and Data Matrix
In a first step, we create the label vector $\mathbf{y}\in\{1,2,3,...,30\}^{64'720}$ for all datapoints. In addition we plot the labels and the distribution of the classes.
End of explanation
# Specify the number of datapoints that should be sampled in each class to build training and validation set
train_size = 160
valid_size = 1553
train_x = np.array([])
train_y = np.array([])
valid_x = np.array([])
valid_y = np.array([])
for i in range(len(class_names)):
class_index = np.where(y == (i+1))[0]
random_index = np.random.choice(range(len(class_index)), size=train_size+valid_size, replace=False)
train_x_class = class_index[random_index[:train_size]]
train_y_class = y[train_x_class]
train_x = np.append(train_x, train_x_class).astype(int)
train_y = np.append(train_y, train_y_class).astype(int)
valid_x_class = class_index[random_index[train_size:train_size+valid_size]]
valid_y_class = y[valid_x_class]
valid_x = np.append(valid_x, valid_x_class).astype(int)
valid_y = np.append(valid_y, valid_y_class).astype(int)
Explanation: In the above histogram we can see that the classes are not balanced inside the dataset. However, for our testing we will choose a balanced training set, as well as a balanced validation set. This corresponds to having an equal prior probability of occurrence between the different words we want to classify. Thus, in the next cell we choose at random $160$ datapoints per class to form our training set $S_t$ ($30\cdot160=4800$) and $1553$ datapoints per class to form the validation set $S_v$ ($30\cdot1553 = 46590$), which is the maximum number of datapoints we can put into the validation set for it to still be balanced.
End of explanation
# Define batch size
batch_size = 200
# Choose datapoints from validation set at random to form a batch
potential_elements = np.array(list(enumerate(np.array(valid_x))))
indices = np.random.choice(potential_elements[:,0].reshape(-1,), batch_size, replace=False)
# The batch index_variable contains the indices of the batch datapoints inside the complete dataset
batch_index = potential_elements[:,1].reshape(-1,)[indices]
Explanation: We now define the batch size, which determines how many validation samples are classified simultaneously. Then we choose at random 200 datapoints of the validation set $S_v$ to build said batch. Remark: although the training and the validation sets are both balanced, the batch itself is not necessarily balanced, because that would be an unreasonable constraint for the task.
End of explanation
# Build data matrix and normalize features
X = pd.DataFrame(features_og['mfcc'], np.append(train_x, batch_index))
X -= X.mean(axis=0)
X /= X.std(axis=0)
print('The data matrix has {} datapoints.'.format(len(X)))
Explanation: Now we build our feature matrix $\mathbf{X}\in\mathbb{R}^{(N+K)\times D}$ by concatenating the feature vectors of all datapoints inside the training set $S_t$ and the batch datapoints. The features are then normalized by subtracting their mean and dividing by their standard deviation. The feature normalization step was found to have a very significant effect on the resulting classification accuracy.
End of explanation
# Compute distances between all datapoints
distances = spatial.distance.squareform(spatial.distance.pdist(X,'cosine'))
n=distances.shape[0]
# Build weight matrix
kernel_width = distances.mean()
W = np.exp(np.divide(-np.square(distances),kernel_width**2))
# Make sure the diagonal is 0 for the weight matrix
np.fill_diagonal(W,0)
print('The weight matrix has a shape of {}.'.format(W.shape))
# Show the weight matrix
plt.matshow(W)
Explanation: Build Graph from Data Matrix
We now want to build a graph from the previously obtained data matrix. Every node in our graph corresponds to one datapoint (the feature vector of one audio file). We use a weighted, undirected graph. The weights are very important for our application, since they give us a measure of how similar the feature vectors of two datapoints are. The undirectedness is a logical consequence of our edges being similarity measures, which are inherently undirected.<br>
<br>
To build the weight matrix $W\in \mathbb{R}^{(N+K)\times (N+K)}$ we compute the cosine distance between each pair of datapoints $\mathbf{x_i},\mathbf{x_j}$ in the data matrix $\mathbf{X}$, which is defined as
$$d(\mathbf{x_i},\mathbf{x_j}) = 1-\frac{\mathbf{x_i}^T\mathbf{x_j}}{||\mathbf{x_i}||_2\,||\mathbf{x_j}||_2}$$
and then build a similarity graph using
$$\mathbf{W_{i,j}} = \exp\left(\frac{-d(\mathbf{x_i},\mathbf{x_j})^2}{\sigma^2}\right).$$
Other distance functions were tested, but the cosine distance was found to be the most effective. We used the mean of all pairwise distances as $\sigma$.
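As an example of what "other distance functions" means in practice, only the metric string passed to pdist has to change; the snippet below is illustrative and was not necessarily run in this exact form:
# Illustrative: Euclidean instead of cosine distance for the weight matrix
distances_eucl = spatial.distance.squareform(spatial.distance.pdist(X, 'euclidean'))
W_eucl = np.exp(-np.square(distances_eucl) / distances_eucl.mean()**2)
np.fill_diagonal(W_eucl, 0)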
End of explanation
# compute laplacian
degrees = np.sum(W,axis=0)
laplacian = np.diag(degrees**-0.5) @ (np.diag(degrees) - W) @ np.diag(degrees**-0.5)
laplacian = sparse.csr_matrix(laplacian)
Explanation: We can already see that there is a distinct square pattern inside the weight matrix. This points to a clustering inside the graph, achieved by good feature extraction (rows and columns are sorted by labels, except for the last 200). At this point we are ready to present the first classification method that was analyzed: spectral clustering.
Spectral Clustering
Our first approach was to naively reuse the code from assignment 3 (spectral graph theory). By combining all of our samples into a single graph and extracting the resulting graph Laplacian, we hope to identify clusters which correspond to the different words that need to be classified.
Unfortunately, increasing sparsity has the effect of reducing classification accuracy. We therefore decided to remove k-NN sparsification altogether and keep the sample graph as it is.
End of explanation
eigenvalues, eigenvectors = sparse.linalg.eigsh(A=laplacian,k=25,which='SM')
plt.plot(eigenvalues[1:], '.-', markersize=15);
plt.grid()
fix, axes = plt.subplots(5, 5, figsize=(17, 8))
for i in range(1,6):
for j in range(1,6):
a = eigenvectors[:,i]
b = eigenvectors[:,j]
labels = np.sign(a)
axes[i-1,j-1].scatter(a, b, c=labels, cmap='RdBu', alpha=0.5)
Explanation: We can now calculate the eigenvectors of the Laplacian matrix. These eigenvectors will be used as feature vectors for our classifier.
End of explanation
# Splitt Eigenvectors into train and validation parts
train_features = eigenvectors[:len(train_x),:]
valid_features = eigenvectors[len(train_x):,:]
Explanation: In the next step we split the eigenvectors of the graph into two parts: one containing the entries for the nodes representing the training datapoints, and one containing the entries for the nodes representing the validation datapoints.
End of explanation
def fit_and_test(clf, train_x, train_y, test_x, test_y):
clf.fit(train_x, train_y)
predict_y = clf.predict(test_x)
print('accuracy : ', np.sum(test_y==predict_y)/len(test_y))
return predict_y
clf = GaussianNB()
predict_y = fit_and_test(clf, train_features, train_y, valid_features, np.array(y[batch_index]))
clf = QuadraticDiscriminantAnalysis()
predict_y = fit_and_test(clf, train_features, train_y, valid_features, np.array(y[batch_index]))
Explanation: A wide range of classifiers were tested on our input features. Remarkably, a very simple classifier such as the Gaussian Naive Bayes classifier produced far better results than more advanced techniques. This is mainly because the graph weights were generated using a Gaussian kernel, so it is sensible to assume that our feature distribution will be Gaussian as well. However, the best results were obtained using a Quadratic Discriminant Analysis classifier.
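Thanks to the fit_and_test helper above, trying further classifiers is a one-liner each; the two shown below are only illustrative examples of the kind of models one could plug in, not necessarily the exact set we evaluated:
# Illustrative only: other scikit-learn classifiers plugged into the same helper
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
fit_and_test(KNeighborsClassifier(n_neighbors=5), train_features, train_y, valid_features, np.array(y[batch_index]))
fit_and_test(SVC(kernel='rbf'), train_features, train_y, valid_features, np.array(y[batch_index]))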
End of explanation
def plot_confusion_matrix(test_y, predict_y, class_names):
conf_mat=confusion_matrix(test_y,predict_y)
plt.figure(figsize=(10,10))
plt.imshow(conf_mat/np.sum(conf_mat,axis=1),cmap=plt.cm.hot)
tick = np.arange(len(class_names))
plt.xticks(tick, class_names,rotation=90)
plt.yticks(tick, class_names)
plt.ylabel('ground truth')
plt.xlabel('prediction')
plt.title('Confusion matrix')
plt.colorbar()
plot_confusion_matrix(np.array(y[batch_index]), predict_y, class_names)
Explanation: Once our test set has been classified we can visualize the effectiveness of our classification using a confusion matrix.
End of explanation
def adapt_labels(x_hat):
# Real accuracy considering only the main words :
class_names_list = ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
mask_names_main = [True if name in class_names_list else False for name in class_names]
index_names_main = [i for i in range(len(mask_names_main)) if mask_names_main[i] == True]
inverted_index_names = dict(zip(index_names_main,range(len(index_names_main))))
# Creating the label names :
class_names_main = class_names[mask_names_main].tolist()
class_names_main.extend(["unknown"])
# Adapting the labels in the test and prediction sets :
return np.array([inverted_index_names[int(x_hat[i])] if x_hat[i] in index_names_main else len(class_names_main)-1 for i in range(len(x_hat)) ]),class_names_main
valid_y_adapted, class_names_main = adapt_labels(np.array(y[batch_index]))
predict_y_adapted, class_names_main = adapt_labels(predict_y)
acc_adapted = np.sum(valid_y_adapted==predict_y_adapted)/len(valid_y_adapted)
print('accuracy for main words classification : ', acc_adapted)
plot_confusion_matrix(valid_y_adapted,predict_y_adapted, class_names_main)
Explanation: Finally we can focus on the core words that need to be classified and label the rest as 'unknown'.
End of explanation
# Sparsify using k-nearest neighbours and make sure the weight matrix stays symmetric
NEIGHBORS = 120
# Keep only the NEIGHBORS strongest edges of each node; zeroing both directions keeps W symmetric
for i in range(W.shape[0]):
idx = W[i,:].argsort()[:-NEIGHBORS]
W[i,idx] = 0
W[idx,i] = 0
plt.matshow(W)
Explanation: In conclusion, we can say that, using spectral clustering, we were able to leverage the properties of graph theory to find relevant features for speech recognition. However, the accuracy achieved with our model is far too low for any practical application. Moreover, this model does not benefit from sparsity, meaning that it will not scale well to large datasets.
Semi-Supervised classification
Now that we have seen the spectral clustering method, we want to present the semi-supervised classification method. For this we start by using the same training set $S_t$, validation set $S_v$, batch and Laplacian as we used to explain the spectral clustering method.<br>
<br>
Unlike for spectral clustering, for this method sparsifying the graph is very important: we noticed a significant increase in classification accuracy when using quite sparse graphs. Thus we now sparsify the graph to obtain a more pronounced clustering, using a k-nearest-neighbors approach. For the purpose of explaining the method we will keep the $120$ strongest neighbors of each node.
End of explanation
# Build normalized Laplacian Matrix
D = np.sum(W,axis=0)
L = np.diag(D**-0.5) @ (np.diag(D) - W) @ np.diag(D**-0.5)
L = sparse.csr_matrix(L)
Explanation: We can see that the sparsified weight matrix is very focused on its diagonal. We will now build the normalized Laplacian, since it is the core graph feature we will use for semi-supervised classification. The normalized Laplacian is defined as
$$L = \mathbf{I}-\mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2},$$
where $\mathbf{I}$ is the $(N+K)\times (N+K)$ identity matrix and $\mathbf{D}\in \mathbb{R}^{(N+K)\times (N+K)}$ is the (diagonal) degree matrix of the graph.
End of explanation
# Build one-hot encoded class matrix
Y_t = np.eye(len(class_names))[train_y - 1].T
print('The shape of the new label matrix Y is {}, its maximum value is {} and its minimum value is {}.'.format(np.shape(Y_t),np.min(Y_t),np.max(Y_t)))
Explanation: For the semi-supervised classification approach, we now want to transform the label vector of our training data $\mathbf{y_t} \in \{1,2,...,30\}^{N}$ into a matrix $\mathbf{Y_t}\in \{0,1\}^{30\times N}$. Each row $i$ of the matrix $\mathbf{Y_t}$ contains an indicator vector $\mathbf{y_{t,i}}\in\{0,1\}^N$ for class $i$, i.e. a vector which specifies for each training node in the graph whether it belongs to class $i$ or not.
End of explanation
# Create Mask Matrix
M = np.zeros((len(class_names), len(train_y) + batch_size))
M[:len(train_y),:len(train_y)] = 1
# Create extened label matrix and vector
Y = np.concatenate((Y_t, np.zeros((len(class_names), batch_size))), axis=1)
y_tv = np.concatenate((train_y,np.zeros((batch_size,)))) # y_tv corresponds to y in text
Explanation: In the next cell we extend our label matrix $\mathbf{Y_t}$, such that there are labels (not known yet) for the validation datapoints we want to classify. Thus we extend the rows of $\mathbf{Y}$ by $K$ zeros, since the last $K$ nodes in the weight matrix of the used graph correspond to the validation points. We also create the masking matrix $\mathbf{M}\in\{0,1\}^{30\times (N+K)}$, which specifies which of the entries in $\mathbf{Y}$ are known (training) and which are unknown (validation).
End of explanation
def solve(Y_compr, M, L, alpha, beta):
    """Solves the above defined optimization problem to find an estimated label vector."""
X = np.ones(Y_compr.shape)
for i in range(Y_compr.shape[0]):
Mask = np.diag(M[i,:])
y_i_compr = Y_compr[i,:]
        X[i,:] = np.linalg.solve(Mask + alpha*L + beta*np.eye(Mask.shape[0]), y_i_compr)  # A = M_i + alpha*L + beta*I
return X
# Solve for the matrix X
Y_hat = solve(Y, M, L,alpha = 1e-3, beta = 1e-7)
# Go from matrix X to estimated label vector x_hat
y_predict = np.argmax(Y_hat,axis = 0)+np.ones(Y_hat[0,:].shape)
# Adapt the labels, whee all words of the category "unknown" are unified
y_predict_adapted, class_names_main = adapt_labels(y_predict)
y_adapted, class_names_main = adapt_labels(np.array(y[batch_index]))
# Compute accuracy in predicting unknown labels
pred = np.sum(y_predict_adapted[-batch_size:]==y_adapted)/batch_size
print('The achieved accuracy clasifying the bacth of validation points using semi-supervised classification is {}.'.format(pred))
plot_confusion_matrix(y_adapted,y_predict_adapted[-batch_size:], class_names_main)
Explanation: Now comes the main part of semi-supervised classification. The method relies on the fact that we have a clustered graph, which gives us similarity measures between all the considered datapoints. The above mentioned class indicator vectors $\mathbf{y_i}$ (rows of $\mathbf{Y}$) are considered to be smooth signals on the graph, which is why achieving a clustered graph with good feature extraction was important.<br>
<br>
We try to fill in the gaps left in the label vector $\mathbf{y}$, i.e. we estimate a vector $\mathbf{\hat{y}}\in \{1,2,...,30\}^{N+K}$ which should ideally be equal to the original label vector, i.e. contain the correctly classified labels for the validation datapoints. To achieve this we try to learn indicator vectors $\mathbf{\hat{y_i}} \in \mathbb{R}^{N+K}$ for each class $i$, which also contain labels for the validation points (unlike the aforementioned $\mathbf{y_i}$). The higher the value $\mathbf{\hat{y}_{i,j}}$, the higher the probability that node $j$ belongs to class $i$. For this purpose, we solve the following optimization problem for each of the 30 classes, specified by $i\in \{1,2,...,30\}$.
$$ \underset{\mathbf{\hat{y_i}} \in \mathbb{R}^{N+K}}{\operatorname{argmin}} \quad \frac{1}{2}||\mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}})||^2_2 + \frac{\alpha}{2} \mathbf{\hat{y_i}}^T\mathbf{L}\mathbf{\hat{y_i}} + \frac{\beta}{2}||\mathbf{\hat{y_i}}||_2^2$$
The matrix $\mathbf{M_i}$ is defined as the diagonal matrix containing the $i^{th}$ row of $\mathbf{M}$ on its diagonal.
The first term of the above depicted cost function is the fidelity term, which makes sure that the estimated vector $\mathbf{\hat{y_i}}$ is sufficiently close to the known entries of $\mathbf{y_i}$ (i.e. the labels of the training data points). The second term makes sure that the learned vector $\mathbf{\hat{y_i}}$ is smooth on the graph. The last term is there to make sure that we solve for a low energy verctor and avoid that the optimization problem is ill-posed. The two factors $\alpha, \beta >0$ are hyperparameters which give weight to their respective term or criterion.<br>
<br>
For the above described optimization problem we can find an explicit solution. For this, we first compute the gradient of the cost function with respect to $\mathbf{\hat{y_i}}$.
$$\nabla f(\mathbf{\hat{y_i}}) = -\mathbf{M_i}^T\mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}}) + \frac{\alpha}{2} (\mathbf{L}^T +\mathbf{L})\mathbf{\hat{y_i}} + \beta \mathbf{\hat{y_i}}$$
Using the fact that $\mathbf{M_i}$ is a diagonal, symmetric matrix containing only '1' and '0', as well as the fact that $\mathbf{L}$ is symmetric, we can simplify $\nabla \mathbf{f}$ to
$$\nabla f(\mathbf{\hat{y_i}}) = -\mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}}) + \alpha \mathbf{L} \mathbf{\hat{y_i}} + \beta \mathbf{\hat{y_i}}.$$
To find the solution $\mathbf{\hat{y_i}^*}$ to the optimization problem we set the gradient to 0 to obtain
$$\nabla f(\mathbf{\hat{y_i}^*}) = 0 = \mathbf{M_i}(\mathbf{y_i}-\mathbf{\hat{y_i}^*}) - \alpha \mathbf{L} \mathbf{\hat{y_i}^*} - \beta \mathbf{\hat{y_i}^*},$$
and thus
$$\mathbf{M_i y_i} = (\mathbf{M_i}+\alpha \mathbf{L} + \beta \mathbf{I}_{(N+K)(N+K)}) \mathbf{\hat{y_i}^*}.$$
$\mathbf{I}_{(N+K)(N+K)}$ is the identity matrix of size $(N+K) \times (N+K)$. Introducing $\mathbf{y_{i,compr}} = \mathbf{M_i y_i}$ we can write
$$\mathbf{y_{i,compr}} = (\mathbf{M_i}+\alpha \mathbf{L} + \beta \mathbf{I}_{(N+K)(N+K)}) \mathbf{\hat{y_i}^*}.$$
We define the matrix $\mathbf{A} = (\mathbf{M_i}+\alpha \mathbf{L} + \beta \mathbf{I}_{(N+K)(N+K)})$ and now want to analyse its invertibility.<br>
<br>
We know that the Laplacian $\mathbf{L}$ is positive semi-definite (PSD), which means that all its eigenvalues are $\geq 0$. $\mathbf{M_i}$ simply adds '1' to some of the diagonal entries, unfortunately not to all of them, and thus it is not sufficient to render $\mathbf{A}$ full-rank and thus invertible. For this purpose we introduce the $l_2$-prior, which adds $\beta >0$ to each eigenvalue and makes $\mathbf{A}$ positive definite and thus invertible. I.e. by controlling $\beta$ our problem is well-posed and a unique solution $\mathbf{\hat{y_i}^*}$ can be found.
$$\mathbf{\hat{y_i}^*} = \mathbf{A^{-1}}\mathbf{y_{i,compr}}$$
Having found a $\mathbf{\hat{y_i}}$ for every class $i$, we then build a matrix $\mathbf{\hat{Y}}\in \mathbb{R}^{30\times (N+K)}$ containing the learned vectors $\mathbf{\hat{y_i}}$ as its rows. The final labelling vector $\mathbf{y_{pred}}\in \{1,2,...,30\}^{N+K}$ is obtained by finding, for each column $j$ of $\mathbf{\hat{Y}}$, the row $i$ in which the value is maximal; this index $i$ is the predicted class of the datapoint (node) corresponding to column $j$.
End of explanation
accuracy_mat = semisup_test_all_dataset(features_og, y, batch_size, NEIGHBORS, alpha = 1e-3, beta = 1e-7, iter_max=100, class_names = class_names)
# Display as boxplot
plt.boxplot(accuracy_mat.transpose(), labels = ['Spectral Clustering','Semi-Supervised Learning'])
plt.grid()
plt.title('Classification accuracy vs. classification method.')
plt.ylabel('Classification accuracy')
print('Using spectral clustering a mean accuracy of {}, with a variance of {} could be achieved.'.format(round(np.mean(accuracy_mat[0,:]),2),round(np.var(accuracy_mat[0,:]),4)))
print('Using semi-supervised classification a mean accuracy of {}, with a variance of {} could be achieved.'.format(round(np.mean(accuracy_mat[1,:]),2),round(np.var(accuracy_mat[1,:]),4)))
Explanation: Method Validation
In the previous section we introduced two ways of doing speech classification using graphs. For this purpose one single validation batch of size 200 was classified. To get a better idea of how well the two methods classify data, we now perform 100 iterations of the code explained above (from feature extraction to prediction). The code used can be found in its entirety in main_pipeline.py. For every iteration an entirely new training set and validation set is created, such that the variance due to good or bad training sets is included in the obtained results.<br>
<br>
In the boxplot shown below we can see the resulting mean accuracy, as well as the variance for both classification methods.
End of explanation |
2,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Noise model diagnostics
Step1: Visualisation of the data
After obtaining these parameters, it is useful to visualise the data and the fit.
Step2: Plotting autocorrelation of the residuals
Next we look at the autocorrelation plot of the residuals to evaluate the noise model (using pints.residuals_diagnostics.plot_residuals_autocorrelation).
Step3: The figure shows no significant autocorrelation in the residuals. Therefore, the assumption of independent noise may be valid.
Case 2
Step4: Visualisation of the data
As before we plot the data and the inferred trajectory.
Step5: Now the autocorrelation plot of the residuals shows high autocorrelation at small lags, which is typical of AR(1) noise. Therefore, this visualisation suggests that the assumption of independent Gaussian noise which we made during inference is invalid.
Case 3
Step6: Visualisation of the data
As before we plot the data and the inferred trajectories. | Python Code:
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Use the toy logistic model
model = toy.LogisticModel()
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
org_values = model.simulate(real_parameters, times)
# Add independent Gaussian noise
noise = 50
values = org_values + np.random.normal(0, noise, org_values.shape)
# Set up the problem and run the optimisation
problem = pints.SingleOutputProblem(model, times, values)
score = pints.SumOfSquaresError(problem)
boundaries = pints.RectangularBoundaries([0, 200], [1, 1000])
x0 = np.array([0.5, 500])
found_parameters, found_value = pints.optimise(
score,
x0,
boundaries=boundaries,
method=pints.XNES,
)
print('Score at true solution: ')
print(score(real_parameters))
print('Found solution: True parameters:' )
for k, x in enumerate(found_parameters):
print(pints.strfloat(x) + ' ' + pints.strfloat(real_parameters[k]))
Explanation: Noise model diagnostics: autocorrelation of the residuals
This example shows how to use the autocorrelation plots of the residuals to check assumptions of the noise model.
Three cases are shown. In the first two, optimisation is used to obtain a best-fit parameter vector in a single output problem. In the first case the noise is correctly specified and in the second case the noise is misspecified. The third case demonstrates the same method in a multiple output problem with Bayesian inference.
Case 1: Correctly specified noise
For the first example, we will use optimisation to obtain the best-fit parameter vector. See Optimisation First Example for more details. We begin with a problem in which the noise is correctly specified: both the data generation and the model use independent Gaussian noise.
End of explanation
fig, ax = pints.plot.series(np.array([found_parameters]), problem, ref_parameters=real_parameters)
fig.set_size_inches(15, 7.5)
plt.show()
Explanation: Visualisation of the data
After obtaining these parameters, it is useful to visualise the data and the fit.
End of explanation
from pints.residuals_diagnostics import plot_residuals_autocorrelation
# Plot the autocorrelation
fig = plot_residuals_autocorrelation(np.array([found_parameters]),
problem)
plt.show()
Explanation: Plotting autocorrelation of the residuals
Next we look at the autocorrelation plot of the residuals to evaluate the noise model (using pints.residuals_diagnostics.plot_residuals_autocorrelation).
End of explanation
import pints.noise
# Use the toy logistic model
model = toy.LogisticModel()
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
org_values = model.simulate(real_parameters, times)
# Add AR(1) noise
rho = 0.75
sigma = 50
values = org_values + pints.noise.ar1(rho, sigma, len(org_values))
# Set up the problem and run the optimisation
problem = pints.SingleOutputProblem(model, times, values)
score = pints.SumOfSquaresError(problem)
boundaries = pints.RectangularBoundaries([0, 200], [1, 1000])
x0 = np.array([0.5, 500])
found_parameters, found_value = pints.optimise(
score,
x0,
boundaries=boundaries,
method=pints.XNES,
)
print('Score at true solution: ')
print(score(real_parameters))
print('Found solution: True parameters:' )
for k, x in enumerate(found_parameters):
print(pints.strfloat(x) + ' ' + pints.strfloat(real_parameters[k]))
Explanation: The figure shows no significant autocorrelation in the residuals. Therefore, the assumption of independent noise may be valid.
Case 2: Incorrectly specified noise
For the next case, we generate data with an AR(1) (first order autoregressive) noise model. However, we deliberately misspecify the model and assume independent Gaussian noise (as before) when fitting the parameters.
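To make the misspecification concrete, AR(1) noise can be sketched by hand as below; this is only an illustrative construction, and the exact scaling used internally by pints.noise.ar1 may differ:
# Illustrative AR(1) construction: each error depends on the previous one
e = np.zeros(len(times))
for t in range(1, len(times)):
    e[t] = rho * e[t - 1] + np.random.normal(0, sigma * np.sqrt(1 - rho**2))
# values_manual = org_values + e   # would play the same role as pints.noise.ar1 above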
End of explanation
fig, ax = pints.plot.series(np.array([found_parameters]), problem, ref_parameters=real_parameters)
fig.set_size_inches(15, 7.5)
plt.show()
# Plot the autocorrelation
fig = plot_residuals_autocorrelation(np.array([found_parameters]),
problem)
plt.show()
Explanation: Visualisation of the data
As before we plot the data and the inferred trajectory.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import pints
import pints.toy
model = pints.toy.LotkaVolterraModel()
times = np.linspace(0, 3, 50)
parameters = model.suggested_parameters()
model.set_initial_conditions([2, 2])
org_values = model.simulate(parameters, times)
# Add noise
sigma = 0.1
values = org_values + np.random.normal(0, sigma, org_values.shape)
# Create an object with links to the model and time series
problem = pints.MultiOutputProblem(model, times, values)
# Create a log posterior
log_prior = pints.UniformLogPrior([0, 0, 0, 0, 0, 0], [6, 6, 6, 6, 1, 1])
log_likelihood = pints.GaussianLogLikelihood(problem)
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Run MCMC on the noisy data
x0 = [[1.0, 1.0, 1.0, 1.0, 0.1, 0.1]]*3
mcmc = pints.MCMCController(log_posterior, 3, x0)
mcmc.set_max_iterations(4000)
print('Running')
chains = mcmc.run()
print('Done!')
Explanation: Now the autocorrelation plot of the residuals shows high autocorrelation at small lags, which is typical of AR(1) noise. Therefore, this visualisation suggests that the assumption of independent Gaussian noise which we made during inference is invalid.
Case 3: Multiple output Bayesian inference problem
The plot_residuals_autocorrelation function also works with Bayesian inference and multiple output problems. For the final example, we demonstrate the same strategy in this setting.
For this example, the Lotka-Volterra model is used. See the Lotka-Volterra example for more details. As in Case 1, the true data is generated with independent Gaussian noise.
End of explanation
# Get the first MCMC chain
chain1 = chains[0]
# Cut off the burn-in samples
chain1 = chain1[2500:]
fig, ax = pints.plot.series(chain1, problem, ref_parameters=parameters)
fig.set_size_inches(15, 7.5)
plt.show()
# Plot the autocorrelation
fig = plot_residuals_autocorrelation(chain1, problem)
plt.show()
Explanation: Visualisation of the data
As before we plot the data and the inferred trajectories.
End of explanation |
2,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Create TensorFlow wide-and-deep model </h1>
This notebook illustrates
Step1: <h2> Create TensorFlow model using TensorFlow's Estimator API </h2>
<p>
First, write an input_fn to read the data.
Step2: Next, define the feature columns
Step3: To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
Step4: Finally, train! | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
Explanation: <h1> Create TensorFlow wide-and-deep model </h1>
This notebook illustrates:
<ol>
<li> Creating a model using the high-level Estimator API
</ol>
End of explanation
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
TRAIN_STEPS = 1000
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(file_list) # Read text file
.map(decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size=10*batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
return _input_fn
Explanation: <h2> Create TensorFlow model using TensorFlow's Estimator API </h2>
<p>
First, write an input_fn to read the data.
End of explanation
# Define feature columns
def get_wide_deep():
# Define column types
is_male,mother_age,plurality,gestation_weeks = \
[\
tf.feature_column.categorical_column_with_vocabulary_list('is_male',
['True', 'False', 'Unknown']),
tf.feature_column.numeric_column('mother_age'),
tf.feature_column.categorical_column_with_vocabulary_list('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)']),
tf.feature_column.numeric_column('gestation_weeks')
]
# Discretize
age_buckets = tf.feature_column.bucketized_column(mother_age,
boundaries=np.arange(15,45,1).tolist())
gestation_buckets = tf.feature_column.bucketized_column(gestation_weeks,
boundaries=np.arange(17,47,1).tolist())
# Sparse columns are wide, have a linear relationship with the output
wide = [is_male,
plurality,
age_buckets,
gestation_buckets]
# Feature cross all the wide columns and embed into a lower dimension
crossed = tf.feature_column.crossed_column(wide, hash_bucket_size=20000)
embed = tf.feature_column.embedding_column(crossed, 3)
# Continuous columns are deep, have a complex relationship with the output
deep = [mother_age,
gestation_weeks,
embed]
return wide, deep
Explanation: Next, define the feature columns
End of explanation
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
'is_male': tf.placeholder(tf.string, [None]),
'mother_age': tf.placeholder(tf.float32, [None]),
'plurality': tf.placeholder(tf.string, [None]),
'gestation_weeks': tf.placeholder(tf.float32, [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
wide, deep = get_wide_deep()
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = tf.estimator.DNNLinearCombinedRegressor(
model_dir = output_dir,
linear_feature_columns = wide,
dnn_feature_columns = deep,
dnn_hidden_units = [64, 32],
config = run_config)
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset('train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset('eval.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Explanation: To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
End of explanation
# Run the model
shutil.rmtree('babyweight_trained', ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate('babyweight_trained')
Explanation: Finally, train!
End of explanation |
2,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MLE with exponential distribution
Step1: Draw exponential density
$$f\left(y_{i},\theta\right)=\theta\exp\left(-\theta y_{i}\right),\quad y_{i}>0,\quad\theta>0$$
Step2: Draw several densities
Step3: Simulate data and draw histogram
Step4: Simulate data and estimate model parameter by MLE
MLE estimator is
$$\hat{\theta}=\frac{n}{\sum_{i=1}^{n}y_{i}}=\overline{y}^{-1}$$ | Python Code:
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
np.set_printoptions(precision=4, suppress=True)
sns.set_context('notebook')
%matplotlib inline
Explanation: MLE with exponential distribution
End of explanation
theta = 1
y = np.linspace(0, 10, 100)
f = theta * np.exp(-theta * y)
# plot function
plt.plot(y, f)
plt.xlabel('y')
plt.ylabel('f')
plt.show()
Explanation: Draw exponential density
$$f\left(y_{i},\theta\right)=\theta\exp\left(-\theta y_{i}\right),\quad y_{i}>0,\quad\theta>0$$
End of explanation
# try several parameters
theta = [1, .5, .25]
y = np.linspace(0, 10, 100)
# for each parameter value
for t in theta:
f = t * np.exp( - t * y)
plt.plot(y, f)
plt.xlabel('y')
plt.ylabel('f')
plt.legend(theta)
plt.show()
Explanation: Draw several densities
End of explanation
n = 100
theta = 1
# simulate data
y = np.random.exponential(1 / theta, n)
# plot data
plt.hist(y, bins=10, normed=True)
plt.xlabel(r'$y_i$')
plt.ylabel(r'$\hat{f}$')
plt.show()
Explanation: Simulate data and draw histogram
End of explanation
# sample size
n = int(1e2)
# true parameter value
theta = 1
# simulate data
y = np.sort(np.random.exponential(1 / theta, n))
# MLE estimator
theta_hat = n / np.sum(y)
print('Estimate is: theta = ', theta_hat)
# function of exponential density
f = lambda theta: theta * np.exp( - theta * y)
# plot results
plt.hist(y, bins = 10, normed = True, alpha = .2, lw = 0)
plt.plot(y, f(theta), c = 'black')
plt.plot(y, f(theta_hat), c = 'red')
plt.xlabel(r'$y_i$')
plt.ylabel(r'$\hat{f}$')
plt.legend(('True', 'Fitted','Histogram'))
plt.show()
Explanation: Simulate data and estimate model parameter by MLE
MLE estimator is
$$\hat{\theta}=\frac{n}{\sum_{i=1}^{n}y_{i}}=\overline{y}^{-1}$$
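For completeness, this estimator follows from maximizing the log-likelihood (a standard derivation, added here for reference):
$$\log L(\theta)=\sum_{i=1}^{n}\log\left(\theta e^{-\theta y_{i}}\right)=n\log\theta-\theta\sum_{i=1}^{n}y_{i}$$
$$\frac{\partial}{\partial\theta}\log L(\theta)=\frac{n}{\theta}-\sum_{i=1}^{n}y_{i}=0\quad\Longrightarrow\quad\hat{\theta}=\frac{n}{\sum_{i=1}^{n}y_{i}}$$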
End of explanation |
2,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Left Handed Sister Problem
Think Bayes, Second Edition
Copyright 2021 Allen B. Downey
License
Step1: To compute the proportion of each type of family, I'll use Scipy to compute the binomial distribution.
Step2: And put the results into a Pandas Series.
Step3: But we also have the information frequencies of these families are proportional to 30%, 40%, and 10%, so we can multiply through.
Step5: So that's the (unnormalized) prior.
I'll use the following function to do Bayesian updates.
Step6: This function takes a prior and a likelihood and returns a DataFrame
The first update
Due to length-biased sampling, the person you met is more likely to come from family with more boys.
Specifically, the likelihood of meeting someone from a family with $n$ boys is proportional to $n$.
Step7: So that's what we should believe about the family after the first update.
The second update
The likelihood that a person has exactly one sister named Mary is given by the binomial distribution where n is the number of girls in the family and p is the probability that a girl is named Mary.
Step8: Here's the second update.
Step9: Based on the sister named Mary, we can rule out families with no girls, and families with more than one girls are more likely.
Probability of a left-handed sister
Finally, we can compute the probability that he has at least one left-handed sister.
The likelihood comes from the binomial distribution again, where n is the number of additional sisters, and we use the survival function to compute the probability that one or more are left-handed.
Step10: A convenient way to compute the total probability of an outcome is to do an update as if it happened, ignore the posterior probabilities, and compute the sum of the products.
Step11: At this point, there are only three family types left standing, (1,2), (2,2), and (1,3).
Here's the total probability that your new friend has a left-handed sister.
Step12: The Bayes factor
If your interlocutor is the brother of your friend, the probability is 1 that he has a left-handed sister.
If he is not the brother of your friend, the probability is p.
So the Bayes factor is the ratio of these probabilities. | Python Code:
import pandas as pd
qs = [(2, 0),
(1, 1),
(0, 2),
(3, 0),
(2, 1),
(1, 2),
(0, 3),
(4, 0),
(3, 1),
(2, 2),
(1, 3),
(0, 4),
]
index = pd.MultiIndex.from_tuples(qs, names=['Boys', 'Girls'])
Explanation: The Left Handed Sister Problem
Think Bayes, Second Edition
Copyright 2021 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
Suppose you meet someone who looks like the brother of your friend Mary.
You ask if he has a sister named Mary, and he says "Yes I do, but I don't think I know you."
You remember that Mary has a sister who is left-handed, but you don't remember her name.
So you ask your new friend if he has another sister who is left-handed.
If he does, how much evidence does that provide that he is the brother of your friend, rather than a random person who coincidentally has a sister named Mary and another sister who is left-handed. In other words, what is the Bayes factor of the left-handed sister?
Let's assume:
Out of 100 families with children, 20 have one child, 30 have two children, 40 have three children, and 10 have four children.
All children are either boys or girls with equal probability, one girl in 10 is left-handed, and one girl in 100 is named Mary.
Name, sex, and handedness are independent, so every child has the same probability of being a girl, left-handed, or named Mary.
If the person you met had more than one sister named Mary, he would have said so, but he could have more than one sister who is left handed.
Constructing the prior
I'll make a Pandas Series that enumerates possible families with 2, 3, or 4 children.
End of explanation
from scipy.stats import binom
boys = index.to_frame()['Boys']
girls = index.to_frame()['Girls']
ps = binom.pmf(girls, boys+girls, 0.5)
Explanation: To compute the proportion of each type of family, I'll use Scipy to compute the binomial distribution.
End of explanation
prior1 = pd.Series(ps, index, name='Prior')
pd.DataFrame(prior1)
Explanation: And put the results into a Pandas Series.
End of explanation
ps = [30, 30, 30, 40, 40, 40, 40, 10, 10, 10, 10, 10]
prior1 *= ps
pd.DataFrame(prior1)
Explanation: But we also have the information that the frequencies of these families are proportional to 30%, 40%, and 10%, so we can multiply through.
End of explanation
import pandas as pd
def make_table(prior, likelihood):
    """Make a DataFrame representing a Bayesian update."""
table = pd.DataFrame(prior)
table.columns = ['Prior']
table['Likelihood'] = likelihood
table['Product'] = (table['Prior'] * table['Likelihood'])
total = table['Product'].sum()
table['Posterior'] = table['Product'] / total
return table
Explanation: So that's the (unnormalized) prior.
I'll use the following function to do Bayesian updates.
End of explanation
likelihood1 = prior1.index.to_frame()['Boys']
table1 = make_table(prior1, likelihood1)
table1
Explanation: This function takes a prior and a likelihood and returns a DataFrame representing a Bayesian update.
The first update
Due to length-biased sampling, the person you met is more likely to come from family with more boys.
Specifically, the likelihood of meeting someone from a family with $n$ boys is proportional to $n$.
End of explanation
from scipy.stats import binom
ns = prior1.index.to_frame()['Girls']
p = 1 / 100
k = 1
likelihood2 = binom.pmf(k, ns, p)
likelihood2
Explanation: So that's what we should believe about the family after the first update.
The second update
The likelihood that a person has exactly one sister named Mary is given by the binomial distribution where n is the number of girls in the family and p is the probability that a girl is named Mary.
End of explanation
prior2 = table1['Posterior']
table2 = make_table(prior2, likelihood2)
table2
Explanation: Here's the second update.
End of explanation
ns = prior1.index.to_frame()['Girls'] - 1
ns.name = 'Additional sisters'
neg = (ns < 0)
ns[neg] = 0
pd.DataFrame(ns)
p = 1 / 10
k = 1
likelihood3 = binom.sf(k-1, ns, p)
likelihood3
Explanation: Based on the sister named Mary, we can rule out families with no girls, and families with more than one girl are more likely.
Probability of a left-handed sister
Finally, we can compute the probability that he has at least one left-handed sister.
The likelihood comes from the binomial distribution again, where n is the number of additional sisters, and we use the survival function to compute the probability that one or more are left-handed.
End of explanation
prior3 = table2['Posterior']
table3 = make_table(prior3, likelihood3)
table3
Explanation: A convenient way to compute the total probability of an outcome is to do an update as if it happened, ignore the posterior probabilities, and compute the sum of the products.
End of explanation
p = table3['Product'].sum()
p
Explanation: At this point, there are only three family types left standing, (1,2), (2,2), and (1,3).
Here's the total probability that your new friend has a left-handed sister.
End of explanation
1/p
Explanation: The Bayes factor
If your interlocutor is the brother of your friend, the probability is 1 that he has a left-handed sister.
If he is not the brother of your friend, the probability is p.
So the Bayes factor is the ratio of these probabilities.
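Spelled out as a small check, using the p computed above:
# The Bayes factor in favor of "he is Mary's brother", written out explicitly
p_lefty_given_brother = 1.0        # the brother of your friend certainly has a left-handed sister
p_lefty_given_random = p           # computed above for a random person with one sister named Mary
bayes_factor = p_lefty_given_brother / p_lefty_given_random
bayes_factor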
End of explanation |
2,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name
Deploying a trained model to Cloud Machine Learning Engine
Label
Cloud Storage, Cloud ML Engine, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to deploy a trained model from a Cloud Storage location to Cloud ML Engine.
Details
Intended use
Use the component to deploy a trained model to Cloud ML Engine. The deployed model can serve online or batch predictions in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------|-----------------|---------|
| model_uri | The URI of a Cloud Storage directory that contains a trained model file.<br/> Or <br/> An Estimator export base directory that contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file. | No | GCSPath | | |
| project_id | The ID of the Google Cloud Platform (GCP) project of the serving model. | No | GCPProjectID | | |
| model_id | The name of the trained model. | Yes | String | | None |
| version_id | The name of the version of the model. If it is not provided, the operation uses a random name. | Yes | String | | None |
| runtime_version | The Cloud ML Engine runtime version to use for this deployment. If it is not provided, the default stable version, 1.0, is used. | Yes | String | | None |
| python_version | The version of Python used in the prediction. If it is not provided, version 2.7 is used. You can use Python 3.5 if runtime_version is set to 1.4 or above. Python 2.7 works with all supported runtime versions. | Yes | String | | 2.7 |
| model | The JSON payload of the new model. | Yes | Dict | | None |
| version | The new version of the trained model. | Yes | Dict | | None |
| replace_existing_version | Indicates whether to replace the existing version in case of a conflict (if the same version number is found.) | Yes | Boolean | | FALSE |
| set_default | Indicates whether to set the new version as the default version in the model. | Yes | Boolean | | FALSE |
| wait_interval | The number of seconds to wait in case the operation has a long run time. | Yes | Integer | | 30 |
Input data schema
The component looks for a trained model in the location specified by the model_uri runtime argument. The accepted trained models are
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Example pipeline that uses the component
Step4: Compile the pipeline
Step5: Submit the pipeline for execution | Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
Explanation: Name
Deploying a trained model to Cloud Machine Learning Engine
Label
Cloud Storage, Cloud ML Engine, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to deploy a trained model from a Cloud Storage location to Cloud ML Engine.
Details
Intended use
Use the component to deploy a trained model to Cloud ML Engine. The deployed model can serve online or batch predictions in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------|-----------------|---------|
| model_uri | The URI of a Cloud Storage directory that contains a trained model file.<br/> Or <br/> An Estimator export base directory that contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file. | No | GCSPath | | |
| project_id | The ID of the Google Cloud Platform (GCP) project of the serving model. | No | GCPProjectID | | |
| model_id | The name of the trained model. | Yes | String | | None |
| version_id | The name of the version of the model. If it is not provided, the operation uses a random name. | Yes | String | | None |
| runtime_version | The Cloud ML Engine runtime version to use for this deployment. If it is not provided, the default stable version, 1.0, is used. | Yes | String | | None |
| python_version | The version of Python used in the prediction. If it is not provided, version 2.7 is used. You can use Python 3.5 if runtime_version is set to 1.4 or above. Python 2.7 works with all supported runtime versions. | Yes | String | | 2.7 |
| model | The JSON payload of the new model. | Yes | Dict | | None |
| version | The new version of the trained model. | Yes | Dict | | None |
| replace_existing_version | Indicates whether to replace the existing version in case of a conflict (if the same version number is found.) | Yes | Boolean | | FALSE |
| set_default | Indicates whether to set the new version as the default version in the model. | Yes | Boolean | | FALSE |
| wait_interval | The number of seconds to wait in case the operation has a long run time. | Yes | Integer | | 30 |
Input data schema
The component looks for a trained model in the location specified by the model_uri runtime argument. The accepted trained models are:
Tensorflow SavedModel
Scikit-learn & XGBoost model
The accepted file formats are:
*.pb
*.pbtext
model.bst
model.joblib
model.pkl
model_uri can also be an Estimator export base directory, which contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file.
Output
| Name | Description | Type |
|:------- |:---- | :--- |
| job_id | The ID of the created job. | String |
| job_dir | The Cloud Storage path that contains the trained model output files. | GCSPath |
Cautions & requirements
To use the component, you must:
Set up the cloud environment.
The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
Grant read access to the Cloud Storage bucket that contains the trained model to the Kubeflow user service account.
Detailed description
Use the component to:
* Locate the trained model at the Cloud Storage location you specify.
* Create a new model if a model provided by you doesn’t exist.
* Delete the existing model version if replace_existing_version is enabled.
* Create a new version of the model from the trained model.
* Set the new version as the default version of the model if set_default is enabled.
Follow these steps to use the component in a pipeline:
Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/ml_engine/deploy/component.yaml')
help(mlengine_deploy_op)
Explanation: Load the component using KFP SDK
End of explanation
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
# Optional Parameters
EXPERIMENT_NAME = 'CLOUDML - Deploy'
TRAINED_MODEL_PATH = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/'
Explanation: Sample
Note: The following sample code works in IPython notebook or directly in Python code.
In this sample, you deploy a pre-built trained model from gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/ to Cloud ML Engine. The deployed model is kfp_sample_model. A new version is created every time the sample is run, and the latest version is set as the default version of the deployed model.
Set sample parameters
End of explanation
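If you want to attach extra metadata to the created version, the component's version argument accepts a payload dict. A hypothetical example follows; the field names are assumed to follow the Cloud ML Engine v1 Version resource, so verify them against the API reference before use:
SAMPLE_VERSION = {
    'description': 'Version created by the KFP deploy component sample',  # assumed optional metadata field
    'labels': {'source': 'kfp-sample'},  # assumed optional labels field
}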
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='CloudML deploy pipeline',
description='CloudML deploy pipeline'
)
def pipeline(
model_uri = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/',
project_id = PROJECT_ID,
model_id = 'kfp_sample_model',
version_id = '',
runtime_version = '1.10',
python_version = '',
version = {},
replace_existing_version = 'False',
set_default = 'True',
wait_interval = '30'):
task = mlengine_deploy_op(
model_uri=model_uri,
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=runtime_version,
python_version=python_version,
version=version,
replace_existing_version=replace_existing_version,
set_default=set_default,
wait_interval=wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
End of explanation |
2,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First we import some datasets of interest
Step1: Now we separate the winners from the losers and organize our dataset
Step2: Now we match the detailed results to the merge dataset above
Step3: Here we get our submission info
Step4: Training Data Creation
Step5: We will only consider years relevant to our test submission
Step6: Now let's look at just TeamID2, the second team's info.
Step7: From the inner join, we will create data per team id to estimate the parameters we are missing that are independent of the year. Essentially, we are trying to estimate the average behavior of the team across the year.
Step8: Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.
Step9: Now let's create a model based solely on the inner group and predict those probabilities.
We will get the teams with the missing result.
Step10: We scale our data for our logistic regression, and make sure our categorical variables are properly processed.
Step11: Here we store our probabilities
Step12: We merge our predictions
Step13: We get the 'average' probability of success for each team
Step14: Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events.
#the seed information
#df_seeds = pd.read_csv('../input/NCAATourneySeeds.csv')
#print(df_seeds.shape)
#print(df_seeds.head())
#print(df_seeds.Season.value_counts())
#the seed information
df_seeds = pd.read_csv('../input/NCAATourneySeeds_SampleTourney2018.csv')
print(df_seeds.shape)
print(df_seeds.head())
#print(df_seeds.Season.value_counts())
#tour information
df_tour = pd.read_csv('../input/RegularSeasonCompactResults_Prelim2018.csv')
print(df_tour.shape)
print(df_tour.head())
Explanation: First we import some datasets of interest
End of explanation
df_seeds['seed_int'] = df_seeds['Seed'].apply( lambda x : int(x[1:3]) )
df_winseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'WTeamID', 'seed_int':'WSeed'})
df_lossseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'LTeamID', 'seed_int':'LSeed'})
df_dummy = pd.merge(left=df_tour, right=df_winseeds, how='left', on=['Season', 'WTeamID'])
df_concat = pd.merge(left=df_dummy, right=df_lossseeds, on=['Season', 'LTeamID'])
print(df_concat.shape)
print(df_concat.head())
Explanation: Now we separate the winners from the losers and organize our dataset
End of explanation
df_concat['DiffSeed'] = df_concat[['LSeed', 'WSeed']].apply(lambda x : 0 if x[0] == x[1] else 1, axis = 1)
print(df_concat.shape)
print(df_concat.head())
print(df_concat.Season.value_counts())
Explanation: Now we match the detailed results to the merge dataset above
End of explanation
df_sample_sub1 = pd.read_csv('../input/SampleSubmissionStage1.csv')
#prepares sample submission
df_sample_sub2 = pd.read_csv('../input/SampleSubmissionStage2.csv')
df_sample_sub=pd.concat([df_sample_sub1, df_sample_sub2])
print(df_sample_sub.shape)
print(df_sample_sub.head())
df_sample_sub['Season'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[0]) )
df_sample_sub['TeamID1'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[1]) )
df_sample_sub['TeamID2'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[2]) )
print(df_sample_sub.shape)
print(df_sample_sub.head())
Explanation: Here we get our submission info
End of explanation
winners = df_concat.rename( columns = { 'WTeamID' : 'TeamID1',
'LTeamID' : 'TeamID2',
'WScore' : 'Team1_Score',
'LScore' : 'Team2_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)
winners['Result'] = 1.0
losers = df_concat.rename( columns = { 'WTeamID' : 'TeamID2',
'LTeamID' : 'TeamID1',
'WScore' : 'Team2_Score',
'LScore' : 'Team1_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)
losers['Result'] = 0.0
train = pd.concat( [winners, losers], axis = 0).reset_index(drop = True)
train['Score_Ratio'] = train['Team1_Score'] / train['Team2_Score']
train['Score_Total'] = train['Team1_Score'] + train['Team2_Score']
train['Score_Pct'] = train['Team1_Score'] / train['Score_Total']
print(train.shape)
print(train.head())
Explanation: Training Data Creation
End of explanation
years = [2014, 2015, 2016, 2017,2018]
Explanation: We will only consider years relevant to our test submission
End of explanation
train_test_inner = pd.merge( train.loc[ train['Season'].isin(years), : ].reset_index(drop = True),
df_sample_sub.drop(['ID', 'Pred'], axis = 1),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'inner' )
train_test_inner.head()
train_test_inner.shape
Explanation: Now let's look at just TeamID2, the second team's info.
End of explanation
team1d_num_ot = train_test_inner.groupby(['Season', 'TeamID1'])['NumOT'].median().reset_index()\
.set_index('Season').rename(columns = {'NumOT' : 'NumOT1'})
team2d_num_ot = train_test_inner.groupby(['Season', 'TeamID2'])['NumOT'].median().reset_index()\
.set_index('Season').rename(columns = {'NumOT' : 'NumOT2'})
num_ot = team1d_num_ot.join(team2d_num_ot).reset_index()
#combine the median OT counts from both groupings and round to the nearest integer
num_ot['NumOT'] = num_ot[['NumOT1', 'NumOT2']].apply(lambda x : round( x.sum() ), axis = 1 )
num_ot.head()
Explanation: From the inner join, we will create data per team id to estimate the parameters we are missing that are independent of the year. Essentially, we are trying to estimate the average behavior of the team across the year.
End of explanation
def geo_mean( x ):
return np.exp( np.mean(np.log(x)) )
def harm_mean( x ):
return np.mean( x ** -1.0 ) ** -1.0
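# Added sanity check (not in the original): the geometric mean of [1, 4] is 2.0
# and the harmonic mean of [2, 4] is 1 / mean([1/2, 1/4]), roughly 2.67.
print(geo_mean(np.array([1.0, 4.0])), harm_mean(np.array([2.0, 4.0])))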
team1d_score_spread = train_test_inner.groupby(['Season', 'TeamID1'])[['Score_Ratio', 'Score_Pct']]\
.agg({ 'Score_Ratio': geo_mean, 'Score_Pct' : harm_mean}).reset_index()\
.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio1', 'Score_Pct' : 'Score_Pct1'})
team2d_score_spread = train_test_inner.groupby(['Season', 'TeamID2'])[['Score_Ratio', 'Score_Pct']]\
.agg({ 'Score_Ratio': geo_mean, 'Score_Pct' : harm_mean}).reset_index()\
.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio2', 'Score_Pct' : 'Score_Pct2'})
score_spread = team1d_score_spread.join(team2d_score_spread).reset_index()
#geometric mean of score ratio of team 1 and inverse of team 2
score_spread['Score_Ratio'] = score_spread[['Score_Ratio1', 'Score_Ratio2']].apply(lambda x : ( x[0] * ( x[1] ** -1.0) ), axis = 1 ) ** 0.5
#harmonic mean of score pct
score_spread['Score_Pct'] = score_spread[['Score_Pct1', 'Score_Pct2']].apply(lambda x : 0.5*( x[0] ** -1.0 ) + 0.5*( 1.0 - x[1] ) ** -1.0, axis = 1 ) ** -1.0
score_spread.head()
Explanation: Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.
End of explanation
X_train = train_test_inner.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]
train_labels = train_test_inner['Result']
train_test_outer = pd.merge( train.loc[ train['Season'].isin(years), : ].reset_index(drop = True),
df_sample_sub.drop(['ID', 'Pred'], axis = 1),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer' )
train_test_outer = train_test_outer.loc[ train_test_outer['Result'].isnull(),
['TeamID1', 'TeamID2', 'Season']]
train_test_missing = pd.merge( pd.merge( score_spread.loc[:, ['TeamID1', 'TeamID2', 'Season', 'Score_Ratio', 'Score_Pct']],
train_test_outer, on = ['TeamID1', 'TeamID2', 'Season']),
num_ot.loc[:, ['TeamID1', 'TeamID2', 'Season', 'NumOT']],
on = ['TeamID1', 'TeamID2', 'Season'])
Explanation: Now let's create a model based solely on the inner group and predict those probabilities.
We will get the teams with the missing result.
End of explanation
X_test = train_test_missing.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]
n = X_train.shape[0]
train_test_merge = pd.concat( [X_train, X_test], axis = 0 ).reset_index(drop = True)
train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['Season'].astype(object) ),
train_test_merge.drop('Season', axis = 1) ], axis = 1 )
train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['NumOT'].astype(object) ),
train_test_merge.drop('NumOT', axis = 1) ], axis = 1 )
X_train = train_test_merge.loc[:(n - 1), :].reset_index(drop = True)
X_test = train_test_merge.loc[n:, :].reset_index(drop = True)
x_max = X_train.max()
x_min = X_train.min()
X_train = ( X_train - x_min ) / ( x_max - x_min + 1e-14)
X_test = ( X_test - x_min ) / ( x_max - x_min + 1e-14)
print(X_train.shape)
print(X_train.head())
print(train_labels.shape)
print(train_labels.head())
print(train_labels.value_counts())
print(X_test.shape)
print(X_test.head())
from sklearn.linear_model import LogisticRegressionCV
log_clf = LogisticRegressionCV(cv = 5,Cs=8,n_jobs=4,scoring="neg_log_loss")
log_clf.fit( X_train, train_labels )
import matplotlib.pyplot as plt
plt.plot(log_clf.scores_[1])
# plt.ylabel('some numbers')
plt.show()
Explanation: We scale our data for our logistic regression, and make sure our categorical variables are properly processed.
End of explanation
train_test_inner['Pred1'] = log_clf.predict_proba(X_train)[:,1]
train_test_missing['Pred1'] = log_clf.predict_proba(X_test)[:,1]
Explanation: Here we store our probabilities
End of explanation
sub = pd.merge(df_sample_sub,
pd.concat( [train_test_missing.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']],
train_test_inner.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']] ],
axis = 0).reset_index(drop = True),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer')
print(sub.shape)
print(sub.head())
Explanation: We merge our predictions
End of explanation
team1_probs = sub.groupby('TeamID1')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()
team2_probs = sub.groupby('TeamID2')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()
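# Added worked example of the imputation rule applied in the next cell: if team 1
# wins with average probability 0.7 and team 2 with 0.6, the imputed prediction is
# 0.7 * (1 - 0.6) = 0.28, treating the two events as independent.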
Explanation: We get the 'average' probability of success for each team
End of explanation
sub['Pred'] = sub[['TeamID1', 'TeamID2','Pred1']]\
.apply(lambda x : team1_probs.get(x[0]) * ( 1 - team2_probs.get(x[1]) ) if np.isnan(x[2]) else x[2],
axis = 1)
print(sub.shape)
print(sub.head())
sub.ID.value_counts()
sub=sub.groupby('ID', as_index=False).agg({"Pred": "mean"})
sub.ID.value_counts()
sub2018=sub.loc[sub['ID'].isin(df_sample_sub2.ID)]
print(sub2018.shape)
sub2018[['ID', 'Pred']].to_csv('sub_2018_all_only18.csv', index = False)
sub2018[['ID', 'Pred']].head(20)
Explanation: Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events.
End of explanation |
2,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Calculate an average spectrum to ID peaks
Step1: 2. Make a feature matrix, n x p, where n = number of samples, p = number of features
Step2: 3. Standardize
Step3: 4. Sklearn PCA
Step4: 5. Matrix decomposition (see A User's Guide to Principal Components by Jackson, 1991.)
U’SU = L
U = orthonormal matrix, characteristic vectors
S = covariance matrix
L = diagonal matrix, characteristic roots
Get characteristic roots
Step5: 6. Matlab's PCA
Step6: Check that all three give similar results
Step7: PCA with the entire IR spectrum.
Step8: Standardize matrix | Python Code:
averagespectrum = PCAsynthetic.get_hyper_peaks(spectralmatrix, threshold = 0.01)
plt.plot(averagespectrum)
Explanation: 1. Calculate an average spectrum to ID peaks
End of explanation
featurematrix = PCAsynthetic.makefeaturematrix(spectralmatrix, averagespectrum)
featurematrix[10:13,:]
Explanation: 2. Make a feature matrix, n x p, where n = number of samples, p = number of features
End of explanation
featurematrix_std = PCAsynthetic.stdfeature(featurematrix, axis = 0)
#along axis 0 = running vertically downwards, across rows; 1 = columns
mean = featurematrix_std.mean(axis=0)
variance = featurematrix_std.std(axis=0)
print(mean, variance)
Explanation: 3. Standardize: zero mean, unit variance. (and check!)
End of explanation
#define number of principal components
sklearn_pca = sklearnPCA(n_components=9)
#matrix with each sample in terms of the PCs
SkPC = sklearn_pca.fit_transform(featurematrix_std)
#covariance matrix
Skcov = sklearn_pca.get_covariance()
#score matrix
#Skscore = sklearn_pca.score_samples(featurematrix_std)
#explained variance
Skvariance = sklearn_pca.explained_variance_
Skvarianceratio = sklearn_pca.explained_variance_ratio_
Skvarianceratio
Skvariance
Explanation: 4. Sklearn PCA
End of explanation
mean_vec = np.mean(featurematrix_std, axis=0)
#need to take transpose, since rowvar = true by default
cov_mat = np.cov(featurematrix_std.T)
#solve for characteristic roots and vectors
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
#check that the loadings squared sum to 1:
Lsquared = sum(eig_vecs**2)
Explanation: 5. Matrix decomposition (see A User's Guide to Principal Components by Jackson, 1991.)
U’SU = L
U = orthonormal matrix, characteristic vectors
S = covariance matrix
L = diagonal matrix, characteristic roots
Get characteristic roots: $|S - lI| = 0$
Get characteristic vectors: $[S - lI]\,t_i = 0$
The projection of sample n onto principal component i: $z_i = u_i'[x_n - x_{avg}]$
End of explanation
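As an added cross-check (not in the original notebook), the characteristic roots should reproduce sklearn's explained-variance ratios computed earlier, up to ordering:
explained_ratio = eig_vals / eig_vals.sum()
print(explained_ratio)
print(Skvarianceratio)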
mlPCA = PCA(featurematrix_std)
#get projections of samples into PCA space
mltrans = mlPCA.Y
#reshape
mltransreshape = mltrans.reshape((256,256,9))
mlloadings = mlPCA.Wt
#mltrans[513,:] should be the same as mltransreshape[2,1,:]
mlloadings.shape
Explanation: 6. Matlab's PCA
End of explanation
#projection of first sample, on to the first PC
P11 = np.dot(eig_vecs[:,0], featurematrix_std[0,:]-mean_vec)
mlP11 = mlPCA.Y[0,0]
SkP11 = SkPC[0,0]
P12 = np.dot(eig_vecs[:,1], featurematrix_std[0,:]-mean_vec)
mlP12 = mlPCA.Y[0,1]
SkP12 = SkPC[0,1]
P152 = np.dot(eig_vecs[:,1], featurematrix_std[15,:]-mean_vec)
mlP152 = mlPCA.Y[15,1]
SkP152 = SkPC[15,1]
print(P11, mlP11, SkP11)
print(P12, mlP12, SkP12)
print(P152, mlP152, SkP152)
print(mlloadings[0,7])
print(eig_vecs[0,7])
Explanation: Check that all three give similar results
End of explanation
#Reshape spectral matrix
IRmatrix=spectralmatrix.reshape(65536,559)
print(IRmatrix[1,:].shape)
#make sure we've reshaped correctly
plt.plot(IRmatrix[555,:])
Explanation: PCA with the entire IR spectrum.
End of explanation
IRmatrix=np.concatenate((IRmatrix[:,20:60], IRmatrix[:,230:270], IRmatrix[:,420:460], IRmatrix[:,100:140],IRmatrix[:,305:345], IRmatrix[:,470:510], IRmatrix[:,158:198], IRmatrix[:,354:394], IRmatrix[:,512:552] ), axis=1)
#IRmatrix=np.concatenate((IRmatrix[:,30:40], IRmatrix[:,240:260], IRmatrix[:,430:450], IRmatrix[:,90:130],IRmatrix[:,395:335], IRmatrix[:,460:500], IRmatrix[:,148:188], IRmatrix[:,364:384], IRmatrix[:,522:542] ), axis=1)
IRmatrix_std = PCAsynthetic.stdfeature(IRmatrix, axis = 0)
IRmean = IRmatrix_std.mean(axis=0)
IRvariance = IRmatrix_std.std(axis=0)
print(IRvariance)
IRmlPCA = PCA(IRmatrix_std)
#get projections of samples into PCA space
IRmltrans = IRmlPCA.Y
#reshape
IRmlloadings = IRmlPCA.Wt
IRmltrans.shape
IRmltransreshape=IRmltrans.reshape(256,256,360)
score1image = IRmltransreshape[:,:,0]
score2image = IRmltransreshape[:,:,1]
score3image = IRmltransreshape[:,:,2]
score4image = IRmltransreshape[:,:,3]
score5image = IRmltransreshape[:,:,4]
score6image = IRmltransreshape[:,:,5]
score7image = IRmltransreshape[:,:,6]
score8image = IRmltransreshape[:,:,7]
score9image = IRmltransreshape[:,:,8]
plt.imshow(syntheticspectra.Cmatrix)
plt.imshow(score1image)
plt.imshow(score2image)
plt.imshow(score3image)
plt.imshow(score4image)
plt.imshow(score5image)
plt.imshow(score6image)
plt.imshow(score7image)
plt.imshow(score8image)
plt.imshow(score9image)
Explanation: Standardize matrix
End of explanation |
2,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Domain, Halo and Padding regions
In this tutorial we will learn about data regions and how these impact the Operator construction. We will use a simple time marching example.
Step1: At this point, we have a time-varying 3x3 grid filled with 1's. Below, we can see the domain data values
Step2: We now create a time-marching Operator that, at each timestep, increments by 2 all points in the computational domain.
Step3: We can print op to get the generated code.
Step4: When we take a look at the constructed expression, u[t1][x + 1][y + 1] = u[t0][x + 1][y + 1] + 2, we see several +1 were added to the u's spatial indices.
This is because the domain region is actually surrounded by 'ghost' points, which can be accessed via a stencil when iterating in proximity of the domain boundary. The ghost points define the halo region. The halo region can be accessed through the data_with_halo data accessor. As we see below, the halo points correspond to the zeros surrounding the domain region.
Step5: By adding the +1 offsets, the Devito compiler ensures the array accesses are logically aligned to the equation’s physical domain. For instance, the TimeFunction u(t, x, y) used in the example above has one point on each side of the x and y halo regions; if the user writes an expression including u(t, x, y) and u(t, x + 2, y + 2), the compiler will ultimately generate u[t, x + 1, y + 1] and u[t, x + 3, y + 3]. When x = y = 0, therefore, the values u[t, 1, 1] and u[t, 3, 3] are fetched, representing the first and third points in the physical domain.
By default, the halo region has space_order points on each side of the space dimensions. Sometimes, these points may be unnecessary, or, depending on the partial differential equation being approximated, extra points may be needed.
Step6: One can also pass a 3-tuple (o, lp, rp) instead of a single integer representing the discretization order. Here, o is the discretization order, while lp and rp indicate how many points are expected on left and right sides of a point of interest, respectively.
Step7: Let's have a look at the generated code when using u_new.
Step8: And finally, let's run it, to convince ourselves that only the domain region values will be incremented at each timestep.
Step9: The halo region, in turn, is surrounded by the padding region, which can be used for data alignment. By default, there is no padding. This can be changed by passing a suitable value to padding, as shown below
Step10: Although in practice not very useful, with the (private) _data_allocated accessor one can see the entire domain + halo + padding region. | Python Code:
from devito import Eq, Grid, TimeFunction, Operator
grid = Grid(shape=(3, 3))
u = TimeFunction(name='u', grid=grid)
u.data[:] = 1
Explanation: Domain, Halo and Padding regions
In this tutorial we will learn about data regions and how these impact the Operator construction. We will use a simple time marching example.
End of explanation
print(u.data)
Explanation: At this point, we have a time-varying 3x3 grid filled with 1's. Below, we can see the domain data values:
End of explanation
from devito import configuration
configuration['language'] = 'C'
eq = Eq(u.forward, u+2)
op = Operator(eq, opt='noop')
Explanation: We now create a time-marching Operator that, at each timestep, increments by 2 all points in the computational domain.
End of explanation
print(op)
Explanation: We can print op to get the generated code.
End of explanation
print(u.data_with_halo)
Explanation: When we take a look at the constructed expression, u[t1][x + 1][y + 1] = u[t0][x + 1][y + 1] + 2, we see several +1 were added to the u's spatial indices.
This is because the domain region is actually surrounded by 'ghost' points, which can be accessed via a stencil when iterating in proximity of the domain boundary. The ghost points define the halo region. The halo region can be accessed through the data_with_halo data accessor. As we see below, the halo points correspond to the zeros surrounding the domain region.
End of explanation
u0 = TimeFunction(name='u0', grid=grid, space_order=0)
u0.data[:] = 1
print(u0.data_with_halo)
u2 = TimeFunction(name='u2', grid=grid, space_order=2)
u2.data[:] = 1
print(u2.data_with_halo)
Explanation: By adding the +1 offsets, the Devito compiler ensures the array accesses are logically aligned to the equation’s physical domain. For instance, the TimeFunction u(t, x, y) used in the example above has one point on each side of the x and y halo regions; if the user writes an expression including u(t, x, y) and u(t, x + 2, y + 2), the compiler will ultimately generate u[t, x + 1, y + 1] and u[t, x + 3, y + 3]. When x = y = 0, therefore, the values u[t, 1, 1] and u[t, 3, 3] are fetched, representing the first and third points in the physical domain.
By default, the halo region has space_order points on each side of the space dimensions. Sometimes, these points may be unnecessary, or, depending on the partial differential equation being approximated, extra points may be needed.
End of explanation
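As a quick added check, the space_order=2 halo adds two points per side, so each 3x3 slice of u2 appears as 7x7 in the halo view:
print(u2.data_with_halo.shape)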
u_new = TimeFunction(name='u_new', grid=grid, space_order=(4, 3, 1))
u_new.data[:] = 1
print(u_new.data_with_halo)
Explanation: One can also pass a 3-tuple (o, lp, rp) instead of a single integer representing the discretization order. Here, o is the discretization order, while lp and rp indicate how many points are expected on left and right sides of a point of interest, respectively.
End of explanation
equation = Eq(u_new.forward, u_new + 2)
op = Operator(equation, opt='noop')
print(op)
Explanation: Let's have a look at the generated code when using u_new.
End of explanation
#NBVAL_IGNORE_OUTPUT
op.apply(time_M=2)
print(u_new.data_with_halo)
Explanation: And finally, let's run it, to convince ourselves that only the domain region values will be incremented at each timestep.
End of explanation
u_pad = TimeFunction(name='u_pad', grid=grid, space_order=2, padding=(0,2,2))
u_pad.data_with_halo[:] = 1
u_pad.data[:] = 2
equation = Eq(u_pad.forward, u_pad + 2)
op = Operator(equation, opt='noop')
print(op)
Explanation: The halo region, in turn, is surrounded by the padding region, which can be used for data alignment. By default, there is no padding. This can be changed by passing a suitable value to padding, as shown below:
End of explanation
print(u_pad._data_allocated)
Explanation: Although in practice not very useful, with the (private) _data_allocated accessor one can see the entire domain + halo + padding region.
End of explanation |
2,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linked Structures
Arrays are basic sequence containers with easy and direct access to individual elements; however, they are limited in their functionality.
Python lists are implemented using an array structure, which extends arrays' functionality by providing a larger set of operations.
Fixed array size, insertion and deletion times, are some problems of Arrays, and Python lists.
Linked list data structure can be used to store a collection in linear order.
Several varieties of linked lists exist, such as singly linked lists and doubly linked lists.
Introduction
Let's create a basic class containing a single data field
Step1: This will give us just the containers that we can store data into.
Step2: Since we did not define a method to show the value stored, we cannot see the value. However, the values we passed to each variable are stored.
Now to make it a linked list, we have to establish a connection. To achieve this we can add another data field called next to our constructor.
Step3: After modifying the ListNode class and creating nodes for testing, we need to use the next field to point each node at the next one and establish the connection.
class ListNode:
def __init__(self, data):
self.data = data
Explanation: Linked Structures
Arrays are basic sequence containers with easy and direct access to individual elements; however, they are limited in their functionality.
Python lists are implemented using an array structure, which extends arrays' functionality by providing a larger set of operations.
Fixed array size, insertion and deletion times, are some problems of Arrays, and Python lists.
Linked list data structure can be used to store a collection in linear order.
Several varieties of linked lists exist, such as singly linked lists and doubly linked lists.
Introduction
Let's create a basic class containing a single data field:
End of explanation
a = ListNode(11)
b = ListNode(52)
c = ListNode(18)
print(a)
Explanation: This will give us just the containers that we can store data into.
End of explanation
class ListNode:
def __init__(self, data):
self.data = data
self.next = None
# Initial creation of nodes
a = ListNode(11)
b = ListNode(52)
c = ListNode(18)
Explanation: Since we did not define a method to show the value stored, we cannot see the value. However, the values we passed to each variable are stored.
Now to make it a linked list, we have to establish a connection. To achieve this we can add another data field called next to our constructor.
End of explanation
a.next = b
b.next = c
print(a.data) # Prints the first node
print(a.next.data) # Prints the b
print(a.next.next.data) # Prints the c
Explanation: After modifying the ListNode class and creating nodes for testing, we need to use the next field to point each node at the next one and establish the connection.
End of explanation |
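As a small added example (not in the original), the chain can now be traversed from its head node with a simple loop:
# Walk the list, printing each node's data until we fall off the end.
node = a
while node is not None:
    print(node.data)
    node = node.next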
2,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Kernels
In this tutorial we will learn
Step1: 1 - Write the nonlinearity and its symbolic form
Step2: 2 - Define a dcgp.kernel with our new callables
Step3: 3 - Profiling the speed
Kernels defined in python introduce some slow down. Here we time 1000 calls to our exp(-x^2) function, when
Step4: 4 - Using the new kernel in a dcpy.expression | Python Code:
# Some necessary imports.
import dcgpy
from time import time
import pyaudi
# Sympy is nice to have for basic symbolic manipulation.
from sympy import init_printing
from sympy.parsing.sympy_parser import *
init_printing()
# Fundamental for plotting.
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Custom Kernels
In this tutorial we will learn:
How to define a custom Kernel.
How to use it in a dcgpy.expression_double.
NOTE: when defining custom kernels directly via the python interface a slowdown is to be expected for two main reasons:
a) python callables cannot be called from different threads (only processes)
b) an extra C++/Python layer is added and forces conversions
End of explanation
# Lets define some non-linear function we would like to use as a computational unit in a dCGP:
def my_fun(x):
    return exp(-sum([it*it for it in x]))
# We need also to define the symbolic form of such a kernel so that, for example, symbolic manipulators
# can understand its semantic. In this function x is to be interpreted as a list of symbols like ["x", "y", "z"]
def my_fun_print(x):
return "exp(-" + "+".join([it + "**2" for it in x]) + ")"
# Note that it is left to the user to define a symbolic representation that makes sense and is truthful, no checks are done.
# All symbolic manipulations will rely on the fact that such a representation makes sense.
# ... and see by example how these functions work:
from numpy import exp
a = my_fun([0.2,-0.12,-0.0011])
b = my_fun_print(["x", "y", "z"])
print(b + " is: " + str(a))
Explanation: 1 - Write the nonlinearity and its symbolic form
End of explanation
# Since the nonlinearities we wrote can operate on gduals as well as on double we can define
# both a kernel_double and a kernel_gdual_double (here we will only use the first one)
my_kernel_double = dcgpy.kernel_double(my_fun, my_fun_print, "my_gaussian")
my_kernel_gdual_double = dcgpy.kernel_gdual_double(my_fun, my_fun_print, "my_gaussian")
# For the case of doubles
a = my_kernel_double([0.2,-0.12,-0.0011])
b = my_kernel_double(["x", "y", "z"])
print(b + " evaluates to: " + str(a))
# And for the case of gduals
a = my_kernel_gdual_double([pyaudi.gdual_double(0.2, "x", 2),pyaudi.gdual_double(-0.12, "x", 2),pyaudi.gdual_double(-0.0011, "x", 2)])
b = my_kernel_gdual_double(["x", "y", "z"])
print(b + " evaluates to: " + str(a))
Explanation: 2 - Define a dcgp.kernel with our new callables
End of explanation
# wrapped by the user in a python dcgpy.kernel
start = time()
_ = [my_kernel_double([i/1000, 0.3]) for i in range(1000)]
end = time()
print("Elapsed (ms)", (end-start) * 1000)
# coming from the shipped dcgpy kernels (cpp implementation)
cpp_kernel = dcgpy.kernel_set_double(["gaussian"])[0]
start = time()
_ = [cpp_kernel([i/1000, 0.3]) for i in range(1000)]
end = time()
print("Elapsed (ms)", (end-start) * 1000)
# a normal python callable
start = time()
_ = [my_fun([i/1000, 0.3]) for i in range(1000)]
end = time()
print("Elapsed (ms)", (end-start) * 1000)
Explanation: 3 - Profiling the speed
Kernels defined in python introduce some slow down. Here we time 1000 calls to our exp(-x^2) function, when:
wrapped by the user in a python dcgpy.kernel
coming from the shipped dcgpy package (cpp implementation)
a normal python callable
End of explanation
ks = dcgpy.kernel_set_double(["sum", "mul", "diff"])
ks.push_back(my_kernel_double)
print(ks)
ex = dcgpy.expression_double(inputs=2,
outputs=1,
rows=1,
cols=6,
levels_back=6,
arity=2,
kernels=ks(),
n_eph=0,
seed = 39)
print(ex(["x", "y"]))
# We use the expression method simplify which is calling sympy. Since our symbolic representation
# of the Kernel is parsable by sympy, a simplified result is possible.
ex.simplify(["x", "y"])
Explanation: 4 - Using the new kernel in a dcgpy.expression
End of explanation |
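As an added example (assumption: expression_double accepts numeric inputs the same way the kernels above do), the expression can also be evaluated on concrete values:
print(ex([1.2, -0.3]))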
2,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
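As an added check, the flattened images and one-hot labels have the shapes described above, (55000, 784) and (55000, 10):
print(trainX.shape, trainY.shape)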
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 100, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation |
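As a small added example, the same prediction pipeline can be applied to a single test image:
single_pred = np.array(model.predict(testX[:1])).argmax(axis=1)[0]
print("Predicted:", single_pred, "Actual:", testY[:1].argmax(axis=1)[0])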
2,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create TensorFlow Deep Neural Network Model
Learning Objective
- Create a DNN model using the high-level Estimator API
Introduction
We'll begin by modeling our data using a Deep Neural Network. To achieve this we will use the high-level Estimator API in Tensorflow. Have a look at the various models available through the Estimator API in the documentation here.
Start by setting the environment variables related to your project.
Step1: Create TensorFlow model using TensorFlow's Estimator API
We'll begin by writing an input function to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps.
Step2: Create the input function
Now we are ready to create an input function using the Dataset API.
Step3: Create the feature columns
Next, we define the feature columns
Step4: Create the Serving Input function
To predict with the TensorFlow model, we also need a serving input function. This will allow us to serve prediction later using the predetermined inputs. We will want all the inputs from our user.
Step5: Create the model and run training and evaluation
Lastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a DNNRegressor estimator and the train and evaluation operations.
Step6: Finally, we train the model! | Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
Explanation: Create TensorFlow Deep Neural Network Model
Learning Objective
- Create a DNN model using the high-level Estimator API
Introduction
We'll begin by modeling our data using a Deep Neural Network. To achieve this we will use the high-level Estimator API in Tensorflow. Have a look at the various models available through the Estimator API in the documentation here.
Start by setting the environment variables related to your project.
End of explanation
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
CSV_COLUMNS = "weight_pounds,is_male,mother_age,plurality,gestation_weeks".split(',')
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
TRAIN_STEPS = 1000
Explanation: Create TensorFlow model using TensorFlow's Estimator API
We'll begin by writing an input function to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps.
End of explanation
def read_dataset(filename_pattern, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(records = value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename = filename_pattern)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(filenames = file_list) # Read text file
.map(map_func = decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(count = num_epochs).batch(batch_size = batch_size)
return dataset
return _input_fn
Explanation: Create the input function
Now we are ready to create an input function using the Dataset API.
End of explanation
def get_categorical(name, values):
return tf.feature_column.indicator_column(
categorical_column = tf.feature_column.categorical_column_with_vocabulary_list(key = name, vocabulary_list = values))
def get_cols():
# Define column types
return [\
get_categorical("is_male", ["True", "False", "Unknown"]),
tf.feature_column.numeric_column(key = "mother_age"),
get_categorical("plurality",
["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)","Multiple(2+)"]),
tf.feature_column.numeric_column(key = "gestation_weeks")
]
Explanation: Create the feature columns
Next, we define the feature columns
End of explanation
def serving_input_fn():
feature_placeholders = {
"is_male": tf.placeholder(dtype = tf.string, shape = [None]),
"mother_age": tf.placeholder(dtype = tf.float32, shape = [None]),
"plurality": tf.placeholder(dtype = tf.string, shape = [None]),
"gestation_weeks": tf.placeholder(dtype = tf.float32, shape = [None])
}
features = {
key: tf.expand_dims(input = tensor, axis = -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders)
Explanation: Create the Serving Input function
To predict with the TensorFlow model, we also need a serving input function. This will allow us to serve prediction later using the predetermined inputs. We will want all the inputs from our user.
End of explanation
def train_and_evaluate(output_dir):
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(
save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = tf.estimator.DNNRegressor(
model_dir = output_dir,
feature_columns = get_cols(),
hidden_units = [64, 32],
config = run_config)
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("train.csv", mode = tf.estimator.ModeKeys.TRAIN),
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter(name = "exporter", serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("eval.csv", mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator = estimator, train_spec = train_spec, eval_spec = eval_spec)
Explanation: Create the model and run training and evaluation
Lastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a DNNRegressor estimator and the train and evaluation operations.
End of explanation
# Run the model
shutil.rmtree(path = "babyweight_trained_dnn", ignore_errors = True) # start fresh each time
train_and_evaluate("babyweight_trained_dnn")
Explanation: Finally, we train the model!
End of explanation |
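As an added sketch (assumptions: the checkpoint directory written above and the TF 1.x numpy_input_fn helper), you could query the trained estimator for a single prediction with hypothetical input values:
predict_estimator = tf.estimator.DNNRegressor(
    model_dir = "babyweight_trained_dnn",
    feature_columns = get_cols(),
    hidden_units = [64, 32])
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x = {"is_male": np.array(["True"]),
         "mother_age": np.array([26.0], dtype = np.float32),
         "plurality": np.array(["Single(1)"]),
         "gestation_weeks": np.array([39.0], dtype = np.float32)},
    shuffle = False)
# Each prediction is a dict with a "predictions" array holding the estimated weight.
print([p["predictions"] for p in predict_estimator.predict(input_fn = predict_input_fn)])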
2,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We are using splu from the scipy package. This is a bit slow, but on a cluster you can use MUMPS, which might be a lot faster. We could also think about using a better iterative solver.
Step1: I want to visualize the electric field, which is a vector, so I average it onto the cell centers. Also I want to see the current density ($\vec{j} = \sigma \vec{e}$).
Step2: Then use the "plotSlice" function to visualize 2D sections.
Step3: Is it reasonable? | Python Code:
%%time
es_px = Ainv*rhs_px
es_py = Ainv*rhs_py
# Need to sum the ep and es to get the total field.
e_x = es_px #+ ep_px
e_y = es_py #+ ep_py
Explanation: We are using splu from the scipy package. This is a bit slow, but on a cluster you can use MUMPS, which might be a lot faster. We could also think about using a better iterative solver.
End of explanation
Meinv = M.getEdgeInnerProduct(np.ones_like(sig), invMat=True)
j_x = Meinv*Msig*e_x
j_y = Meinv*Msig*e_y
e_x.shape
e_x_CC = M.aveE2CCV*e_x
e_y_CC = M.aveE2CCV*e_y
j_x_CC = M.aveE2CCV*j_x
j_y_CC = M.aveE2CCV*j_y
# j_x_CC = Utils.sdiag(np.r_[sig, sig, sig])*e_x_CC
Explanation: I want to visualize the electric field, which is a vector, so I average it onto the cell centers. Also I want to see the current density ($\vec{j} = \sigma \vec{e}$).
End of explanation
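For reference (added note): the discrete operations here implement Ohm's law and Faraday's law, $\vec{j} = \sigma \vec{e}$ and $\vec{h} = -\frac{1}{i\omega\mu_0}\nabla\times\vec{e}$, which is why the magnetic field is later formed with Ciw = -C/(1j*omega(freq)*mu_0).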
fig, ax = plt.subplots(1,2, figsize = (12, 5))
dat0 = M.plotSlice(np.log10(np.abs(e_x_CC)), vType='CCv', view='vec', streamOpts={'color': 'k'}, normal='Y', ax = ax[0])
cb0 = plt.colorbar(dat0[0], ax = ax[0])
dat1 = M.plotSlice(np.log10(np.abs(e_y_CC)), vType='CCv', view='vec', streamOpts={'color': 'k'}, normal='Y', ax = ax[1])
cb1 = plt.colorbar(dat1[0], ax = ax[1])
Explanation: Then use the "plotSlice" function to visualize 2D sections.
End of explanation
# Calculate the data
rx_x, rx_y = np.meshgrid(np.arange(-3000,3001,500),np.arange(-1000,1001,500))
rx_loc = np.hstack((simpeg.Utils.mkvc(rx_x,2),simpeg.Utils.mkvc(rx_y,2),elev+np.zeros((np.prod(rx_x.shape),1))))
# Get the projection matrices
Qex = M.getInterpolationMat(rx_loc,'Ex')
Qey = M.getInterpolationMat(rx_loc,'Ey')
Qez = M.getInterpolationMat(rx_loc,'Ez')
Qfx = M.getInterpolationMat(rx_loc,'Fx')
Qfy = M.getInterpolationMat(rx_loc,'Fy')
Qfz = M.getInterpolationMat(rx_loc,'Fz')
e_x_loc = np.hstack([simpeg.Utils.mkvc(Qex*e_x,2),simpeg.Utils.mkvc(Qey*e_x,2),simpeg.Utils.mkvc(Qez*e_x,2)])
e_y_loc = np.hstack([simpeg.Utils.mkvc(Qex*e_y,2),simpeg.Utils.mkvc(Qey*e_y,2),simpeg.Utils.mkvc(Qez*e_y,2)])
Ciw = -C/(1j*omega(freq)*mu_0)
h_x_loc = np.hstack([simpeg.Utils.mkvc(Qfx*Ciw*e_x,2),simpeg.Utils.mkvc(Qfy*Ciw*e_x,2),simpeg.Utils.mkvc(Qfz*Ciw*e_x,2)])
h_y_loc = np.hstack([simpeg.Utils.mkvc(Qfx*Ciw*e_y,2),simpeg.Utils.mkvc(Qfy*Ciw*e_y,2),simpeg.Utils.mkvc(Qfz*Ciw*e_y,2)])
# Make a combined matrix
dt = np.dtype([('ex1',complex),('ey1',complex),('ez1',complex),('hx1',complex),('hy1',complex),('hz1',complex),('ex2',complex),('ey2',complex),('ez2',complex),('hx2',complex),('hy2',complex),('hz2',complex)])
combMat = np.empty((len(e_x_loc)),dtype=dt)
combMat['ex1'] = e_x_loc[:,0]
combMat['ey1'] = e_x_loc[:,1]
combMat['ez1'] = e_x_loc[:,2]
combMat['ex2'] = e_y_loc[:,0]
combMat['ey2'] = e_y_loc[:,1]
combMat['ez2'] = e_y_loc[:,2]
combMat['hx1'] = h_x_loc[:,0]
combMat['hy1'] = h_x_loc[:,1]
combMat['hz1'] = h_x_loc[:,2]
combMat['hx2'] = h_y_loc[:,0]
combMat['hy2'] = h_y_loc[:,1]
combMat['hz2'] = h_y_loc[:,2]
def calculateImpedance(fieldsData):
'''
Function that calculates MT impedance data from a rec array with E and H field data from both polarizations
'''
zxx = (fieldsData['ex1']*fieldsData['hy2'] - fieldsData['ex2']*fieldsData['hy1'])/(fieldsData['hx1']*fieldsData['hy2'] - fieldsData['hx2']*fieldsData['hy1'])
zxy = (-fieldsData['ex1']*fieldsData['hx2'] + fieldsData['ex2']*fieldsData['hx1'])/(fieldsData['hx1']*fieldsData['hy2'] - fieldsData['hx2']*fieldsData['hy1'])
zyx = (fieldsData['ey1']*fieldsData['hy2'] - fieldsData['ey2']*fieldsData['hy1'])/(fieldsData['hx1']*fieldsData['hy2'] - fieldsData['hx2']*fieldsData['hy1'])
zyy = (-fieldsData['ey1']*fieldsData['hx2'] + fieldsData['ey2']*fieldsData['hx1'])/(fieldsData['hx1']*fieldsData['hy2'] - fieldsData['hx2']*fieldsData['hy1'])
return zxx, zxy, zyx, zyy
zxx, zxy, zyx, zyy = calculateImpedance(combMat)
# rx_loc
ind = np.where(np.sum(np.power(rx_loc - np.array([-3000,0,elev]),2),axis=1)< 5)
m
print appResPhs(freq,zyx[ind])
print appResPhs(freq,zxy[ind])
e0_1d = e0_1d.conj()
Qex = mesh1d.getInterpolationMat(np.array([elev]),'Ex')
Qfx = mesh1d.getInterpolationMat(np.array([elev]),'Fx')
h0_1dC = -(mesh1d.nodalGrad*e0_1d)/(1j*omega(freq)*mu_0)
h0_1d = mesh1d.getInterpolationMat(mesh1d.vectorNx,'Ex')*h0_1dC
indSur = np.where(mesh1d.vectorNx==elev)
print (Qfx*e0_1d),(Qex*h0_1dC)#e0_1d, h0_1d
print appResPhs(freq,(Qfx*e0_1d)/(Qex*h0_1dC).conj())
import simpegMT as simpegmt
sig1D = M.r(sig,'CC','CC','M')[0,0,:]
anaEd, anaEu, anaHd, anaHu = simpegmt.Utils.MT1Danalytic.getEHfields(mesh1d,sig1D,freq,mesh1d.vectorNx)
anaEtemp = anaEd+anaEu
anaHtemp = anaHd+anaHu
# Scale the solution
anaE = (anaEtemp/anaEtemp[-1])#.conj()
anaH = (anaHtemp/anaEtemp[-1])#.conj()
anaZ = anaE/anaH
indSur = np.where(mesh1d.vectorNx==elev)
print anaZ
print appResPhs(freq,anaZ[indSur])
print appResPhs(freq,-anaZ[indSur])
mesh1d.vectorNx
Explanation: Is it reasonable? Given that we put in a resistive target this makes sense: current does not want to flow through the resistive target, so it simply flows around it :). Also look at the air interface: the current is continuous across it but the electric field is not, which looks reasonable.
End of explanation |
2,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step5: Copyright 2021 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https
Step6: Dataset and environment
Step7: DQN learner
Step8: Training loop
Step9: Evaluation | Python Code:
# @title Installation
!pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
!pip install dopamine-rl==3.1.2
!pip install atari-py
!pip install dm_env
!git clone https://github.com/deepmind/deepmind-research.git
%cd deepmind-research
!git clone https://github.com/deepmind/bsuite.git
!pip install -q bsuite/
# @title Imports
import copy
import functools
from typing import Dict, Tuple
import acme
from acme.agents.tf import actors
from acme.agents.tf.dqn import learning as dqn
from acme.tf import utils as acme_utils
from acme.utils import loggers
import sonnet as snt
import tensorflow as tf
import numpy as np
import tree
import dm_env
import reverb
from acme.wrappers import base as wrapper_base
from acme.wrappers import single_precision
import bsuite
# @title Data Loading Utilities
def _parse_seq_tf_example(example, shapes, dtypes):
"""Parse tf.Example containing one or two episode steps."""
def to_feature(shape, dtype):
if np.issubdtype(dtype, np.floating):
return tf.io.FixedLenSequenceFeature(
shape=shape, dtype=tf.float32, allow_missing=True)
elif dtype == np.bool or np.issubdtype(dtype, np.integer):
return tf.io.FixedLenSequenceFeature(
shape=shape, dtype=tf.int64, allow_missing=True)
else:
raise ValueError(f'Unsupported type {dtype} to '
f'convert from TF Example.')
feature_map = {}
for k, v in shapes.items():
feature_map[k] = to_feature(v, dtypes[k])
parsed = tf.io.parse_single_example(example, features=feature_map)
restructured = {}
for k, v in parsed.items():
dtype = tf.as_dtype(dtypes[k])
if v.dtype == dtype:
restructured[k] = parsed[k]
else:
restructured[k] = tf.cast(parsed[k], dtype)
return restructured
def _build_sars_example(sequences):
"""Convert raw sequences into a Reverb SARS' sample."""
o_tm1 = tree.map_structure(lambda t: t[0], sequences['observation'])
o_t = tree.map_structure(lambda t: t[1], sequences['observation'])
a_tm1 = tree.map_structure(lambda t: t[0], sequences['action'])
r_t = tree.map_structure(lambda t: t[0], sequences['reward'])
p_t = tree.map_structure(
lambda d, st: d[0] * tf.cast(st[1] != dm_env.StepType.LAST, d.dtype),
sequences['discount'], sequences['step_type'])
info = reverb.SampleInfo(key=tf.constant(0, tf.uint64),
probability=tf.constant(1.0, tf.float64),
table_size=tf.constant(0, tf.int64),
priority=tf.constant(1.0, tf.float64))
return reverb.ReplaySample(info=info, data=(
o_tm1, a_tm1, r_t, p_t, o_t))
def bsuite_dataset_params(env):
"""Return shapes and dtypes parameters for bsuite offline dataset."""
shapes = {
'observation': env.observation_spec().shape,
'action': env.action_spec().shape,
'discount': env.discount_spec().shape,
'reward': env.reward_spec().shape,
'episodic_reward': env.reward_spec().shape,
'step_type': (),
}
dtypes = {
'observation': env.observation_spec().dtype,
'action': env.action_spec().dtype,
'discount': env.discount_spec().dtype,
'reward': env.reward_spec().dtype,
'episodic_reward': env.reward_spec().dtype,
'step_type': np.int64,
}
return {'shapes': shapes, 'dtypes': dtypes}
def bsuite_dataset(path: str,
shapes: Dict[str, Tuple[int]],
dtypes: Dict[str, type], # pylint:disable=g-bare-generic
num_threads: int,
batch_size: int,
num_shards: int,
shuffle_buffer_size: int = 100000,
shuffle: bool = True) -> tf.data.Dataset:
"""Create tf dataset for training."""
filenames = [f'{path}-{i:05d}-of-{num_shards:05d}' for i in range(
num_shards)]
file_ds = tf.data.Dataset.from_tensor_slices(filenames)
if shuffle:
file_ds = file_ds.repeat().shuffle(num_shards)
example_ds = file_ds.interleave(
functools.partial(tf.data.TFRecordDataset, compression_type='GZIP'),
cycle_length=tf.data.experimental.AUTOTUNE,
block_length=5)
if shuffle:
example_ds = example_ds.shuffle(shuffle_buffer_size)
def map_func(example):
example = _parse_seq_tf_example(example, shapes, dtypes)
return example
example_ds = example_ds.map(map_func, num_parallel_calls=num_threads)
if shuffle:
example_ds = example_ds.repeat().shuffle(batch_size * 10)
example_ds = example_ds.map(
_build_sars_example,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
example_ds = example_ds.batch(batch_size, drop_remainder=True)
example_ds = example_ds.prefetch(tf.data.experimental.AUTOTUNE)
return example_ds
def load_offline_bsuite_dataset(
bsuite_id: str,
path: str,
batch_size: int,
num_shards: int = 1,
num_threads: int = 1,
single_precision_wrapper: bool = True,
shuffle: bool = True) -> Tuple[tf.data.Dataset,
dm_env.Environment]:
"""Load bsuite offline dataset."""
# Data file path format: {path}-?????-of-{num_shards:05d}
# The dataset is not deterministic and not repeated if shuffle = False.
environment = bsuite.load_from_id(bsuite_id)
if single_precision_wrapper:
environment = single_precision.SinglePrecisionWrapper(environment)
params = bsuite_dataset_params(environment)
dataset = bsuite_dataset(path=path,
num_threads=num_threads,
batch_size=batch_size,
num_shards=num_shards,
shuffle_buffer_size=2,
shuffle=shuffle,
**params)
return dataset, environment
Explanation: Copyright 2021 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
RL Unplugged: Offline DQN - Bsuite
Guide to training an Acme DQN agent on Bsuite data.
<a href="https://colab.research.google.com/github/deepmind/deepmind_research/blob/master/rl_unplugged/atari_dqn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
tmp_path = 'gs://rl_unplugged/bsuite'
level = 'catch'
dir = '0_0.0'
filename = '0_full'
path = f'{tmp_path}/{level}/{dir}/{filename}'
batch_size = 2 #@param
bsuite_id = level + '/0'
dataset, environment = load_offline_bsuite_dataset(bsuite_id=bsuite_id,
path=path,
batch_size=batch_size)
dataset = dataset.prefetch(1)
Explanation: Dataset and environment
End of explanation
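Before building the learner it can help to pull one batch and confirm the SARS' structure assembled by the loading utilities looks sensible. A small sanity-check sketch (assumes TF2 eager execution, the default here):
# Peek at a single batch; sample.data is the (o_tm1, a_tm1, r_t, d_t, o_t) tuple built in _build_sars_example
sample = next(iter(dataset))
print(tree.map_structure(lambda t: t.shape, sample.data))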
# Get total number of actions.
num_actions = environment.action_spec().num_values
obs_spec = environment.observation_spec()
print(environment.observation_spec())
# Create the Q network.
network = snt.Sequential([
snt.flatten,
snt.nets.MLP([56, 56]),
snt.nets.MLP([num_actions])
])
acme_utils.create_variables(network, [environment.observation_spec()])
# Create a logger.
logger = loggers.TerminalLogger(label='learner', time_delta=1.)
# Create the DQN learner.
learner = dqn.DQNLearner(
network=network,
target_network=copy.deepcopy(network),
discount=0.99,
learning_rate=3e-4,
importance_sampling_exponent=0.2,
target_update_period=2500,
dataset=dataset,
logger=logger)
Explanation: DQN learner
End of explanation
for _ in range(10000):
learner.step()
Explanation: Training loop
End of explanation
# Create a logger.
logger = loggers.TerminalLogger(label='evaluation', time_delta=1.)
# Create an environment loop.
policy_network = snt.Sequential([
network,
lambda q: tf.argmax(q, axis=-1),
])
loop = acme.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedFeedForwardActor(policy_network=policy_network),
logger=logger)
loop.run(400)
Explanation: Evaluation
End of explanation |
2,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
P4J Periodogram demo
A simple demonstration of P4J's information theoretic periodogram
Step1: Generating a simple synthetic light curve
We create an irregularly sampled time series using a harmonic model composed of three sine waves with a specified Signal to Noise Ratio (SNR)
Step2: Finding the best frequency/period using P4J
Now let's assume that we do not know the best frequency for this time series. To find it, we sweep over a linear array of frequencies and find the ones that optimize the selected criterion. In this case we recover the 10 local optima of the quadratic mutual information periodogram.
Step3: Significance of the detected period
For a periodic time series with an oscillation frequency $f$, its periodogram will exhibit a peak at $f$ with high probability. But the inverse is not necessarily true: a peak in the periodogram does not imply that the time series is periodic. Spurious peaks may be produced by measurement errors, random fluctuations, aliasing or noise.
We can test if the candidate frequencies (periodogram peaks) are statistically significant using bootstrap. This gives a principled way to test whether the light curve is periodic or not. The core idea is to obtain the distribution of the maximum values of the periodogram on a set of surrogate light curves obtained by random-resampling. The general procedure is
For the null hypothesis that the light curve is not periodic
1. Generate surrogate light curves that comply with the null hypothesis
2. Compute the periodogram for each surrogate and save the maxima
3. Fit an extreme value probability density function (PDF) to the surrogate's maxima
4. For a given significance level (p-value) $\alpha$* obtain the associated confidence level from the fitted PDF
5. If the candidate frequency has a periodogram value greater than the confidence level then you can reject the null hypothesis (for the selected $\alpha$)
*$\alpha$ | Python Code:
from __future__ import division
import numpy as np
%matplotlib inline
import matplotlib.pylab as plt
import P4J
print("P4J version:")
print(P4J.__version__)
Explanation: P4J Periodogram demo
A simple demonstration of P4J's information theoretic periodogram
End of explanation
fundamental_freq = 2.0
lc_generator = P4J.synthetic_light_curve_generator(T=100.0, N=30)
lc_generator.set_model(f0=fundamental_freq, A=[1.0, 0.5, 0.25])
mjd, mag, err = lc_generator.draw_noisy_time_series(SNR=5.0, red_noise_ratio=0.25, outlier_ratio=0.0)
c_mjd, c_mag = lc_generator.get_clean_signal()
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(1, 2, 1)
ax.errorbar(mjd, mag, err, fmt='.')
ax.set_xlabel('Time [d]')
ax.set_title('Time series')
plt.grid()
ax = fig.add_subplot(1, 2, 2)
phase = np.mod(mjd, 1.0/fundamental_freq)*fundamental_freq
index = np.argsort(phase)
ax.errorbar(np.concatenate([np.sort(phase)-0.5, np.sort(phase)+0.5]),
np.concatenate([mag[index], mag[index]]),
np.concatenate([err[index], err[index]]), fmt='.', alpha=1.0, label='Noisy data')
ax.plot(np.concatenate([np.sort(phase)-0.5, np.sort(phase)+0.5]),
np.concatenate([c_mag[index], c_mag[index]]),
linewidth=8, alpha=0.5, label='Underlying model')
plt.legend()
ax.set_xlabel('Phase')
ax.set_title('Folded time series')
plt.grid()
plt.tight_layout();
Explanation: Generating a simple synthetic light curve
We create an irregularly sampled time series using a harmonic model composed of three sine waves with a specified Signal to Noise Ratio (SNR)
End of explanation
#my_per = P4J.periodogram(method='LKSL') # Lafler-Kinman's string length
#my_per = P4J.periodogram(method='PDM1') # Phase Dispersion Minimization
#my_per = P4J.periodogram(method='MHAOV') # Multi-harmonic Analysis of Variance
#my_per = P4J.periodogram(method='QME') # Quadratical mutual entropy or total correlation
#my_per = P4J.periodogram(method='QMICS') # Quadratic mutual information (QMI) based on Cauchy Schwarz distance
my_per = P4J.periodogram(method='QMIEU') # QMI based on Euclidean distance
my_per.set_data(mjd, mag, err)
my_per.frequency_grid_evaluation(fmin=0.0, fmax=5.0, fresolution=1e-3) # frequency sweep parameters
my_per.finetune_best_frequencies(fresolution=1e-4, n_local_optima=10)
freq, per = my_per.get_periodogram()
fbest, pbest = my_per.get_best_frequencies() # Return best n_local_optima frequencies
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(1, 2, 1)
ax.plot(freq, per)
ymin, ymax = ax.get_ylim()
ax.plot([fbest[0], fbest[0]], [ymin, ymax], linewidth=8, alpha=0.2)
ax.set_ylim([ymin, ymax])
ax.set_xlabel('Frequency [1/MJD]')
ax.set_ylabel('QMI Periodogram')
plt.title('Periodogram')
plt.grid()
ax = fig.add_subplot(1, 2, 2)
phase = np.mod(mjd, 1.0/fbest[0])*fbest[0]
idx = np.argsort(phase)
ax.errorbar(np.concatenate([np.sort(phase), np.sort(phase)+1.0]),
np.concatenate([mag[idx], mag[idx]]),
np.concatenate([err[idx], err[idx]]), fmt='.')
plt.title('Best period')
ax.set_xlabel('Phase @ %0.5f [1/d], %0.5f [d]' %(fbest[0], 1.0/fbest[0]))
ax.set_ylabel('Magnitude')
plt.grid()
plt.tight_layout();
Explanation: Finding the best frequency/period using P4J
Now let's assume that we do not know the best frequency for this time series. To find it, we sweep over a linear array of frequencies and find the ones that optimize the selected criterion. In this case we recover the 10 local optima of the quadratic mutual information periodogram.
End of explanation
# TODO: Create bootstrap class with iid bootstrap, moving block bootstrap, write proper documentation and debug this!
def block_bootstrap(mjd, mag, err, block_length=10.0, rseed=None):
np.random.seed(rseed)
N = len(mjd)
mjd_boot = np.zeros(shape=(N, ))
mag_boot = np.zeros(shape=(N, ))
err_boot = np.zeros(shape=(N, ))
k = 0
last_time = 0.0
for max_idx in range(2, N):
if mjd[-1] - mjd[-max_idx] > block_length:
break
while k < N:
idx_start = np.random.randint(N-max_idx-1)
for idx_end in range(idx_start+1, N):
if mjd[idx_end] - mjd[idx_start] > block_length or k + idx_end - idx_start >= N-1:
break
#print("%d %d %d %d" %(idx_start, idx_end, k, k + idx_end - idx_start))
mjd_boot[k:k+idx_end-idx_start] = mjd[idx_start:idx_end] - mjd[idx_start] + last_time
mag_boot[k:k+idx_end-idx_start] = mag[idx_start:idx_end]
err_boot[k:k+idx_end-idx_start] = err[idx_start:idx_end]
last_time = mjd[idx_end] - mjd[idx_start] + last_time
k += idx_end - idx_start
return mjd_boot, mag_boot, err_boot
# We will use 200 surrogates and save 20 local maxima per light curve
pbest_bootstrap = np.zeros(shape=(200, 20))
for i in range(pbest_bootstrap.shape[0]):
# P = np.random.permutation(len(mjd))
# my_per.set_data(mjd, mag[P], err[P]) # IID bootstrap
mjd_b, mag_b, err_b = block_bootstrap(mjd, mag, err, block_length=0.9973)
my_per.set_data(mjd_b, mag_b, err_b)
my_per.frequency_grid_evaluation(fmin=0.0, fmax=4.0, fresolution=1e-3)
my_per.finetune_best_frequencies(fresolution=1e-4, n_local_optima=pbest_bootstrap.shape[1])
_, pbest_bootstrap[i, :] = my_per.get_best_frequencies()
#from scipy.stats import gumbel_r # Gumbel right (for maxima), its has 2 parameters
from scipy.stats import genextreme # Generalized extreme value distribution, it has 3 parameters
param = genextreme.fit(pbest_bootstrap.ravel())
rv = genextreme(c=param[0], loc=param[1], scale=param[2])
x = np.linspace(rv.ppf(0.001), rv.ppf(0.999), 100)
fig = plt.figure(figsize=(14, 4))
ax = fig.add_subplot(1, 2, 1)
_ = ax.hist(pbest_bootstrap.ravel(), bins=20, density=True, alpha=0.2, label='Peak\'s histogram')
ax.plot(x, rv.pdf(x), 'r-', lw=5, alpha=0.6, label='Fitted Gumbel PDF')
ymin, ymax = ax.get_ylim()
ax.plot([pbest[0], pbest[0]], [ymin, ymax], '-', linewidth=4, alpha=0.5, label="Max per value")
for p_val in [1e-2, 1e-1]:
ax.plot([rv.ppf(1.-p_val), rv.ppf(1.-p_val)], [ymin, ymax], '--', linewidth=4, alpha=0.5, label=str(p_val))
ax.set_ylim([ymin, ymax])
plt.xlabel('Periodogram value'); plt.legend()
ax = fig.add_subplot(1, 2, 2)
ax.plot(freq, per)
ymin, ymax = ax.get_ylim()
ax.plot([fbest[0], fbest[0]], [ymin, ymax], '-', linewidth=8, alpha=0.2)
# Print confidence bars
xmin, xmax = ax.get_xlim()
for p_val in [1e-2, 1e-1]:
ax.plot([xmin, xmax], [rv.ppf(1.-p_val), rv.ppf(1.-p_val)], '--', linewidth=4, alpha=0.5, label=str(p_val))
ax.set_xlim([xmin, xmax]); ax.set_ylim([ymin, ymax])
ax.set_xlabel('Frequency [1/d]'); ax.set_ylabel('Periodogram')
plt.grid(); plt.legend();
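# Extra check (a small sketch, not part of the original recipe): turn the fitted
# extreme-value distribution into an explicit p-value for the observed peak.
p_value_peak = 1.0 - rv.cdf(pbest[0])
print('p-value of the best periodogram peak: %0.4g' % p_value_peak)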
Explanation: Significance of the detected period
For a periodic time series with an oscillation frequency $f$, its periodogram will exhibit a peak at $f$ with high probability. But the inverse is not necessarily true: a peak in the periodogram does not imply that the time series is periodic. Spurious peaks may be produced by measurement errors, random fluctuations, aliasing or noise.
We can test if the candidate frequencies (periodogram peaks) are statistically significant using bootstrap. This gives a principled way to test whether the light curve is periodic or not. The core idea is to obtain the distribution of the maximum values of the periodogram on a set of surrogate light curves obtained by random-resampling. The general procedure is
For the null hypothesis that the light curve is not periodic
1. Generate surrogate light curves that comply with the null hypothesis
2. Compute the periodogram for each surrogate and save the maxima
3. Fit an extreme value probability density function (PDF) to the surrogate's maxima
4. For a given significance level (p-value) $\alpha$* obtain the associated confidence level from the fitted PDF
5. If the candidate frequency has a periodogram value greater than the confidence level then you can reject the null hypothesis (for the selected $\alpha$)
*$\alpha$: Probability of rejecting the null hypothesis when it is actually true
End of explanation |
2,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple demo of reparameterizing the gamma distribution
First, check out our blog post for the complete scoop. Once you've read that, the
functions below will make sense.
Step1: Define a simple observation model
In this case, we'll use a Poisson observation model with
a true rate of $z = 3.0$
Step2: We need the gamma entropy too...
Step3: Now use autograd to compute necessary gradients
Step4: Optimize ELBO with SGD | Python Code:
import autograd.numpy as np
import autograd.numpy.random as npr
from autograd.scipy.special import gammaln, psi
from autograd import grad
from autograd.optimizers import adam, sgd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("talk")
sns.set_style("white")
%matplotlib inline
npr.seed(1)
# theta are the unconstrained parameters
def unwrap(theta):
alpha = np.exp(theta[0]) + 1
beta = np.exp(theta[1])
return alpha, beta
def wrap(alpha, beta):
return np.array([np.log(alpha-1), np.log(beta)])
# Log density of Ga(alpha, beta)
def log_q(z, theta):
alpha, beta = unwrap(theta)
return -gammaln(alpha) + alpha * np.log(beta) \
+ (alpha - 1) * np.log(z) - beta * z
# Log density of N(0, 1)
def log_s(epsilon):
return -0.5 * np.log(2*np.pi) -0.5 * epsilon**2
# Transformation and its derivative
def h(epsilon, theta):
alpha, beta = unwrap(theta)
return (alpha - 1./3.) * (1 + epsilon/np.sqrt(9*alpha-3))**3 / beta
def dh(epsilon, theta):
alpha, beta = unwrap(theta)
return (alpha - 1./3) * 3./np.sqrt(9*alpha - 3.) * \
(1+epsilon/np.sqrt(9*alpha-3))**2 / beta
# Log density of proposal r(z) = s(epsilon) * |dh/depsilon|^{-1}
def log_r(epsilon, theta):
return -np.log(dh(epsilon, theta)) + log_s(epsilon)
# Density of the accepted value of epsilon
# (this is just a change of variables too)
def log_pi(epsilon, theta):
return log_s(epsilon) + \
log_q(h(epsilon, theta), theta) - \
log_r(epsilon, theta)
# To compute expectations with respect to pi,
# we need to be able to sample from it.
# This is simple -- sample a gamma and pass it
# through h^{-1}
def h_inverse(z, theta):
alpha, beta = unwrap(theta)
return np.sqrt(9.0 * alpha - 3) * ((beta * z / (alpha - 1./3))**(1./3) - 1)
def sample_pi(theta, size=(1,)):
alpha, beta = unwrap(theta)
return h_inverse(npr.gamma(alpha, 1./beta, size=size), theta)
# Test
z = 2.0
th = npr.randn(2)
assert np.allclose(h_inverse(h(z, th), th), z)
# Plot the acceptance probability in epsilon space
eps = np.linspace(-3,3,100)
th = npr.randn(2)
alpha, beta = unwrap(th)
eps_samples = sample_pi(th, 10000)
plt.plot(eps, np.exp(log_pi(eps, th)), 'r', label="$\pi(\epsilon, \\theta)$")
plt.hist(eps_samples, 40, normed=True, label="sampled")
plt.xlim(-3, 3)
plt.xlabel("$\epsilon$")
plt.ylabel("$\pi(\epsilon, \\theta)$")
plt.legend(loc="upper right")
Explanation: A simple demo of reparameterizing the gamma distribution
First, check out our blog post for the complete scoop. Once you've read that, the
functions below will make sense.
End of explanation
z_true = 3.0
a0, b0 = 1.0, 1.0
N = 10
x = npr.poisson(z_true, size=N)
# Define the Poisson log likelihood
def log_p(x, z):
x = np.atleast_1d(x)
z = np.atleast_1d(z)
lp = -gammaln(a0) + a0 * np.log(b0) \
+ (a0 - 1) * np.log(z) - b0 * z
ll = np.sum(-gammaln(x[:,None]+1) - z[None,:]
+ x[:,None] * np.log(z[None, :]),
axis=0)
return lp + ll
# We can compute the true posterior in closed form
alpha_true = a0 + x.sum()
beta_true = b0 + N
Explanation: Define a simple observation model
In this case, we'll use a Poisson observation model with
a true rate of $z = 3.0$
End of explanation
def gamma_entropy(theta):
alpha, beta = unwrap(theta)
return alpha - np.log(beta) + gammaln(alpha) + (1-alpha) * psi(alpha)
Explanation: We need the gamma entropy too...
End of explanation
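For reference, the closed-form differential entropy that gamma_entropy above implements is the standard result for a Gamma(alpha, beta) distribution in the shape-rate parameterization:
$H[q] = \alpha - \ln\beta + \ln\Gamma(\alpha) + (1-\alpha)\psi(\alpha)$
where $\psi$ is the digamma function.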
def reparam_objective(epsilon, theta):
return np.mean(log_p(x, h(epsilon, theta)), axis=0)
def score_objective(epsilon, theta):
# unbox the score so that we don't take gradients through it
score = log_p(x, h(epsilon, theta)).value
return np.mean(score * log_pi(epsilon, theta), axis=0)
g_reparam = grad(reparam_objective, argnum=1)
g_score = grad(score_objective, argnum=1)
g_entropy = grad(gamma_entropy)
def elbo(theta, N_samples=10):
epsilon = sample_pi(theta, size=(N_samples,))
return np.mean(log_p(x, h(epsilon, theta))) + gamma_entropy(theta)
def g_elbo(theta, N_samples=10):
epsilon = sample_pi(theta, size=(N_samples,))
return g_reparam(epsilon, theta) \
+ g_score(epsilon, theta) \
+ g_entropy(theta)
Explanation: Now use autograd to compute necessary gradients
End of explanation
elbo_values = []
def callback(theta, t, g):
elbo_values.append(elbo(theta))
theta_0 = wrap(2.0, 2.0)
theta_star = sgd(lambda th, t: -1 * g_elbo(th),
theta_0, num_iters=300, callback=callback)
alpha_star, beta_star = unwrap(theta_star)
print("true a = ", alpha_true)
print("infd a = ", alpha_star)
print("true b = ", beta_true)
print("infd b = ", beta_star)
print("E_q(z; theta)[z] = ", alpha_star/beta_star)
plt.plot(elbo_values)
plt.xlabel("Iteration")
plt.ylabel("ELBO")
import scipy.stats
zs = np.linspace(0,6,100)
plt.plot(zs, scipy.stats.gamma(alpha_true, scale=1./beta_true).pdf(zs), label="true post.")
plt.plot(zs, scipy.stats.gamma(alpha_star, scale=1./beta_star).pdf(zs), label="var. post.")
plt.legend(loc="upper right")
plt.xlabel("$z$")
plt.ylabel("$p(z \\mid x)$")
Explanation: Optimize ELBO with SGD
End of explanation |
2,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Node2Vec showcase
This notebook is about showcasing the qualities of the node2vec algorithm, which can be found and pip installed through this link.
Data is taken from https
Step1: Data loading and pre processing
Step2: Here comes the ugly part.
Since we want to put each team on a graph of nodes and edges, I had to hard-code the relationships between the different FIFA 17 formations.
Also, since some formations have the same role (CB for example) in different positions connected to different players, I first use a distinct name for each role, which I will trim after the learning process so the positions end up the same.
Finally, since positions are connected differently in each formation, we will add a suffix for the graph representation and trim it as well before the Word2vec process
Example
Step3: Creating the graphs for each team
Step4: Node2Vec algorithm
Step5: Most similar nodes
Step6: Visualization
Step7: Node2Vec usage | Python Code:
%matplotlib inline
import warnings
from text_unidecode import unidecode
from collections import deque
warnings.filterwarnings('ignore')
import pandas as pd
from sklearn.manifold import TSNE
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
from node2vec import Node2Vec
sns.set_style('whitegrid')
Explanation: Node2Vec showcase
This notebook is about showcasing the qualities of the node2vec algorithm, which can be found and pip installed through this link.
Data is taken from https://www.kaggle.com/artimous/complete-fifa-2017-player-dataset-global
End of explanation
# Load data
data = pd.read_csv('./FullData.csv', usecols=['Name', 'Club', 'Club_Position', 'Rating'])
# Lowercase columns for convenience
data.columns = list(map(str.lower, data.columns))
# Reformat strings: lowercase, ' ' -> '_' and é, ô etc. -> e, o
reformat_string = lambda x: unidecode(str.lower(x).replace(' ', '_'))
data['name'] = data['name'].apply(reformat_string)
data['club'] = data['club'].apply(reformat_string)
# Lowercase position
data['club_position'] = data['club_position'].str.lower()
# Ignore substitutes and reserves
data = data[(data['club_position'] != 'sub') & (data['club_position'] != 'res')]
# Fix lcm rcm -> cm cm
fix_positions = {'rcm' : 'cm', 'lcm': 'cm', 'rcb': 'cb', 'lcb': 'cb', 'ldm': 'cdm', 'rdm': 'cdm'}
data['club_position'] = data['club_position'].apply(lambda x: fix_positions.get(x, x))
# For example sake we will keep only 7 clubs
clubs = {'real_madrid', 'manchester_utd',
'manchester_city', 'chelsea', 'juventus',
'fc_bayern', 'napoli'}
data = data[data['club'].isin(clubs)]
# Verify we have 11 player for each team
assert all(n_players == 11 for n_players in data.groupby('club')['name'].nunique())
data
Explanation: Data loading and pre processing
End of explanation
FORMATIONS = {'4-3-3_4': {'gk': ['cb_1', 'cb_2'], # Real madrid
'lb': ['lw', 'cb_1', 'cm_1'],
'cb_1': ['lb', 'cb_2', 'gk'],
'cb_2': ['rb', 'cb_1', 'gk'],
'rb': ['rw', 'cb_2', 'cm_2'],
'cm_1': ['cam', 'lw', 'cb_1', 'lb'],
'cm_2': ['cam', 'rw', 'cb_2', 'rb'],
'cam': ['cm_1', 'cm_2', 'st'],
'lw': ['cm_1', 'lb', 'st'],
'rw': ['cm_2', 'rb', 'st'],
'st': ['cam', 'lw', 'rw']},
'5-2-2-1': {'gk': ['cb_1', 'cb_2', 'cb_3'], # Chelsea
'cb_1': ['gk', 'cb_2', 'lwb'],
'cb_2': ['gk', 'cb_1', 'cb_3', 'cm_1', 'cb_2'],
'cb_3': ['gk', 'cb_2', 'rwb'],
'lwb': ['cb_1', 'cm_1', 'lw'],
'cm_1': ['lwb', 'cb_2', 'cm_2', 'lw', 'st'],
'cm_2': ['rwb', 'cb_2', 'cm_1', 'rw', 'st'],
'rwb': ['cb_3', 'cm_2', 'rw'],
'lw': ['lwb', 'cm_1', 'st'],
'st': ['lw', 'cm_1', 'cm_2', 'rw'],
'rw': ['st', 'rwb', 'cm_2']},
'4-3-3_2': {'gk': ['cb_1', 'cb_2'], # Man UTD / CITY
'lb': ['cb_1', 'cm_1'],
'cb_1': ['lb', 'cb_2', 'gk', 'cdm'],
'cb_2': ['rb', 'cb_1', 'gk', 'cdm'],
'rb': ['cb_2', 'cm_2'],
'cm_1': ['cdm', 'lw', 'lb', 'st'],
'cm_2': ['cdm', 'rw', 'st', 'rb'],
'cdm': ['cm_1', 'cm_2', 'cb_1', 'cb_2'],
'lw': ['cm_1', 'st'],
'rw': ['cm_2', 'st'],
'st': ['cm_1', 'cm_2', 'lw', 'rw']}, # Juventus, Bayern
'4-2-3-1_2': {'gk': ['cb_1', 'cb_2'],
'lb': ['lm', 'cdm_1', 'cb_1'],
'cb_1': ['lb', 'cdm_1', 'gk', 'cb_2'],
'cb_2': ['rb', 'cdm_2', 'gk', 'cb_1'],
'rb': ['cb_2', 'rm', 'cdm_2'],
'lm': ['lb', 'cdm_1', 'st', 'cam'],
'rm': ['rb', 'cdm_2', 'st', 'cam'],
'cdm_1': ['lm', 'cb_1', 'rb', 'cam'],
'cdm_2': ['rm', 'cb_2', 'lb', 'cam'],
'cam': ['cdm_1', 'cdm_2', 'rm', 'lm', 'st'],
'st': ['lm', 'rm', 'cam']},
'4-3-3': {'gk': ['cb_1', 'cb_2'], # Napoli
'lb': ['cb_1', 'cm_1'],
'cb_1': ['lb', 'cb_2', 'gk', 'cm_2'],
'cb_2': ['rb', 'cb_1', 'gk', 'cm_2'],
'rb': ['cb_2', 'cm_3'],
'cm_1': ['cm_2', 'lw', 'lb'],
'cm_3': ['cm_2', 'rw', 'rb'],
'cm_2': ['cm_1', 'cm_3', 'st', 'cb_1', 'cb_2'],
'lw': ['cm_1', 'st'],
'rw': ['cm_3', 'st'],
'st': ['cm_2', 'lw', 'rw']}}
Explanation: Here comes the ugly part.
Since we want to put each team on a graph of nodes and edges, I had to hard-code the relationships between the different FIFA 17 formations.
Also, since some formations have the same role (CB for example) in different positions connected to different players, I first use a distinct name for each role, which I will trim after the learning process so the positions end up the same.
Finally, since positions are connected differently in each formation, we will add a suffix for the graph representation and trim it as well before the Word2vec process
Example:
'cb' will become 'cb_1_real_madrid' because it is the first CB, in Real Madrid's formation, and before running the Word2Vec algorithm it will be trimmed to cb again
Formations
End of explanation
add_club_suffix = lambda x, c: x + '_{}'.format(c)
graph = nx.Graph()
formatted_positions = set()
def club2graph(club_name, formation, graph):
club_data = data[data['club'] == club_name]
club_formation = FORMATIONS[formation]
club_positions = dict()
# Assign positions to players
available_positions = deque(club_formation)
available_players = set(zip(club_data['name'], club_data['club_position']))
roster = dict() # Here we will store the assigned players and positions
while available_positions:
position = available_positions.pop()
name, pos = [(name, position) for name, p in available_players if position.startswith(p)][0]
roster[name] = pos
available_players.remove((name, pos.split('_')[0]))
reverse_roster = {v: k for k, v in roster.items()}
# Build the graph
for name, position in roster.items():
# Connect to team name
graph.add_edge(name, club_name)
# Inter team connections
for teammate_position in club_formation[position]:
# Connect positions
graph.add_edge(add_club_suffix(position, club_name),
add_club_suffix(teammate_position, club_name))
# Connect player to teammate positions
graph.add_edge(name,
add_club_suffix(teammate_position, club_name))
# Connect player to teammates
graph.add_edge(name, reverse_roster[teammate_position])
# Save for later trimming
formatted_positions.add(add_club_suffix(position, club_name))
formatted_positions.add(add_club_suffix(teammate_position, club_name))
return graph
teams = [('real_madrid', '4-3-3_4'),
('chelsea', '5-2-2-1'),
('manchester_utd', '4-3-3_2'),
('manchester_city', '4-3-3_2'),
('juventus', '4-2-3-1_2'),
('fc_bayern', '4-2-3-1_2'),
('napoli', '4-3-3')]
graph = club2graph('real_madrid', '4-3-3_4', graph)
for team, formation in teams:
graph = club2graph(team, formation, graph)
Explanation: Creating the graphs for each team
End of explanation
node2vec = Node2Vec(graph, dimensions=20, walk_length=16, num_walks=100, workers=2)
fix_formatted_positions = lambda x: x.split('_')[0] if x in formatted_positions else x
reformatted_walks = [list(map(fix_formatted_positions, walk)) for walk in node2vec.walks]
node2vec.walks = reformatted_walks
model = node2vec.fit(window=10, min_count=1)
Explanation: Node2Vec algorithm
End of explanation
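Before querying the embeddings it is worth spot-checking that the suffix trimming above really produced plain position tokens in the walks. A one-line sketch:
# Positions in the reformatted walks should now read 'cb', 'st', ... without the club suffix
print(node2vec.walks[0][:10])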
for node, _ in model.most_similar('rw'):
# Show only players
if len(node) > 3:
print(node)
for node, _ in model.most_similar('gk'):
# Show only players
if len(node) > 3:
print(node)
for node, _ in model.most_similar('real_madrid'):
print(node)
for node, _ in model.most_similar('paulo_dybala'):
print(node)
Explanation: Most similar nodes
End of explanation
player_nodes = [x for x in model.wv.vocab if len(x) > 3 and x not in clubs]
embeddings = np.array([model.wv[x] for x in player_nodes])
tsne = TSNE(n_components=2, random_state=7, perplexity=15)
embeddings_2d = tsne.fit_transform(embeddings)
# Assign colors to players
team_colors = {
'real_madrid': 'lightblue',
'chelsea': 'b',
'manchester_utd': 'r',
'manchester_city': 'teal',
'juventus': 'gainsboro',
'napoli': 'deepskyblue',
'fc_bayern': 'tomato'
}
data['color'] = data['club'].apply(lambda x: team_colors[x])
player_colors = dict(zip(data['name'], data['color']))
colors = [player_colors[x] for x in player_nodes]
figure = plt.figure(figsize=(11, 9))
ax = figure.add_subplot(111)
ax.scatter(embeddings_2d[:, 0], embeddings_2d[:, 1], c=colors)
# Create team patches for legend
team_patches = [mpatches.Patch(color=color, label=team) for team, color in team_colors.items()]
ax.legend(handles=team_patches);
Explanation: Visualization
End of explanation
import networkx as nx
from node2vec import Node2Vec
EMBEDDING_FILENAME = 'emd_file'
EMBEDDING_MODEL_FILENAME = 'emd_model'
EDGES_EMBEDDING_FILENAME = 'edges_emd'
# Create a graph
graph = nx.fast_gnp_random_graph(n=100, p=0.5)
# Precompute probabilities and generate walks - **ON WINDOWS ONLY WORKS WITH workers=1**
node2vec = Node2Vec(graph, dimensions=64, walk_length=30, num_walks=200, workers=4) # Use temp_folder for big graphs
# Embed nodes
model = node2vec.fit(window=10, min_count=1, batch_words=4) # Any keywords acceptable by gensim.Word2Vec can be passed, `dimensions` and `workers` are automatically passed (from the Node2Vec constructor)
# Look for most similar nodes
model.wv.most_similar('2') # Output node names are always strings
# Save embeddings for later use
model.wv.save_word2vec_format(EMBEDDING_FILENAME)
# Save model for later use
model.save(EMBEDDING_MODEL_FILENAME)
# Embed edges using Hadamard method
from node2vec.edges import HadamardEmbedder
edges_embs = HadamardEmbedder(keyed_vectors=model.wv)
# Get all edges in a separate KeyedVectors instance - use with caution could be huge for big networks
edges_kv = edges_embs.as_keyed_vectors()
# Look for most similar edges - this time tuples must be sorted and as str
edges_kv.most_similar(str(('1', '2')))
# Save embeddings for later use
edges_kv.save_word2vec_format(EDGES_EMBEDDING_FILENAME)
# Look for embeddings on the fly - here we pass normal tuples
edges_embs[('1', '2')]
Explanation: Node2Vec usage
End of explanation |
2,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GLM
Step1: Here, 'log_radon_t' is the dependent variable, while 'floor_t' and 'county_idx_t' determine the independent variables.
Step2: Random variable 'radon_like', associated with 'log_radon_t', should be given to the function for ADVI to denote that as observations in the likelihood term.
Step3: On the other hand, 'minibatches' should include the three variables above.
Step4: Then, run ADVI with mini-batch.
Step5: Check the trace of ELBO and compare the result with MCMC. | Python Code:
%matplotlib inline
import theano
theano.config.floatX = 'float64'
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import pandas as pd
data = pd.read_csv('../data/radon.csv')
county_names = data.county.unique()
county_idx = data['county_code'].values
n_counties = len(data.county.unique())
Explanation: GLM: Mini-batch ADVI on hierarchical regression model
Unlike Gaussian mixture models, (hierarchical) regression models have independent variables. These variables affect the likelihood function, but are not random variables. When using mini-batch, we should take care of that.
End of explanation
import theano.tensor as tt
log_radon_t = tt.vector()
log_radon_t.tag.test_value = np.zeros(1)
floor_t = tt.vector()
floor_t.tag.test_value = np.zeros(1)
county_idx_t = tt.ivector()
county_idx_t.tag.test_value = np.zeros(1, dtype='int32')
minibatch_tensors = [log_radon_t, floor_t, county_idx_t]
with pm.Model() as hierarchical_model:
# Hyperpriors for group nodes
mu_a = pm.Normal('mu_alpha', mu=0., sd=100**2)
sigma_a = pm.Uniform('sigma_alpha', lower=0, upper=100)
mu_b = pm.Normal('mu_beta', mu=0., sd=100**2)
sigma_b = pm.Uniform('sigma_beta', lower=0, upper=100)
# Intercept for each county, distributed around group mean mu_a
# Above we just set mu and sd to a fixed value while here we
# plug in a common group distribution for all a and b (which are
# vectors of length n_counties).
a = pm.Normal('alpha', mu=mu_a, sd=sigma_a, shape=n_counties)
# Intercept for each county, distributed around group mean mu_a
b = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=n_counties)
# Model error
eps = pm.Uniform('eps', lower=0, upper=100)
# Model prediction of radon level
# a[county_idx] translates to a[0, 0, 0, 1, 1, ...],
# we thus link multiple household measures of a county
# to its coefficients.
radon_est = a[county_idx_t] + b[county_idx_t] * floor_t
# Data likelihood
radon_like = pm.Normal('radon_like', mu=radon_est, sd=eps, observed=log_radon_t)
Explanation: Here, 'log_radon_t' is the dependent variable, while 'floor_t' and 'county_idx_t' determine the independent variables.
End of explanation
minibatch_RVs = [radon_like]
Explanation: Random variable 'radon_like', associated with 'log_radon_t', should be given to the function for ADVI to denote that as observations in the likelihood term.
End of explanation
def minibatch_gen(data):
rng = np.random.RandomState(0)
while True:
ixs = rng.randint(len(data), size=100)
yield data.log_radon.values[ixs],\
data.floor.values[ixs],\
data.county_code.values.astype('int32')[ixs]
minibatches = minibatch_gen(data)
total_size = len(data)
Explanation: On the other hand, 'minibatches' should include the three variables above.
End of explanation
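As a quick sanity check (a sketch, not required for the fit), one mini-batch can be drawn to confirm its order matches minibatch_tensors:
# Order must line up with (log_radon_t, floor_t, county_idx_t); each array has the batch size of 100
log_radon_batch, floor_batch, county_batch = next(minibatches)
print(log_radon_batch.shape, floor_batch.shape, county_batch.shape)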
means, sds, elbos = pm.variational.advi_minibatch(
model=hierarchical_model, n=40000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-2, epsilon=1.0
)
Explanation: Then, run ADVI with mini-batch.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
plt.plot(elbos)
plt.ylim(-5000, 0);
# Inference button (TM)!
with pm.Model():
# Hyperpriors for group nodes
mu_a = pm.Normal('mu_alpha', mu=0., sd=100**2)
sigma_a = pm.Uniform('sigma_alpha', lower=0, upper=100)
mu_b = pm.Normal('mu_beta', mu=0., sd=100**2)
sigma_b = pm.Uniform('sigma_beta', lower=0, upper=100)
# Intercept for each county, distributed around group mean mu_a
# Above we just set mu and sd to a fixed value while here we
# plug in a common group distribution for all a and b (which are
# vectors of length n_counties).
a = pm.Normal('alpha', mu=mu_a, sd=sigma_a, shape=n_counties)
# Intercept for each county, distributed around group mean mu_a
b = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=n_counties)
# Model error
eps = pm.Uniform('eps', lower=0, upper=100)
# Model prediction of radon level
# a[county_idx] translates to a[0, 0, 0, 1, 1, ...],
# we thus link multiple household measures of a county
# to its coefficients.
radon_est = a[county_idx] + b[county_idx] * data.floor.values
# Data likelihood
radon_like = pm.Normal(
'radon_like', mu=radon_est, sd=eps, observed=data.log_radon.values)
#start = pm.find_MAP()
step = pm.NUTS(scaling=means)
hierarchical_trace = pm.sample(2000, step, start=means, progressbar=False)
from scipy import stats
import seaborn as sns
varnames = means.keys()
fig, axs = plt.subplots(nrows=len(varnames), figsize=(12, 18))
for var, ax in zip(varnames, axs):
mu_arr = means[var]
sigma_arr = sds[var]
ax.set_title(var)
for i, (mu, sigma) in enumerate(zip(mu_arr.flatten(), sigma_arr.flatten())):
sd3 = (-4*sigma + mu, 4*sigma + mu)
x = np.linspace(sd3[0], sd3[1], 300)
y = stats.norm(mu, sigma).pdf(x)
ax.plot(x, y)
if hierarchical_trace[var].ndim > 1:
t = hierarchical_trace[var][i]
else:
t = hierarchical_trace[var]
sns.distplot(t, kde=False, norm_hist=True, ax=ax)
fig.tight_layout()
Explanation: Check the trace of ELBO and compare the result with MCMC.
End of explanation |
2,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Coding in Python, Part 2
Investigative Reporters and Editors Conference, New Orleans, June 2016<br />
By Aaron Kessler and Christopher Schnaars<br />
Lists
A list is a mutable (meaning it can be changed), ordered collection of objects. Everything in Python is an object, so a list can contain not only strings and numbers, but also functions and even other lists.
Let's make a list of new friends we've made at the IRE conference
Step1: In a list, you can retrieve a single item by index. To do this, type the name of your list, followed by the numeric position of the item you want inside square brackets. There's just one sticking point
Step2: Mutable objects
In many programming languages, it's common to assign a value (such as an integer or string) to a variable. Python works a bit differently. In our example above, Python creates a list in memory to house the names of our friends and then creates the object my_friends to point to the location in memory where this list is located. Why is that important? Well, for one thing, it means that if we make a copy of a list, Python keeps only one list in memory and just creates a second pointer. While this is not a concern in our example code, it could save a lot of computer memory for a large list containing hundreds or even thousands of objects. Consider this code
Step3: Here's where mutability will bite you, if you're not careful. You haven't met Cora yet and don't know how nice she is, so you decide to remove her from your list of friends, at least for now. See if you can figure out what to type in the box below. You want to use the remove method to remove 'Cora' from your_friends. Use the second box below to verify Cora has been removed
Step4: Perfect! Or is it? Let's take another look at my_friends
Step5: Uh-oh! You've unfriended Cora for me too! Remember that my_friends and your_friends are just pointers to the same list, so when you change one, you're really changing both. If you want the two lists to be independent, you must explicitly make a copy using, you guessed it, the copy method. In the box below
Step6: Dictionaries
In Python, a dictionary is a mutable, unordered collection of key-value pairs. Consider
Step7: Note that our data is enclosed in curly braces, which tell Python you are building a dictionary.<br />
<br />
Now notice what happens when we ask Python to spit this information back to us
Step8: Notice that Python did not return the list of key-value pairs in the same order as we entered them. Remember that dictionaries are unordered collections. This might bother you, but it shouldn't. You'll find in practice it is not a problem. Because key order varies, you can't access a value by index as you might with a list, so something like friend[0] will not work.
You might notice that the keys are listed in alphabetical order. This is <u>not</u> always the case. You can't assume keys will be in any other particular order.
To add a new key-value pair, simply put the new key in brackets and assign the value with an = sign. Try to add the key favorite_sketch to our dictionary, and set its value to dead parrot
Step9: To replace an existing value, simply re-assign it. Change first_name to Chris | Python Code:
my_friends
Explanation: Introduction to Coding in Python, Part 2
Investigative Reporters and Editors Conference, New Orleans, June 2016<br />
By Aaron Kessler and Christopher Schnaars<br />
Lists
A list is a mutable (meaning it can be changed), ordered collection of objects. Everything in Python is an object, so a list can contain not only strings and numbers, but also functions and even other lists.
Let's make a list of new friends we've made at the IRE conference: Aaron, Sue, Chris and Renee. We'll call our list my_friends. To create a list, put a comma-separated list of strings (our friends' names) inside square brackets ([]). These brackets are how we tell Python we're building a list. See if you can figure out what to do in the box below. If you can't figure it out, don't sweat it, and read on for the answer:
Did you get it? The answer: my_friends = ['Aaron', 'Sue', 'Chris', 'Renee']
Type my_friends in the box below, and you'll see Python remembers the order of the names:
We met Cora at an awesome Python class we just attended, so let's add her to our list of friends. To do that, we're going to use a method called append. A method is a bit of code associated with a Python object (in this case, our list) to provide some built-in functionality. Every time you create a list in Python, you get the functionality of the append method (and a bunch of other methods, too) for free.
To use the append method, type the name of your list, followed by a period and the name of the method, and then put the string we want to add to our list ('Cora') in parentheses. Try it:
End of explanation
# The first two names in the list (['Aaron', 'Sue'])
# The second and third names in the list (['Sue', 'Chris'])
# The second and fourth names in the list (['Sue', 'Renee'])
# The last three names in the list, in reverse order (['Cora', 'Renee', 'Chris'])
Explanation: In a list, you can retrieve a single item by index. To do this, type the name of your list, followed by the numeric position of the item you want inside square brackets. There's just one sticking point: Indices in Python are zero-based, which means the first item is at position 0, the second item is at position 1 and so on. If that sounds confusing, don't worry about it. There actually are very good, logical reasons for this behavior that we won't dive into here. For now, just accept our word that you'll get used to it and see if you can figure out what to type to get the first name in our list (Aaron):
You can retrieve a contiguous subset of names from your list, called a slice. To do this, type the name of your list and provide up to three parameters in square brackets, separated by colons. Just leave any parameter you don't need blank, and Python will use its default value. These parameters, in order, are:
<ul><li>The index of the first item you want. Default is 0.</li>
<li>The index of the first item you *don't* want. You can set this to a negative number to skip a specific number of items at the end of your `list`. For example, a value of -1 here would stop at the next-to-last item in your `list`. Default is the number of items in your `list` (also called the *length*).</li>
<li>The *step* value, which we can use to skip over names in our `list`. For example, if you want every other name, you could set the *step* to 2. If you want to go backwards through your `list`, set this to a negative number. Default is 1.</li></ul>
Use the boxes below to see if you can figure out how to retrieve these lists of names. There could be more than one way to answer each question:
<ol><li>The first two names in the list (`['Aaron', 'Sue']`)</li>
<li>The second and third names in the list (`['Sue', 'Chris']`)</li>
<li>The second and fourth names in the list (`['Sue', 'Renee']`)</li>
<li>The last three names in the list, in reverse order (`['Cora', 'Renee', 'Chris']`)</li></ol>
End of explanation
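In case you want to check your work, here is one possible set of answers for the four slices above (other slices can produce the same lists):
my_friends[0:2]   # ['Aaron', 'Sue']
my_friends[1:3]   # ['Sue', 'Chris']
my_friends[1:4:2] # ['Sue', 'Renee']
my_friends[:1:-1] # ['Cora', 'Renee', 'Chris']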
my_friends = ['Aaron', 'Sue', 'Chris', 'Renee', 'Cora']
your_friends = my_friends
print('My friends are: ')
print(my_friends)
print('\nAnd your friends are: ') # \n is code for newline.
print(your_friends)
Explanation: Mutable objects
In many programming languages, it's common to assign a value (such as an integer or string) to a variable. Python works a bit differently. In our example above, Python creates a list in memory to house the names of our friends and then creates the object my_friends to point to the location in memory where this list is located. Why is that important? Well, for one thing, it means that if we make a copy of a list, Python keeps only one list in memory and just creates a second pointer. While this is not a concern in our example code, it could save a lot of computer memory for a large list containing hundreds or even thousands of objects. Consider this code:
End of explanation
your_friends
Explanation: Here's where mutability will bite you, if you're not careful. You haven't met Cora yet and don't know how nice she is, so you decide to remove her from your list of friends, at least for now. See if you can figure out what to type in the box below. You want to use the remove method to remove 'Cora' from your_friends. Use the second box below to verify Cora has been removed:
End of explanation
my_friends
Explanation: Perfect! Or is it? Let's take another look at my_friends:
End of explanation
my_friends
your_friends
Explanation: Uh-oh! You've unfriended Cora for me too! Remember that my_friends and your_friends are just pointers to the same list, so when you change one, you're really changing both. If you want the two lists to be independent, you must explicitly make a copy using, you guessed it, the copy method. In the box below:
<ul><li>Add Cora back to `my_friends`.</li>
<li>Use the `copy` method to assign a copy of `my_friends` to `your_friends`.</li>
<li>Remove Cora from `your_friends`.</li></ul>
You can use the second and third boxes below to test whether your code is correct.
End of explanation
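One possible answer for the copy exercise above, using the same variable names as before:
my_friends.append('Cora')
your_friends = my_friends.copy()
your_friends.remove('Cora')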
friend = {'last_name': 'Schnaars', 'first_name': 'Christopher', 'works_for': 'USA Today', 'favorite_food': 'spam'}
Explanation: Dictionaries
In Python, a dictionary is a mutable, unordered collection of key-value pairs. Consider:
End of explanation
friend
Explanation: Note that our data is enclosed in curly braces, which tell Python you are building a dictionary.<br />
<br />
Now notice what happens when we ask Python to spit this information back to us:
End of explanation
friend
Explanation: Notice that Python did not return the list of key-value pairs in the same order as we entered them. Remember that dictionaries are unordered collections. This might bother you, but it shouldn't. You'll find in practice it is not a problem. Because key order varies, you can't access a value by index as you might with a list, so something like friend[0] will not work.
You might notice that the keys are listed in alphabetical order. This is <u>not</u> always the case. You can't assume keys will be in any other particular order.
To add a new key-value pair, simply put the new key in brackets and assign the value with an = sign. Try to add the key favorite_sketch to our dictionary, and set its value to dead parrot:
End of explanation
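A possible answer for the dictionary exercise above:
friend['favorite_sketch'] = 'dead parrot'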
friend
Explanation: To replace an existing value, simply re-assign it. Change first_name to Chris:
End of explanation |
2,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import CMIP5 from the module and start a session
The latest ARCCSSive stable version is available from the conda analysis27 environment
Anyone can load them both from raijin and the remote desktop.
Step1: The database location is saved in the $CMIP5_DB environment variable. This is defined automatically if you have loaded ARCCSSive from conda/analysis27.
Step2: Import CMIP5 from the module and use the method connect() to open a connection to the database.
Step3: Opening a connection creates a session object (in this case db). A session manages all the communication with the database and contains all the objects which you’ve loaded or associated with it during its lifespan. Every query to the database is run through the session.
There are a number of helper functions for common operations
Step4: models() return all the models recorded in the database,
experiments(), variables(), mips() produce similar lists for each respective field
Perform a simple search
To perform a search you can use the outputs( ) function.
outputs( ) is a 'shortcut' to perform a session.query on the Instances table.
The following example shows all the input arguments you can use, the order doesn't matter and you can omit any of them.
db.outputs( column-name='value', ... )
will return all the rows for the Instances table in the database.
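For example, a call might look like the line below; the five attribute values are purely hypothetical placeholders used for illustration.
results = db.outputs(experiment='rcp45', variable='tas', mip='Amon', ensemble='r1i1p1', model='MIROC5')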
Step5: You can check how many instances your search returned by using the query method count()
Step6: In this case we defined every possible constraint for the table and hence we get just one instance.
This should always be the case, if you use all the five attributes, because every instance is fully defined by these and each instance is unique.
We can loop through the instances returned by the search and access their attributes and their children ( i.e. related versions and files) attributes.
Step7: Navigate through search results
Let's have a better look at results
Step8: results is a query object but as we saw before we can loop through it as we do with a list.
In this particular case we have only one instance returned in results, but we still need to use an index to access it.
Step9: A useful attribute of an instance is versions, this is a list of all the versions associated to that particular instance.
From a database point of view these are all the rows in the Versions table which are related to that particular instance.
Step10: We have two versions available for this instance, we can loop through them and retrieve their attributes
Step11: If we want to get only the latest version, we can use the latest( ) method of the Instance class.
Step12: As you might have noticed latest( ) returns a list of Version objects rather than only one.
This is because there might be different copies of the same version, downloaded from different servers.
Currently the database lists all of them so that if you used one rather than the other in the past you can still find it.
There are plans though to keep just one copy per version to facilitate the collection management and save storage resources.
Other methods available for the Instances table (objects) are
Step13: !!Warning!!
In most cases you can use directly the drstee_path() method to get to the files, but it can be useful to find all the available versions.
For example if you want to make sure that a new version hasn't been added recently, DRSv2 it is updated only once a week.
Or if you find that the version linked by the DRSv2 is incomplete, there might be another copy of the same version.
We hope eventually to be able to have just one copy for each version and all of them clearly defined.
Filter search results
We can refine our results by using the SQLalchemy filter( ) function.
We will use the attributes ( or columns ) of the database tables as constraints.
So, first we need to import the tables definitions from ARCCSSive.
Step14: We can also import the unique( ) function. This function will give us all the possible values we can use to filter over a particular attribute.
Step15: Let's do a new query
Step16: We would like to filter the results by ensemble, so we will use unique( ) to get all the possible ensemble values.
Step17: unique( results, 'attribute' ) takes two inputs
Step18: We used the == equals operator to select all the r6i1p1 ensembles.
If we wanted all the "r6i1p#" ensembles regardless of their physics (p) value we could have used the like operator.
Step19: If we want to search two variables at the same time we can leave the variable constraints out of the query inputs,
and then use filter with the in_ operator to select them.
Step20: As you can see filter can follow directly the query, i.e. the outputs( ) function.
In fact, you can refine a query with as many successive filters as you want.
Step21: Using the search results to open the files
Once we have found the instances and versions we want to use we can use their path to find the files and work with them.
First we load numpy and the netcdf module.
Step22: All you need to open a file is the location, this is stored in the Versions table in the database as path.
Alternatively you can use the drstree path, that is returned by the Instance drstree_path( ) method.
Let's define a simple function that reads a variable from a file and calculates its maximum value.
We will use MFDataset( ) from the netCDF4 module to open all the netcdf files in the input path as one aggregated file.
Step23: Now we perform a search, loop through the results and pass the Version path attribute to the var_max( ) function
Step24: NB if you pass the v.path value directly you get an error because the database returns a unicode string, so you need to use the str( ) function to convert it to a normal string.
How to integrate ARCCSSive in your python script
In the previous example we simply looped through the results returned by the search as they were and passed them to a function that opened the files.
But what if we want to do something more complex?
Let's say that we want to pass two variables to a function and do it for every model/ensemble that has both of them for a fixed experiment and mip
Mostly users would somehow loop over the drstree path, doing something like
Step25: Now let's do another search and get tasmin and tasmax
Step26: Get the list of distinct models and ensembles using unique
Step27: Now we loop over the models and the ensembles, for each model-ensemble combination we call the function if we have an instance for both variables. | Python Code:
! module use /g/data3/hh5/public/modules
! module load conda/analysis27
Explanation: Import CMIP5 from the module and start a session
The latest ARCCSSive stable version is available from the conda analysis27 environment
Anyone can load them both from raijin and the remote desktop.
End of explanation
! export CMIP5_DB=sqlite:////g/data1/ua6/unofficial-ESG-replica/tmp/tree/cmip5_raijin_latest.db
Explanation: The database location is saved in the $CMIP5_DB environment variable. This is defined automatically if you have loaded ARCCSSive from conda/analysis27.
End of explanation
from ARCCSSive import CMIP5
db=CMIP5.connect()
Explanation: Import CMIP5 from the module and use the method connect() to open a connection to the database.
End of explanation
db.models()
Explanation: Opening a connection creates a session object (in this case db). A session manages all the communication with the database and contains all the objects which you’ve loaded or associated with it during its lifespan. Every query to the database is run through the session.
There are a number of helper functions for common operations:
End of explanation
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC-ESM-CHEM',ensemble='r1i1p1')
Explanation: models() returns all the models recorded in the database,
experiments(), variables(), mips() produce similar lists for each respective field
Perform a simple search
To perform a search you can use the outputs( ) function.
outputs( ) is a 'shortcut' to perform a session.query on the Instances table.
The following example shows all the input arguments you can use, the order doesn't matter and you can omit any of them.
db.outputs( column-name='value', ... )
will return all the rows for the Instances table in the database.
End of explanation
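The other helper functions mentioned above can be used in exactly the same way as models(); a quick sketch (assuming, as for models(), that they are called on the session object db):
db.experiments()
db.variables()
db.mips()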
results.count()
Explanation: You can check how many instances your search returned by using the query method count()
End of explanation
for o in results:
print(o.model,o.variable,o.ensemble)
print()
print("drstree path is " + str(o.drstree_path()))
for v in o.versions:
print()
print('version', v.version)
print('dataset-id', v.dataset_id)
print('is_latest', v.is_latest, 'checked on', v.checked_on)
print()
print(v.path)
for f in v.files:
print(f.filename, f.tracking_id)
print(f.md5, f.sha256)
Explanation: In this case we defined every possible constraint for the table and hence we get just one instance.
This should always be the case if you use all five attributes, because every instance is fully defined by these and each instance is unique.
We can loop through the instances returned by the search and access their attributes and their children ( i.e. related versions and files) attributes.
End of explanation
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC-ESM-CHEM',ensemble='r1i1p1')
type(results)
Explanation: Navigate through search results
Let's have a better look at results
End of explanation
type(results[0])
Explanation: results is a query object but as we saw before we can loop through it as we do with a list.
In this particular case we have only one instance returned in results, but we still need to use an index to access it.
End of explanation
results[0].versions
Explanation: A useful attribute of an instance is versions, this is a list of all the versions associated to that particular instance.
From a database point of view these are all the rows in the Versions table which are related to that particular instance.
End of explanation
for o in results:
for v in o.versions:
print()
print(v.version)
print()
print(v.path)
Explanation: We have two versions available for this instance, we can loop through them and retrieve their attributes:
End of explanation
results[0].latest()[0].version
Explanation: If we want to get only the latest version, we can use the latest( ) method of the Instance class.
End of explanation
results[0].filenames()
results[0].drstree_path()
% ls -l /g/data1/ua6/DRSv2/CMIP5/CCSM4/rcp45/day/atmos/r1i1p1/tas/latest
Explanation: As you might have noticed latest( ) returns a list of Version objects rather than only one.
This is because there might be different copies of the same version, downloaded from different servers.
Currently the database lists all of them so that if you used one rather than the other in the past you can still find it.
There are plans though to keep just one copy per version to facilitate the collection management and save storage resources.
Other methods available for the Instances table (objects) are:
filenames( )
drstree_path( )
End of explanation
from ARCCSSive.CMIP5.Model import Instance, Version, VersionFile
#print(type(Instance))
Explanation: !!Warning!!
In most cases you can use the drstree_path() method directly to get to the files, but it can be useful to find all the available versions.
For example, you might want to make sure that a new version hasn't been added recently, since DRSv2 is updated only once a week.
Or if you find that the version linked by the DRSv2 is incomplete, there might be another copy of the same version.
We hope eventually to be able to have just one copy for each version and all of them clearly defined.
Filter search results
We can refine our results by using the SQLalchemy filter( ) function.
We will use the attributes ( or columns ) of the database tables as constraints.
So, first we need to import the tables definitions from ARCCSSive.
End of explanation
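As a hedged sketch of the check suggested in the warning above, using only attributes already shown (versions, version, is_latest, checked_on and path), you could list every recorded copy of an instance before deciding which one to use:
o = results[0]
for v in o.versions:
    print(v.version, v.is_latest, v.checked_on)
    print(v.path)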
from ARCCSSive.CMIP5.other_functions import unique
Explanation: We can also import the unique( ) function. This function will give us all the possible values we can use to filter over a particular attribute.
End of explanation
results=db.outputs(variable='tas',experiment='rcp45',mip='day')
results.count()
Explanation: Let's do a new query
End of explanation
ensembles=unique(results,'ensemble')
print(ensembles)
Explanation: We would like to filter the results by ensemble, so we will use unique( ) to get all the possible ensemble values.
End of explanation
r6i1p1_ens=results.filter(Instance.ensemble == 'r6i1p1')
print( r6i1p1_ens.count() )
unique(r6i1p1_ens,'ensemble')
Explanation: unique( results, 'attribute' ) takes two inputs:
* results is a query object on the Instances table, for example what is returned by the db.outputs( ) function
* 'attribute' is a string defining a particular attribute or column of the Instances table, for example 'model'
unique( ) lists all the distinct values returned by the query for that particular attribute.
Now that we know all the ensembles values, let's choose one to filter our results.
End of explanation
r6i1_ens=results.filter(Instance.ensemble.like('r6i1p%'))
print( r6i1_ens.count() )
unique(r6i1_ens,'ensemble')
Explanation: We used the == equals operator to select all the r6i1p1 ensembles.
If we wanted all the "r6i1p#" ensembles regardless of their physics (p) value we could have used the like operator.
End of explanation
results=db.outputs(ensemble='r1i1p1',experiment='rcp45',mip='day')\
.filter(Instance.variable.in_(['tasmin','tasmax']))
results.count()
Explanation: If we want to search two variables at the same time we can leave the variable constraints out of the query inputs,
and then use filter with the in_ operator to select them.
End of explanation
results=db.outputs(ensemble='r1i1p1',experiment='rcp45',mip='day')\
.filter(Instance.variable.in_(['tasmin','tasmax']))\
.filter(Instance.model.like('%ESM%'))
results.count()
Explanation: As you can see filter can follow directly the query, i.e. the outputs( ) function.
In fact, you can refine a query with as many successive filters as you want.
End of explanation
import numpy as np
from netCDF4 import MFDataset
Explanation: Using the search results to open the files
Once we have found the instances and versions we want to use we can use their path to find the files and work with them.
First we load numpy and the netcdf module.
End of explanation
def var_max(var,path):
''' calculate max value for variable '''
# MFDataset will open all netcdf files in path as one aggregated file
print(path+"/*.nc")
# open the file
nc=MFDataset(path+"/*.nc",'r')
# read the variable from file into a numpy array
data = nc.variables[var][:]
# close the file
nc.close()
# return the maximum
return np.max(data)
Explanation: All you need to open a file is the location, this is stored in the Versions table in the database as path.
Alternatively you can use the drstree path, that is returned by the Instance drstree_path( ) method.
Let's define a simple function that reads a variable from a file and calculates its maximum value.
We will use MFDataset( ) from the netCDF4 module to open all the netcdf files in the input path as one aggregated file.
End of explanation
results=db.outputs(ensemble='r1i1p1',experiment='rcp45',mip='day').filter(Instance.model.like('MIROC%'))\
.filter(Instance.variable.in_(['tas','pr']))
print(results.count())
for o in results[:2]:
var = o.variable
for v in o.versions:
path=str(v.path)
varmax=var_max(var,path)
print()
print('Maximum value for variable %s, version %s is %d' % (var, v.version, varmax))
Explanation: Now we perform a search, loop through the results and pass the Version path attribute to the var_max( ) function
End of explanation
def vars_difference(var1,path1,var2,path2):
''' calculate difference between the mean of two variables '''
# open the files and read both variables
nc1=MFDataset(path1+"/*.nc",'r')
data1 = nc1.variables[var1][:]
nc1.close()
nc2=MFDataset(path2+"/*.nc",'r')
data2 = nc2.variables[var2][:]
nc2.close()
# return the difference between the two means
return np.mean(data2) - np.mean(data1)
Explanation: NB if you pass the v.path value directly you get an error because the database returns a unicode string, so you need to use the str( ) function to convert it to a normal string.
How to integrate ARCCSSive in your python script
In the previous example we simply looped through the results returned by the search as they were and passed them to a function that opened the files.
But what if we want to do something more complex?
Let's say that we want to pass two variables to a function and do it for every model/ensemble that has both of them for a fixed experiment and mip
Mostly users would somehow loop over the drstree path, doing something like:
cd /g/data1/ua6/DRSv2/CMIP5
list all models and save in model_list
for model in model_list:
list all available ensembles and save in ensemble_list
for ensemble in ensemble_list:
call_function(var1_path, var2_path)
Using ARCCSSive we can do the same using the unique( ) function to return the list of all available models/ensembles (a runnable sketch of the manual loop above follows this explanation, for comparison).
Let's start from defining a simple function that calculates the difference between the values of two variables.
End of explanation
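For comparison, here is a rough, runnable sketch of the manual drstree loop described above. It assumes the DRSv2 layout shown earlier (CMIP5/<model>/<experiment>/<frequency>/<realm>/<ensemble>/<variable>/latest); the experiment, frequency and realm values are illustrative only.
import os
drstree = '/g/data1/ua6/DRSv2/CMIP5'
for model in os.listdir(drstree):
    ens_dir = os.path.join(drstree, model, 'rcp45', 'day', 'atmos')
    if not os.path.isdir(ens_dir):
        continue
    for ensemble in os.listdir(ens_dir):
        tasmin_path = os.path.join(ens_dir, ensemble, 'tasmin', 'latest')
        tasmax_path = os.path.join(ens_dir, ensemble, 'tasmax', 'latest')
        if os.path.isdir(tasmin_path) and os.path.isdir(tasmax_path):
            print(vars_difference('tasmin', tasmin_path, 'tasmax', tasmax_path))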
results=db.outputs(ensemble='r1i1p1',experiment='rcp45',mip='Amon').filter(Instance.model.like('MIROC%'))\
.filter(Instance.variable.in_(['tasmin','tasmax']))
results.count()
Explanation: Now let's do another search and get tasmin and tasmax
End of explanation
models=unique(results,'model')
ensembles=unique(results,'ensemble')
Explanation: Get the list of distinct models and ensembles using unique
End of explanation
for mod in models:
for ens in ensembles:
# we filter the results twice, using the model and ensemble values plus one variable at a time
tasmin_inst=results.filter(Instance.model==mod, Instance.ensemble==ens, Instance.variable=='tasmin').first()
tasmax_inst=results.filter(Instance.model==mod, Instance.ensemble==ens, Instance.variable=='tasmax').first()
# we check that both filters returned something and call the function if they did
if tasmax_inst and tasmin_inst:
tasmin_path=tasmin_inst.latest()[0].path
tasmax_path=tasmax_inst.latest()[0].path
diff=vars_difference('tasmin',str(tasmin_path),'tasmax',str(tasmax_path))
print('Difference for model %s and ensemble %s is %d' % (mod, ens, diff))
Explanation: Now we loop over the models and the ensembles, for each model-ensemble combination we call the function if we have an instance for both variables.
End of explanation |
2,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KNN (K-Nearest-Neighbors)
KNN is a simple concept
Step1: Now, we'll group everything by movie ID, and compute the total number of ratings (each movie's popularity) and the average rating for every movie
Step2: The raw number of ratings isn't very useful for computing distances between movies, so we'll create a new DataFrame that contains the normalized number of ratings. So, a value of 0 means nobody rated it, and a value of 1 will mean it's the most popular movie there is.
Step3: Now, let's get the genre information from the u.item file. The way this works is there are 19 fields, each corresponding to a specific genre - a value of '0' means it is not in that genre, and '1' means it is in that genre. A movie may have more than one genre associated with it.
While we're at it, we'll put together everything into one big Python dictionary called movieDict. Each entry will contain the movie name, list of genre values, the normalized popularity score, and the average rating for each movie
Step4: For example, here's the record we end up with for movie ID 1, "Toy Story"
Step5: Now let's define a function that computes the "distance" between two movies based on how similar their genres are, and how similar their popularity is. Just to make sure it works, we'll compute the distance between movie ID's 2 and 4
Step6: Remember the higher the distance, the less similar the movies are. Let's check what movies 2 and 4 actually are - and confirm they're not really all that similar
Step7: Now, we just need a little code to compute the distance between some given test movie (Toy Story, in this example) and all of the movies in our data set. We then sort those by distance, and print out the K nearest neighbors
Step8: While we were at it, we computed the average rating of the 10 nearest neighbors to Toy Story
Step9: How does this compare to Toy Story's actual average rating? | Python Code:
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
Explanation: KNN (K-Nearest-Neighbors)
KNN is a simple concept: define some distance metric between the items in your dataset, and find the K closest items. You can then use those items to predict some property of a test item, by having them somehow "vote" on it.
As an example, let's look at the MovieLens data. We'll try to guess the rating of a movie by looking at the 10 movies that are closest to it in terms of genres and popularity.
To start, we'll load up every rating in the data set into a Pandas DataFrame:
End of explanation
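Before working with the movie data, here is a minimal, generic sketch of the "vote" idea described above; the distance function and the labelled items are placeholders, not part of the MovieLens example:
from collections import Counter

def knn_predict(test_item, labelled_items, distance, K):
    # labelled_items: list of (item, label) pairs; distance: callable returning a number
    nearest = sorted(labelled_items, key=lambda pair: distance(test_item, pair[0]))[:K]
    votes = Counter(label for item, label in nearest)
    return votes.most_common(1)[0][0]  # label with the most votes among the K neighbors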
import numpy as np
movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
movieProperties.head()
Explanation: Now, we'll group everything by movie ID, and compute the total number of ratings (each movie's popularity) and the average rating for every movie:
End of explanation
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()
Explanation: The raw number of ratings isn't very useful for computing distances between movies, so we'll create a new DataFrame that contains the normalized number of ratings. So, a value of 0 means nobody rated it, and a value of 1 will mean it's the most popular movie there is.
End of explanation
movieDict = {}
with open(r'e:/sundog-consult/udemy/datascience/ml-100k/u.item') as f:
temp = ''
for line in f:
fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
genres = map(int, genres)
movieDict[movieID] = (name, genres, movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
Explanation: Now, let's get the genre information from the u.item file. The way this works is there are 19 fields, each corresponding to a specific genre - a value of '0' means it is not in that genre, and '1' means it is in that genre. A movie may have more than one genre associated with it.
While we're at it, we'll put together everything into one big Python dictionary called movieDict. Each entry will contain the movie name, list of genre values, the normalized popularity score, and the average rating for each movie:
End of explanation
movieDict[1]
Explanation: For example, here's the record we end up with for movie ID 1, "Toy Story":
End of explanation
from scipy import spatial
def ComputeDistance(a, b):
genresA = a[1]
genresB = b[1]
genreDistance = spatial.distance.cosine(genresA, genresB)
popularityA = a[2]
popularityB = b[2]
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
ComputeDistance(movieDict[2], movieDict[4])
Explanation: Now let's define a function that computes the "distance" between two movies based on how similar their genres are, and how similar their popularity is. Just to make sure it works, we'll compute the distance between movie ID's 2 and 4:
End of explanation
print movieDict[2]
print movieDict[4]
Explanation: Remember the higher the distance, the less similar the movies are. Let's check what movies 2 and 4 actually are - and confirm they're not really all that similar:
End of explanation
import operator
def getNeighbors(movieID, K):
distances = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distances.append((movie, dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distances[x][0])
return neighbors
K = 10
avgRating = 0
neighbors = getNeighbors(1, K)
for neighbor in neighbors:
avgRating += movieDict[neighbor][3]
print movieDict[neighbor][0] + " " + str(movieDict[neighbor][3])
avgRating /= float(K)
Explanation: Now, we just need a little code to compute the distance between some given test movie (Toy Story, in this example) and all of the movies in our data set. We then sort those by distance, and print out the K nearest neighbors:
End of explanation
avgRating
Explanation: While we were at it, we computed the average rating of the 10 nearest neighbors to Toy Story:
End of explanation
movieDict[1]
Explanation: How does this compare to Toy Story's actual average rating?
End of explanation |
2,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chem 30324, Spring 2020, Homework 8
Due April 3, 2020
Chemical bonding
The electron wavefunctions (molecular orbitals) in molecules can be thought of as coming from combinations of atomic orbitals on the constituent atoms. One of the factors that determines whether two atomic orbitals form a bond is their ability to overlap. Consider two atoms, A and B, aligned on the z axis and separated by a distance $R$.
1. The overlap between two 1s orbitals on A and B can be shown to be
Step1: 2. The overlap functions for other pairs of orbitals are more complicated, but the general features are easily inferred. Neatly sketch the orbital overlap between a 1s orbital on A and a 2p$_z$ orbital on B as a function of $R$. Carefully indicate the limiting values as $R \rightarrow 0$ and $R \rightarrow \infty$.
3. Choose some other pair of atomic orbitals on A and B and sketch out their overlap as a function of $R$. Carefully indicate the limiting values as $ R \rightarrow 0$ and $ R\rightarrow \infty$.
4. What property besides overlap determines whether two atomic orbitals will form a bond?
The similarity of the energies of the two atomic orbitals, ie the value of $\beta = \langle \phi_1 | \hat{f} | \phi_2 \rangle $
5. A chemical bond is a compromise between the electrons trying to get close to both nuclei and the nuclei trying to stay apart. The function below captures this compromise as a function of internuclear distance, $R$. Plot the function for different values of the parameters $A$, $\alpha$, and $R_0$. Provide a physical interpretation of each of the parameters. $$V(R) = A \left ( 1 - e^{(-\alpha(R - R_0))}\right)^2$$
Step2: 6. For each pair, draw a Lewis dot structure. Indicate which bond is stronger in the pair, and give a very brief rationalization
Step3: 8. Use the quadratic fit from Question 7 to determine the harmonic vibrational frequency of your molecule, in cm$^{-1}$. Recall that the force constant is the second derivative of the energy at the minimum, and that the frequency (in wavenumbers) is related to the force constant according to $$\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}$$
Step4: 9. Use your results to determine the zero-point-corrected bond energy of your molecule. How does this model compare with the experimental value?
Step5: Computational chemistry, part deux
Diatomics are a little mundane. These same methods can be used to compute the properties of much more complicated things. As example, the OQMD database http
Step6: 11. Compute the corresponding vibrational spectra. Could you distinguish these molecules by their spectra?
Log into the Webmo server https | Python Code:
import numpy as np
import matplotlib.pyplot as plt
r = np.linspace(0,12,100) # r=R/a0
P = (1+r+1/3*r**2)*np.exp(-r)
plt.plot(r,P)
plt.xlim(0)
plt.ylim(0)
plt.xlabel('Internuclear Distance $R/a0$')
plt.ylabel('Overlap S')
plt.title('The Overlap Between Two 1s Orbitals')
plt.show()
Explanation: Chem 30324, Spring 2020, Homework 8
Due April 3, 2020
Chemical bonding
The electron wavefunctions (molecular orbitals) in molecules can be thought of as coming from combinations of atomic orbitals on the constituent atoms. One of the factors that determines whether two atomic orbitals form a bond is their ability to overlap. Consider two atoms, A and B, aligned on the z axis and separated by a distance $R$.
1. The overlap between two 1s orbitals on A and B can be shown to be: $$S = \left\{1+\frac{R}{a_0}+\frac{1}{3}\left(\frac{R}{a_0}\right)^2\right\}e^{-R/a_0}$$ Plot out the overlap as a function of the internuclear distance $R$. Qualitatively explain why it has the shape it has.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
A = 1
alpha = 1
R0 =[0,.25,.5,.75,1,2]
R = np.linspace(0,10,100)
for i in R0:
V = A*(1-np.exp(-alpha*(R-i)))**2
plt.plot(R,V, label = i)
plt.ylim(0,2)
plt.xlim(0,8)
plt.legend()
plt.xlabel('Internuclear Distance (R)')
plt.ylabel('Wavefunction (V(R))')
plt.title('Variation in R0')
plt.show()
print('R0 is the equilibrium bond distance.')
import numpy as np
import matplotlib.pyplot as plt
A = 1
alpha = [0,.25,.5,.75,1,2]
R0 = 1
R = np.linspace(0,10,100)
for i in alpha:
V = A*(1-np.exp(-i*(R-R0)))**2
plt.plot(R,V, label = i)
plt.ylim(0,2)
plt.xlim(0,8)
plt.legend()
plt.xlabel('Internuclear Distance (R)')
plt.ylabel('Wavefunction (V(R))')
plt.title('Variation in alpha')
plt.show()
print('Alpha is the stiffness (spring constant) of the bond between the two atoms.')
import numpy as np
import matplotlib.pyplot as plt
A = [0,.25,.5,.75,1,2]
alpha = 1
R0 = 1
R = np.linspace(0,10,100)
for i in A:
V = i*(1-np.exp(-alpha*(R-R0)))**2
plt.plot(R,V, label = i)
plt.ylim(0,2)
plt.xlim(0,8)
plt.legend()
plt.xlabel('Internuclear Distance (R)')
plt.ylabel('Wavefunction (V(R))')
plt.title('Variation in A')
plt.show()
print('A is the difference in energy between a molecule and its atoms---the bond dissociation energy.')
Explanation: 2. The overlap functions for other pairs of orbitals are more complicated, but the general features are easily inferred. Neatly sketch the orbital overlap between a 1s orbital on A and a 2p$_z$ orbital on B as a function of $R$. Carefully indicate the limiting values as $R \rightarrow 0$ and $R \rightarrow \infty$.
3. Choose some other pair of atomic orbitals on A and B and sketch out their overlap as a function of $R$. Carefully indicate the limiting values as $ R \rightarrow 0$ and $ R\rightarrow \infty$.
4. What property besides overlap determines whether two atomic orbitals will form a bond?
The similarity of the energies of the two atomic orbitals, ie the value of $\beta = \langle \phi_1 | \hat{f} | \phi_2 \rangle $
5. A chemical bond is a compromise between the electrons trying to get close to both nuclei and the nuclei trying to stay apart. The function below captures this compromise as a function of internuclear distance, $R$. Plot the function for different values of the parameters $A$, $\alpha$, and $R_0$. Provide a physical interpretation of each of the parameters. $$V(R) = A \left ( 1 - e^{(-\alpha(R - R_0))}\right)^2$$
End of explanation
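As a purely qualitative illustration for question 2 (this placeholder curve is only chosen to vanish at $R = 0$ and as $R \rightarrow \infty$ with a maximum in between; it is not the analytic 1s-2p$_z$ overlap integral):
import numpy as np
import matplotlib.pyplot as plt
r = np.linspace(0, 12, 100)  # r = R/a0
S_qualitative = r*np.exp(-r/2)  # placeholder shape only
plt.plot(r, S_qualitative/S_qualitative.max())
plt.xlabel('Internuclear Distance $R/a_0$')
plt.ylabel('Qualitative overlap (arbitrary units)')
plt.title('Qualitative shape of the 1s-2p$_z$ overlap')
plt.show()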
# Carbon Monoxide
# From https://cccbdb.nist.gov/bondlengthmodel2.asp?method=12&basis=5, L = 1.128 Angstrom
import numpy as np
import matplotlib.pyplot as plt
E_C = -37.79271 # Ha, energy of single C atom
E_O = -74.98784 # Ha, energy of single O atom
length = [1.00, 1.05, 1.10, 1.15, 1.2, 1.25] # Angstrom
E_CO = [-113.249199,-113.287858,-113.305895,-113.309135,-113.301902,-113.287408] # Ha, energy of CO
E_bond = [] # energy of CO bond
for i in E_CO:
E_bond.append((i-E_C-E_O)*27.212) # eV, Energy[CO - C - O] = Energy[bond]
fit = np.polyfit(length, E_bond, 2) # quadratic fit
print("Fitted result: E = %fx^2 + (%f)x + %f"%(fit[0],fit[1],fit[2]))
# Find E_min
x = np.linspace(0.9, 1.4, 100)
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
E_min_CO = min(z) # Find the minimum in energy array
print('E_min_CO = %feV.'%(E_min_CO))
# Plot E vs length
plt.plot(length, E_bond, '.', label='Webmo Data')
plt.plot(x, z, '--',label='Quadratic Fit')
plt.xlabel('Bond length (Angstrom)')
plt.ylabel('Energy (eV)')
plt.title('CO Molecular Energy vs. Bond Length')
plt.legend()
plt.show()
# Find equilibrium bond length
import sympy as sp
x = sp.symbols('x')
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
l = sp.solve(sp.diff(z,x),x)
print('L_equilibrium = %f A > 1.128 A (in literature).'%(l[0])) # equilibrium bond length
#Boron Nitride
#From https://cccbdb.nist.gov/bondlengthmodel2.asp?method=12&basis=5, L= 1.325 Angstrom
import numpy as np
import matplotlib.pyplot as plt
E_B = -24.61703 # Ha, energy of single B atom
E_N = -54.51279 # Ha, energy of single N atom
length = [1.15, 1.2, 1.25, 1.3, 1.35, 1.4] # Angstrom
E_BN = [-79.359357,-79.376368,-79.383355,-79.382896,-79.377003,-79.367236] # Ha, energy of BN
E_bond = [] # energy of BN bond
for i in E_BN:
E_bond.append((i-E_B-E_N)*27.212)
fit = np.polyfit(length, E_bond, 2) # quadratic fit
print("Fitted result: E = %fx^2 + (%f)x + %f"%(fit[0],fit[1],fit[2]))
# Find E_min
x = np.linspace(1.1, 1.5, 100)
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
E_min_BN = min(z) # Find the minimum in energy array
print('E_min_BN = %feV.'%(E_min_BN))
# Plot E vs length
plt.plot(length, E_bond, '.', label='Webmo Data')
plt.plot(x, z, '--',label='Quadratic Fit')
plt.xlabel('Bond length (Angstrom)')
plt.ylabel('Energy (eV)')
plt.title('BN Molecular Energy vs. Bond Length')
plt.legend()
plt.show()
# Find equilibrium bond length
import sympy as sp
x = sp.symbols('x')
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
l = sp.solve(sp.diff(z,x),x)
print('L_equilibrium = %f A < 1.325 A (in literature).'%(l[0])) # equilibrium bond length
# Beryllium Oxide
#From https://cccbdb.nist.gov/bondlengthmodel2.asp?method=12&basis=5, L = 1.331 Angstrom
import numpy as np
import matplotlib.pyplot as plt
E_Be = -14.64102 # Ha
E_O = -74.98784 # Ha
length = [1.2, 1.25, 1.3, 1.35, 1.4, 1.45] # Angstrom
E_BeO = [-89.880569,-89.893740,-89.899599,-89.899934,-89.896149,-89.889335] # Ha, energy of BeO
E_bond = [] # energy of BeO bond
for i in E_BeO:
E_bond.append((i-E_Be-E_O)*27.212)
fit = np.polyfit(length, E_bond, 2) # quadratic fit
print("Fitted result: E = %fx^2 + (%f)x + %f"%(fit[0],fit[1],fit[2]))
# Find E_min
x = np.linspace(1.1, 1.6, 100)
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
E_min_BeO = min(z) # Find the minimum in energy array
print('E_min_BeO = %feV.'%(E_min_BeO))
# Plot E vs length
plt.plot(length, E_bond, '.', label='Webmo Data')
plt.plot(x, z, '--',label='Quadratic Fit')
plt.xlabel('Bond length (Angstrom)')
plt.ylabel('Energy (eV)')
plt.title('BeO Molecular Energy vs. Bond Length')
plt.legend()
plt.show()
# Find equilibrium bond length
import sympy as sp
x = sp.symbols('x')
z = fit[0]*x**2 + fit[1]*x + fit[2] # from result above
l = sp.solve(sp.diff(z,x),x)
print('L_equilibrium = %f A > 1.331 A (in literature).'%(l[0])) # equilibrium bond length
Explanation: 6. For each pair, draw a Lewis dot structure. Indicate which bond is stronger in the pair, and give a very brief rationalization:
(a) H$_2$ vs LiH
(b) N$_2$ vs H$_2$
(c) N$_2$ vs CO
(d) H$_2$ vs He$_2$
a) $$H:H$$ $$Li:H$$
$H_2$ has a stronger bond because the two hydrogens have similar energies.
b) $$:N:::N:$$ $$H:H$$
$N_2$ has a stronger bond since there are 3 bonds instead of just one.
c) $$:N:::N:$$ $$:C:::O:$$
An argument for both structures can be made. There is not an agreed upon answer in the literature.
d) $$H:H$$ $$ :He\quad He:$$
$H_2$ has a stronger bond since $He_2$ doesn't have a bond.
Computational chemistry.
Today properties of a molecule are more often than not calculated rather than inferred. Quantitative molecular quantum mechanical calculations require specialized numerical solvers like Orca. Following are instructions for using Orca with the Webmo graphical interface.
Now, let’s set up your calculation (you may do this with a partner or partners if you choose):
Log into the Webmo server https://www.webmo.net/demoserver/cgi-bin/webmo/login.cgi using "guest" as your username and password.
Select New Job-Creat New Job.
Use the available tools to sketch a molecule.
Use the right arrow at the bottom to proceed to the Computational Engines.
Select Orca
Select "Molecular Energy," “B3LYP” functional and the default def2-SVP basis set.
Select the right arrow to run the calculation.
From the job manager window choose the completed calculation to view the results.
The molecule you are to study depends on your last name. Choose according to the list:
+ A-G: CO
+ H-R: BN
+ S-Z: BeO
For your convenience, here are the total energies (in Hartree, 27.212 eV/Hartree) of the constituent atoms, calculated using the B3LYP DFT treatment of $v_{ee}$ and the def2-SVP basis set:
|Atom|Energy|Atom|Energy|
|-|-|-|-|
|B|–24.61703|N| -54.51279|
|Be|-14.64102|O|-74.98784|
|C|-37.79271|F|-99.60655|
7. Construct a potential energy surface for your molecule. Using covalent radii, guess an approximate equilbrium bond length, and use the Webmo editor to draw the molecule with that length. Specify the “Molecular Energy” option to Orga and the def2-SVP basis set. Calculate and plot out total molecular energy vs. bond distance in increments of 0.05 Å about your guessed minimum, including enough points to encompass the actual minimum. (You will find it convenient to subtract off the individual atom energies from the molecular total energy and to convert to more convenient units, like eV or kJ/mol.) By fitting the few points nearest the minimum, determine the equilibrium bond length. How does your result compare to literature?
End of explanation
print('CO Molecule:')
J = 1.6022e-19 # J, 1 eV = 1.6022e-19 J
L = 1e-10 # m, 1 angstrom = 1e-10 m
# k [=] Energy/Length^2
k_CO = 2*71.30418671*J/L**2 # J/m**2
c = 2.99792e8 # m/s
m_C = 12.0107*1.6605e-27 # kg
m_O = 15.9994*1.6605e-27 # kg
mu_CO = m_C*m_O/(m_C+m_O) # kg, reduced mass
nu_CO = 1/(2*np.pi*c)*np.sqrt(k_CO/mu_CO)/100 # cm^-1, wavenumber
print('The harmonic vibrational frequency is %f cm^-1.'%(nu_CO))
print('BN Molecule:')
J = 1.6022e-19 # J, 1 eV = 1.6022e-19 J
L = 1e-10 # m, 1 angstrom = 1e-10 m
# k [=] Energy/Length^2
k_BN = 2*36.0384*J/L**2 # J/m**2
c = 2.99792e8 # m/s
m_B = 10.811*1.6605e-27 # kg
m_N = 14.0067*1.6605e-27 # kg
mu_BN = m_B*m_N/(m_B+m_N) # kg, reduced mass
nu_BN = 1/(2*np.pi*c)*np.sqrt(k_BN/mu_BN)/100 # cm^-1, wavenumber
print('The harmonic vibrational frequency is %f cm^-1.'%(nu_BN))
print('BeO Molecule:')
J = 1.6022e-19 # J, 1 eV = 1.6022e-19 J
L = 1e-10 # m, 1 angstrom = 1e-10 m
# k [=] Energy/Length^2
k_BeO = 2*26.920637*J/L**2 # J/m**2
c = 2.99792e8 # m/s
m_Be = 9.01218*1.6605e-27 # kg
m_O = 15.9994*1.6605e-27 # kg
mu_BeO = m_Be*m_O/(m_Be+m_O) # kg, reduced mass
nu_BeO = 1/(2*np.pi*c)*np.sqrt(k_BeO/mu_BeO)/100 # cm^-1, wavenumber
print('The harmonic vibrational frequency is %f cm^-1.'%(nu_BeO))
Explanation: 8. Use the quadratic fit from Question 7 to determine the harmonic vibrational frequency of your molecule, in cm$^{-1}$. Recall that the force constant is the second derivative of the energy at the minimum, and that the frequency (in wavenumbers) is related to the force constant according to $$\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}$$
End of explanation
# Get experimental vibrational zero-point energy from NIST database: https://cccbdb.nist.gov/exp1x.asp
nu_CO_exp = 1084.9 # cm^-1
nu_BN_exp = 760.2 # cm^-1
nu_BeO_exp = 728.5 # cm^-1
print('CO Molecule:')
# Note: E_ZPC = E_min + ZPE_harmonic_oscillator
h = 6.62607e-34
NA = 6.02214e23
J = 1.6022e-19 # eV to J
E_min_CO = (-16.300903*J)*NA/1000 # converted from eV to kJ/mol from problem 8
# Calculations
E0_CO = (0.5*h*nu_CO*100*c)*NA/1000 # kJ/mol, ZPE harmonic oscillator
EB_CO = E_min_CO + E0_CO # kJ/mol, ZPC bond energy
# Experiments
E0_CO_exp = (0.5*h*nu_CO_exp*100*c)*NA/1000
EB_CO_exp = E_min_CO + E0_CO_exp
print('|E_ZPC| = %f kJ/mol < %f kJ/mol.'%(-EB_CO,-EB_CO_exp))
print('BN Molecule:')
# Note: E_ZPC = E_min + ZPE_harmonic_oscillator
h = 6.62607e-34
NA = 6.02214e23
J = 1.6022e-19 # eV to J
E_min_BN = (-4.633537*J)*NA/1000 # converted from eV to kJ/mol from problem 8
# Calculations
E0_BN = (0.5*h*nu_BN*100*c)*NA/1000 # kJ/mol, ZPE harmonic oscillator
EB_BN = E_min_BN + E0_BN # kJ/mol, ZPC bond energy
# Experiments
E0_BN_exp = (0.5*h*nu_BN_exp*100*c)*NA/1000
EB_BN_exp = E_min_BN + E0_BN_exp
print('|E_ZPC| = %f kJ/mol < %f kJ/mol.'%(-EB_BN,-EB_BN_exp))
print('BeO Molecule:')
# Note: E_ZPC = E_min + ZPE_harmonic_oscillator
h = 6.62607e-34
NA = 6.02214e23
J = 1.6022e-19 # eV to J
E_min_BeO = (-5.850784*J)*NA/1000 # converted from eV to kJ/mol from problem 8
# Calculations
E0_BeO = (0.5*h*nu_BeO*100*c)*NA/1000 # kJ/mol, ZPE harmonic oscillator
EB_BeO = E_min_BeO + E0_BeO # kJ/mol, ZPC bond energy
# Experiments
E0_BeO_exp = (0.5*h*nu_BeO_exp*100*c)*NA/1000
EB_BeO_exp = E_min_BeO + E0_BeO_exp
print('|E_ZPC| = %f kJ/mol < %f kJ/mol.'%(-EB_BeO,-EB_BeO_exp))
Explanation: 9. Use your results to determine the zero-point-corrected bond energy of your molecule. How does this model compare with the experimental value?
End of explanation
C2H6 = 1.531 # Angstrom
C2H4 = 1.331 # Angstrom
C2H2 = 1.205 # Angstrom
import matplotlib.pyplot as plt
plt.scatter([0,1,2],[C2H2,C2H4,C2H6])
plt.xlabel('Molecules')
plt.ylabel('Bond length (A)')
plt.xticks(np.arange(3), ('C2H2','C2H4','C2H6'))
plt.show()
Explanation: Computational chemistry, part deux
Diatomics are a little mundane. These same methods can be used to compute the properties of much more complicated things. As example, the OQMD database http://oqmd.org/ contains results for many solids. We don't have time to get this complicated in class, but at least you can compute properties of some molecules.
10. Working with some of your classmates, compute the equilibrium structures of C$_2$H$_6$, C$_2$H$_4$, and C$_2$H$_2$. Compare their equilibrium C-C bond lengths. Do they vary in the way you expect?
Log into the Webmo server https://www.webmo.net/demoserver/cgi-bin/webmo/login.cgi using "guest" as your username and password.
Select New Job-Creat New Job.
Use the available tools to sketch a molecule. Make sure the bond distances and angles are in a plausible range.
Use the right arrow at the bottom to proceed to the Computational Engines.
Select Orca
Select "Geometry optimization," “B3LYP” functional and the default def2-SVP basis set.
Select the right arrow to run the calculation.
From the job manager window choose the completed calculation to view the results.
End of explanation
E_H2 = -1.16646206791 # Ha
E_C2H2 = -77.3256461775 # Ha, acetylene
E_C2H4 = -78.5874580928 # Ha, ethylene
E_C2H6 = -79.8304174812 # Ha, ethane
E_rxn1 = (E_C2H4 - E_C2H2 - E_H2)*2625.50 # kJ/mol, H2 + C2H2 -> C2H4
E_rxn2 = (E_C2H6 - E_C2H4 - E_H2)*2625.50 # kJ/mol, H2 + C2H4 -> C2H6
print("E_rnx1 = %f kJ/mol, E_rnx2 = %f kJ/mol"%(E_rxn1, E_rxn2))
Explanation: 11. Compute the corresponding vibrational spectra. Could you distinguish these molecules by their spectra?
Log into the Webmo server https://www.webmo.net/demoserver/cgi-bin/webmo/login.cgi using "guest" as your username and password.
Select the job with the optimized geometry and open it.
Use the right arrow at the bottom to proceed to the Computational Engines.
Select Orca
Select "Vibrational frequency," “B3LYP” functional and the default def2-SVP basis set.
Select the right arrow to run the calculation.
From the job manager window choose the completed calculation to view the results.
C2H2
C2H4
C2H6
The vibrational spectra are clearly very different, so these molecules can be distinguished by IR.
12. Compute the structure and energy of H$_2$. Use it to compare the energies to hydrogenate acetylene to ethylene and ethylene to ethane. Which is easier to hydrogenate? Can you see why selective hydrogenation of acetylene to ethylene is difficult to do?
End of explanation |
2,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Order of magnitude faster training for image classification
Step1: Preprocess
Preprocessing uses a Dataflow pipeline to convert the image format, resize images, and run the converted image through a pre-trained model to get the features or embeddings. You can also do this step using alternate technologies like Spark or plain Python code if you like.
The %%ml preprocess command simplifies this task. Check out the parameters shown using --usage flag first and then run the command.
If you hit "PERMISSION_DENIED" when running the following cell, you need to enable Cloud DataFlow API (url is shown in error message).
The DataFlow job usually takes about 20 min to complete.
Step2: Train
Note that the command remains the same as that in the "local" version.
Step3: Check your job status by running (replace the job id from the one shown above)
Step4: Predict
Deploy the model and run online predictions. The deployment takes about 2 ~ 5 minutes.
Step5: Online prediction is currently in alpha, it helps to ensure a warm start if the first call fails.
Step6: Batch Predict
Step7: Clean up | Python Code:
import mltoolbox.image.classification as model
from google.datalab.ml import *
bucket = 'gs://' + datalab_project_id() + '-lab'
preprocess_dir = bucket + '/flowerpreprocessedcloud'
model_dir = bucket + '/flowermodelcloud'
staging_dir = bucket + '/staging'
!gsutil mb $bucket
Explanation: Order of magnitude faster training for image classification: Part II
Transfer learning using Inception Package - Cloud Run Experience
This notebook continues the codifies the capabilities discussed in this blog post. In a nutshell, it uses the pre-trained inception model as a starting point and then uses transfer learning to train it further on additional, customer-specific images. For explanation, simple flower images are used. Compared to training from scratch, the time and costs are drastically reduced.
This notebook does preprocessing, training and prediction by calling CloudML API instead of running them in the Datalab container. The purpose of local work is to do some initial prototyping and debugging on small scale data - often by taking a suitable (say 0.1 - 1%) sample of the full data. The same basic steps can then be repeated with much larger datasets in cloud.
Setup
First run the following steps only if you are running Datalab from your local desktop or laptop (not running Datalab from a GCE VM):
Make sure you have a GCP project which is enabled for Machine Learning API and Dataflow API.
Run "%datalab project set --project [project-id]" to set the default project in Datalab.
If you run Datalab from a GCE VM, then make sure the project of the GCE VM is enabled for Machine Learning API and Dataflow API.
End of explanation
train_set = CsvDataSet('gs://cloud-datalab/sampledata/flower/train1000.csv', schema='image_url:STRING,label:STRING')
preprocess_job = model.preprocess_async(train_set, preprocess_dir, cloud={'num_workers': 10})
preprocess_job.wait() # Alternatively, you can query the job status by train_job.state. The wait() call blocks the notebook execution.
Explanation: Preprocess
Preprocessing uses a Dataflow pipeline to convert the image format, resize images, and run the converted image through a pre-trained model to get the features or embeddings. You can also do this step using alternate technologies like Spark or plain Python code if you like.
The %%ml preprocess command simplifies this task. Check out the parameters shown using --usage flag first and then run the command.
If you hit "PERMISSION_DENIED" when running the following cell, you need to enable Cloud DataFlow API (url is shown in error message).
The DataFlow job usually takes about 20 min to complete.
End of explanation
train_job = model.train_async(preprocess_dir, 30, 1000, model_dir, cloud=CloudTrainingConfig('us-central1', 'BASIC'))
train_job.wait() # Alternatively, you can query the job status by train_job.state. The wait() call blocks the notebook execution.
Explanation: Train
Note that the command remains the same as that in the "local" version.
End of explanation
tb_id = TensorBoard.start(model_dir)
Explanation: Check your job status by running (replace the job id from the one shown above):
Job('image_classification_train_170307_002934').describe()
Tensorboard works too with GCS path. Note that the data will show up usually a minute after tensorboard starts with GCS path.
End of explanation
Models().create('flower')
ModelVersions('flower').deploy('beta1', model_dir)
Explanation: Predict
Deploy the model and run online predictions. The deployment takes about 2 ~ 5 minutes.
End of explanation
images = [
'gs://cloud-ml-data/img/flower_photos/daisy/15207766_fc2f1d692c_n.jpg',
'gs://cloud-ml-data/img/flower_photos/tulips/6876631336_54bf150990.jpg'
]
# set resize=True to avoid sending large data in prediction request.
model.predict('flower.beta1', images, resize=True, cloud=True)
Explanation: Online prediction is currently in alpha, it helps to ensure a warm start if the first call fails.
End of explanation
import google.datalab.bigquery as bq
bq.Dataset('flower').create()
eval_set = CsvDataSet('gs://cloud-datalab/sampledata/flower/eval670.csv', schema='image_url:STRING,label:STRING')
batch_predict_job = model.batch_predict_async(eval_set, model_dir, output_bq_table='flower.eval_results_full',
cloud={'temp_location': staging_dir})
batch_predict_job.wait()
%%bq query --name wrong_prediction
SELECT * FROM flower.eval_results_full WHERE target != predicted
wrong_prediction.execute().result()
ConfusionMatrix.from_bigquery('flower.eval_results_full').plot()
%%bq query --name accuracy
SELECT
target,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END) as correct,
COUNT(*) as total,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END)/COUNT(*) as accuracy
FROM
flower.eval_results_full
GROUP BY
target
accuracy.execute().result()
%%bq query --name logloss
SELECT feature, AVG(-logloss) as logloss, count(*) as count FROM
(
SELECT feature, CASE WHEN correct=1 THEN LOG(prob) ELSE LOG(1-prob) END as logloss
FROM
(
SELECT
target as feature,
CASE WHEN target=predicted THEN 1 ELSE 0 END as correct,
target_prob as prob
FROM flower.eval_results_full))
GROUP BY feature
FeatureSliceView().plot(logloss)
Explanation: Batch Predict
End of explanation
ModelVersions('flower').delete('beta1')
Models().delete('flower')
!gsutil -m rm -r {preprocess_dir}
!gsutil -m rm -r {model_dir}
Explanation: Clean up
End of explanation |
2,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building and running a preprocessing pipeline
In this example, an image processing pipeline is created and then executed in a manner that maximizes throughput.
Step1: Initial data loading
SeqTools works with list-like indexable objects, so the first step is to create one that maps to our samples, then this object will be passed to functions that apply the desired transformations.
In this example, we represent our samples with their file names and store them in a list.
Step2: Let's load the full resolution images, the result cannot normally fit into memory, but with SeqTools the evaluation is delayed until the images are actually accessed.
Step3: We can verify the result for one sample, this will trigger its evaluation and return it
Step4: Mapping transformations
As a first preprocessing stage, we can normalize the size
Step5: then apply common preprocessing steps
Step6: That preprocessing seems a bit over the top... let's check where it went wrong
Step7: For each sample, the minimal set of computations was run to produce the requested item.
We find here that equalization is inappropriate and autocontrast is too weak, let's fix this.
Step8: Combining datasets
Then we want to augment the dataset by flipping
Step9: Evaluation
Once satisfied with our preprocessing pipeline, evaluating all values is simply done by iterating over the elements or forcing the conversion to a list
Step10: This above evaluation is a bit slow, probably due to the IO operations when loading the images from the hard drive. Maybe using multiple threads could help keep the CPU busy?
Step11: The CPU time is the same because the computations are the same (plus some threading overhead), but wall time is cut down because image processing continues for some images while others are being loaded.
However, we could spare some IO by not reading the same image twice when generating the augmented version, and by the same token save some shared transformations.
To avoid having one thread taking the regular image and another the flipped one in parallel, which would incur a cache miss for the latter, we propose to simply compute the transformed image and its flipped version in one step
Step12: The output now contains pairs of images, to flatten them into a sequence of images, we can "unbatch" them
Step13: The cache is here to avoid recomputing the pair of images when the second one is accessed, indeed, SeqTools works in a stateless on-demand fashion and recomputes everything by default.
Please note that concatenation would be inappropriate to replace unbatching here. Concatenation checks the length of the sequences to join, which requires in this situation to compute each element from fast_dataset before-hand. Besides, unbatching has slightly faster access times because it assumes that all batches have the same size. | Python Code:
from PIL import Image, ImageOps
import seqtools
! [[ -f owl.jpg ]] || curl -s "https://cdn.pixabay.com/photo/2017/04/07/01/05/owl-2209827_640.jpg" -o owl.jpg
! [[ -f rooster.jpg ]] || curl -s "https://cdn.pixabay.com/photo/2018/08/26/14/05/hahn-3632299_640.jpg" -o rooster.jpg
! [[ -f duck.jpg ]] || curl -s "https://cdn.pixabay.com/photo/2018/09/02/10/03/violet-duck-3648415_640.jpg" -o duck.jpg
! [[ -f bird.jpg ]] || curl -s "https://cdn.pixabay.com/photo/2018/08/21/05/15/tit-3620632_640.jpg" -o bird.jpg
! [[ -f dog.jpg ]] || curl -s "https://cdn.pixabay.com/photo/2018/09/04/18/07/pug-3654360_640.jpg" -o dog.jpg
! [[ -f hedgehog.jpg ]] || curl -s "https://cdn.pixabay.com/photo/2018/09/04/18/52/hedgehog-3654434_640.jpg" -o hedgehog.jpg
Explanation: Building and running a preprocessing pipeline
In this example, an image processing pipeline is created and then executed in a manner that maximizes throughput.
End of explanation
labels = ['owl', 'rooster', 'duck', 'bird', 'dog', 'hedgehog']
# We artificially increase the size of the dataset for the example
labels = [labels[i % len(labels)] for i in range(200)]
image_files = [l + '.jpg' for l in labels]
Explanation: Initial data loading
SeqTools works with list-like indexable objects, so the first step is to create one that maps to our samples, then this object will be passed to functions that apply the desired transformations.
In this example, we represent our samples with their file names and store them in a list.
End of explanation
raw_images = seqtools.smap(Image.open, image_files)
Explanation: Let's load the full resolution images, the result cannot normally fit into memory, but with SeqTools the evaluation is delayed until the images are actually accessed.
End of explanation
raw_images[0]
Explanation: We can verify the result for one sample, this will trigger its evaluation and return it:
End of explanation
def normalize_size(im):
w, h = im.size
left_crop = w // 2 - h // 2
return im.resize((200, 200), Image.BILINEAR, box=(left_crop, 1, h, h))
small_images = seqtools.smap(normalize_size, raw_images)
small_images[1]
Explanation: Mapping transformations
As a first preprocessing stage, we can normalize the size:
End of explanation
contrasted = seqtools.smap(ImageOps.autocontrast, small_images)
equalized = seqtools.smap(ImageOps.equalize, contrasted)
grayscale = seqtools.smap(ImageOps.grayscale, equalized)
grayscale[0]
Explanation: then apply common preprocessing steps:
End of explanation
equalized[0]
contrasted[0]
Explanation: That preprocessing seems a bit over the top... let's check where it went wrong:
End of explanation
grayscale = seqtools.smap(ImageOps.grayscale, small_images)
contrasted = seqtools.smap(lambda im: ImageOps.autocontrast(im, cutoff=3), grayscale)
contrasted[0]
Explanation: For each sample, the minimal set of computations was run to produce the requested item.
We find here that equalization is inappropriate and autocontrast is too weak, let's fix this.
End of explanation
# Generate flipped versions of the images
flipped = seqtools.smap(ImageOps.mirror, contrasted)
# Combine with the original dataset
augmented_dataset = seqtools.concatenate([contrasted, flipped])
augmented_dataset[-1]
Explanation: Combining datasets
Then we want to augment the dataset by flipping:
End of explanation
%time computed_values = list(augmented_dataset);
Explanation: Evaluation
Once satisfied with our preprocessing pipeline, evaluating all values is simply done by iterating over the elements or forcing the conversion to a list:
End of explanation
fast_dataset = seqtools.prefetch(augmented_dataset, max_buffered=10, nworkers=2)
%time computed_values = list(fast_dataset)
Explanation: This above evaluation is a bit slow, probably due to the IO operations when loading the images from the hard drive. Maybe using multiple threads could help keep the CPU busy?
End of explanation
regular_and_flipped = seqtools.smap(lambda im: (im, ImageOps.mirror(im)), contrasted)
fast_dataset = seqtools.prefetch(regular_and_flipped, max_buffered=10, nworkers=2)
Explanation: The CPU time is the same because the computations are the same (plus some threading overhead), but wall time is cut down because image processing continues for some images while others are being loaded.
However, we could spare some IO by not reading the same image twice when generating the augmented version, and by the same token save some shared transformations.
To avoid having one thread taking the regular image and another the flipped one in parallel, which would incur a cache miss for the latter, we propose to simply compute the transformed image and its flipped version in one step:
End of explanation
fast_dataset = seqtools.add_cache(fast_dataset, cache_size=1)
flat_dataset = seqtools.unbatch(fast_dataset, batch_size=2)
Explanation: The output now contains pairs of images, to flatten them into a sequence of images, we can "unbatch" them:
End of explanation
%time computed_values = list(flat_dataset)
Explanation: The cache is here to avoid recomputing the pair of images when the second one is accessed, indeed, SeqTools works in a stateless on-demand fashion and recomputes everything by default.
Please note that concatenation would be inappropriate to replace unbatching here. Concatenation checks the length of the sequences to join, which requires in this situation to compute each element from fast_dataset before-hand. Besides, unbatching has slightly faster access times because it assumes that all batches have the same size.
End of explanation |
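A small usage check of the flattened pipeline above; with batch_size=2 the unbatched indices alternate between an image and its mirrored counterpart, and the second access is served from the cached pair:
sample = flat_dataset[0]    # contrasted version of the first image
mirrored = flat_dataset[1]  # its flipped counterpart, from the same cached pair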
2,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fickian Diffusion and Tortuosity
In this example, we will learn how to perform Fickian diffusion on a Cubic network. The algorithm works fine with every other network type, but for now we want to keep it simple. Refer to Tutorials > Network for more details on different network types.
Step1: Generating network
First, we need to generate a Cubic network. For now, we stick to a 2d network, but you might as well try it in 3d!
Step2: Adding geometry
Next, we need to add a geometry to the generated network. A geometry contains information about size of the pores/throats in a network. OpenPNM has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sand stone, electrospun fibers, etc. For now, we'll stick to a simple geometry called SpheresAndCylinders that assigns random values to pore/throat diameters.
Step3: Adding phase
Next, we need to add a phase to our simulation. A phase object(s) contain(s) thermophysical information about the working fluid(s) in the simulation. OpenPNM has tons of prebuilt phases as well! For this simulation, we use air as our working fluid.
Step4: Adding physics
Finally, we need to add a physics. A physics object contains information about the working fluid in the simulation that depend on the geometry of the network. A good example is diffusive conductance, which not only depends on the thermophysical properties of the working fluid, but also depends on the geometry of pores/throats. OpenPNM includes a pre-defined physics class called Standard which as the name suggests contains all the standard pore-scale models to get you going
Step5: Performing Fickian diffusion
Now that everything's set up, it's time to perform our Fickian diffusion simulation. For this purpose, we need to add the FickianDiffusion algorithm to our simulation. Here's how we do it
Step6: Note that network and phase are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase we want to run the algorithm.
Adding boundary conditions
Next, we need to add some boundary conditions to the simulation. By default, OpenPNM assumes zero flux for the boundary pores.
Step7: set_value_BC applies the so-called "Dirichlet" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or ndarray) as the values parameter.
Running the algorithm
Now, it's time to run the algorithm. This is done by calling the run method attached to the algorithm object.
Step8: Post processing
When an algorithm is successfully run, the results are attached to the same object. To access the results, you need to know the quantity for which the algorithm was solving. For instance, FickianDiffusion solves for the quantity pore.concentration, which is somewhat intuitive. However, if you ever forget it, or wanted to manually check the quantity, you can take a look at the algorithm settings
Step9: Visualizing
Now that we know the quantity for which FickianDiffusion was solved, let's take a look at the results
Step10: Calculating flux
You might as well be interested in calculating the mass flux from a boundary! This is easily done in OpenPNM via calling the rate method attached to the algorithm. Let's see how it works
Step11: We can determine the effective diffusivity of the network by solving Fick's law
Step12: And the formation factor can be found since the diffusion coefficient of open air is known
Step13: The tortuosity is defined as follows | Python Code:
import numpy as np
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(10)
ws = op.Workspace()
ws.settings["loglevel"] = 40
np.set_printoptions(precision=5)
Explanation: Fickian Diffusion and Tortuosity
In this example, we will learn how to perform Fickian diffusion on a Cubic network. The algorithm works fine with every other network type, but for now we want to keep it simple. Refer to Tutorials > Network for more details on different network types.
End of explanation
shape = [1, 10, 10]
spacing = 1e-5
net = op.network.Cubic(shape=shape, spacing=spacing)
Explanation: Generating network
First, we need to generate a Cubic network. For now, we stick to a 2d network, but you might as well try it in 3d!
End of explanation
geom = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts)
Explanation: Adding geometry
Next, we need to add a geometry to the generated network. A geometry contains information about size of the pores/throats in a network. OpenPNM has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sand stone, electrospun fibers, etc. For now, we'll stick to a simple geometry called SpheresAndCylinders that assigns random values to pore/throat diameters.
End of explanation
air = op.phase.Air(network=net)
Explanation: Adding phase
Next, we need to add a phase to our simulation. A phase object(s) contain(s) thermophysical information about the working fluid(s) in the simulation. OpenPNM has tons of prebuilt phases as well! For this simulation, we use air as our working fluid.
End of explanation
phys_air = op.physics.Standard(network=net, phase=air, geometry=geom)
Explanation: Adding physics
Finally, we need to add a physics. A physics object contains information about the working fluid in the simulation that depend on the geometry of the network. A good example is diffusive conductance, which not only depends on the thermophysical properties of the working fluid, but also depends on the geometry of pores/throats. OpenPNM includes a pre-defined physics class called Standard which as the name suggests contains all the standard pore-scale models to get you going:
End of explanation
fd = op.algorithms.FickianDiffusion(network=net, phase=air)
Explanation: Performing Fickian diffusion
Now that everything's set up, it's time to perform our Fickian diffusion simulation. For this purpose, we need to add the FickianDiffusion algorithm to our simulation. Here's how we do it:
End of explanation
inlet = net.pores('front')
outlet = net.pores('back')
C_in = 1.0
C_out = 0.0
fd.set_value_BC(pores=inlet, values=C_in)
fd.set_value_BC(pores=outlet, values=C_out)
Explanation: Note that network and phase are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase we want to run the algorithm.
Adding boundary conditions
Next, we need to add some boundary conditions to the simulation. By default, OpenPNM assumes zero flux for the boundary pores.
End of explanation
fd.run()
Explanation: set_value_BC applies the so-called "Dirichlet" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or ndarray) as the values parameter.
Running the algorithm
Now, it's time to run the algorithm. This is done by calling the run method attached to the algorithm object.
End of explanation
print(fd.settings)
Explanation: Post processing
When an algorithm is successfully run, the results are attached to the same object. To access the results, you need to know the quantity for which the algorithm was solving. For instance, FickianDiffusion solves for the quantity pore.concentration, which is somewhat intuitive. However, if you ever forget it, or wanted to manually check the quantity, you can take a look at the algorithm settings:
End of explanation
c = fd['pore.concentration']
r = fd.rate(throats=net.Ts, mode='single')
d = net['pore.diameter']
fig, ax = plt.subplots(figsize=[4, 4])
op.topotools.plot_coordinates(network=net, color_by=c, size_by=d, markersize=400, ax=ax)
op.topotools.plot_connections(network=net, color_by=r, linewidth=3, ax=ax)
_ = plt.axis('off')
Explanation: Visualizing
Now that we know the quantity for which FickianDiffusion was solved, let's take a look at the results:
End of explanation
rate_inlet = fd.rate(pores=inlet)[0]
print(f'Mass flow rate from inlet: {rate_inlet:.5e} mol/s')
Explanation: Calculating flux
You might as well be interested in calculating the mass flux from a boundary! This is easily done in OpenPNM via calling the rate method attached to the algorithm. Let's see how it works:
End of explanation
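# Hedged extra check (not in the original example): by mass conservation the outlet rate
# should balance the inlet rate; we reuse the same fd.rate(pores=...) call shown above.
rate_outlet = fd.rate(pores=outlet)[0]
print(f'Mass flow rate from outlet: {rate_outlet:.5e} mol/s')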
A = (shape[0] * shape[1])*(spacing**2)
L = shape[2]*spacing
D_eff = rate_inlet * L / (A * (C_in - C_out))
print("{0:.6E}".format(D_eff))
Explanation: We can determine the effective diffusivity of the network by solving Fick's law:
$$ D_{eff} = \frac{N_A L}{ A \Delta C} $$
End of explanation
D_AB = air['pore.diffusivity'][0]
F = D_AB / D_eff
print('The formation factor is: ', "{0:.6E}".format(F))
Explanation: And the formation factor can be found since the diffusion coefficient of open air is known:
$$ F = \frac{D_{AB}}{D_{eff}} $$
End of explanation
V_p = geom['pore.volume'].sum()
V_t = geom['throat.volume'].sum()
V_bulk = np.prod(shape)*(spacing**3)
e = (V_p + V_t) / V_bulk
print('The porosity is: ', "{0:.6E}".format(e))
tau = e * D_AB / D_eff
print('The tortuosity is:', "{0:.6E}".format(tau))
Explanation: The tortuosity is defined as follows:
$$ \frac{D_{eff}}{D_{AB}} = \frac{\varepsilon}{\tau} \rightarrow \tau = \varepsilon \frac{ D_{AB}}{D_{eff}} $$
Note that finding the tortuosity requires knowing the porosity, which is annoyingly difficult to calculate accurately, so here we will just gloss over the details.
End of explanation |
2,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Threshold, Dynamic Time Warping
DW (2016.01.04)
Step1: Comparison between fastDTW and normal DTW
Step2: Fast DTW is about 3 times faster than the normal DTW. Using an interpolation with 10 times more samples would make the algorithm 16 times slower than normal. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import medfilt
import gitInformation
from neo.io.neuralynxio import NeuralynxIO
import quantities as pq
import sklearn
from scipy.interpolate import Rbf
import fastdtw
#import dtw
% matplotlib inline
gitInformation.printInformation()
# Session folder with all needed neuralynx files
sessionfolder = 'C:\\Users\\Dominik\\Documents\\GitRep\\kt-2015-DSPHandsOn\\MedianFilter\\Python\\07. Real Data'
# Load the files with all the data and store them as a np.array
NIO = NeuralynxIO(sessiondir = sessionfolder, cachedir = sessionfolder)
block = NIO.read_block()
seg = block.segments[0]
analogsignal = seg.analogsignalarrays[0]
csc = analogsignal.magnitude
plt.figure(figsize=(30,7))
plt.plot(csc)
# Filter the Data with a median filter
filtered = medfilt(csc,45)
new_data = csc-filtered
plt.figure(figsize=(30,10))
plt.plot(csc, color = 'cornflowerblue')
plt.plot(filtered, color = 'g', lw = 1.5)
plt.plot(new_data, color = 'r')
# Automatic Threshold calculation
threshold = 5*np.median(abs(new_data)/0.6745)
# Declaring counter and dead time.
# Dead time: if the threshold is reached, we wait 16 samples until the threshold can be
# activated again
count = -1
count2 = 0
timer = 0
# Dictionary with all thresholded shapes
thresholds = {}
# Get the value in the new_data array:
for i in new_data:
# Increment the counter (counter = position in the array)
count += 1
if i >= threshold:
# check the thresholded window if some values are bigger than 0.00005
temp = [i for i in new_data[count -6 : count + 18] if i >= 0.00005]
# If no values are bigger than 0.00005 and the dead time is zero,
# save the window in the dictionary
if len(temp) == 0 and timer == 0:
# set the timer to 16, so the next 16 samples will be skipped
timer = 16
# increment count2, for the array name
count2 += 1
thresholds["spike{0}".format(count2)] = new_data[count -6 : count + 18]
elif timer > 0:
# Decrement the timer.
timer -= 1
else:
pass
# Transform the thresholded shapes into an array
thresholds_array = np.zeros((24,len(thresholds)))
count = -1
for o in thresholds:
count += 1
thresholds_array[:,count] = thresholds[o]
x = np.arange(24)
x_new = np.linspace(0,24,240)
#Interpolate each spike with a Cubic RBF function
thresholds_interp = np.zeros((len(x_new),len(thresholds_array[1,:])))
for o in range(len(thresholds_array[1,:])):
newfunc = Rbf(x, thresholds_array[:,o], function = 'cubic')
thresholds_interp[:,o] = newfunc(x_new)
thresholds_norm = thresholds_array/float(thresholds_array.max())
count = -1
for o in range(len(thresholds_interp[0,:])):
count += 1
fig = plt.figure(1)
plt.axis([0,25,-0.78, 1.0])
plt.plot( thresholds_norm[:,o], color = 'black', linewidth = 0.4)
plt.xlabel(str(count))
template1 = thresholds_norm[:,29]
template2 = thresholds_norm[:,31]
template3 = thresholds_norm[:,75]
template4 = thresholds_norm[:,124]
template5 = thresholds_norm[:,175]
templates = np.zeros((1,24))
#templates[0,:] = template1
templates[0,:] = template1
#templates[2,:] = template3
#templates[3,:] = template4
#templates[4,:] = template5
plt.plot(templates[0,:])
#plt.plot(templates[1,:])
#plt.plot(templates[2,:])
#plt.plot(templates[3,:])
#plt.plot(templates[4,:])
#plt.plot(templates[5,:])
Explanation: Threshold, Dynamic Time Warping
DW (2016.01.04)
End of explanation
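# Illustrative sketch (not part of the original analysis): fastdtw.fastdtw returns a
# (distance, warping path) tuple, so a small distance means two sequences have a similar shape.
toy_a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
toy_b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
toy_dist, toy_path = fastdtw.fastdtw(toy_a, toy_b)
print("Toy DTW distance:", toy_dist)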
label = np.zeros(len(thresholds_norm[0,:]))
def fastDtw(thresholds_norm, template1, label):
count = 0
# Go through all detected windows and compare them with the template
for k in range(len(thresholds_norm[0,:])):
dist = fastdtw.fastdtw(template1,thresholds_norm[:,k])
dist = dist[0]
# If the distance between the template and the window is smaller than 1.6, it's a match
if dist < 1.6:
label[k] = 1
else:
pass
return label
label1 = np.zeros(len(thresholds_norm[0,:]))
def normalDtw(thresholds_norm, template1, label1):
count = 0
for k in range(len(thresholds_norm[0,:])):
temp = thresholds_norm[:,k]
o = template1.reshape(-1,1)
dist, cost, acc, path = dtw.dtw(o, temp, dist=lambda o, temp: np.linalg.norm(o - temp, ord=1))
if dist < 0.03:
label1[k] = 1
else:
pass
import time
start_time = time.time()
fastDtw(thresholds_norm, template1, label)
print("--- %s seconds ---" % (time.time() - start_time))
import time
start_time = time.time()
normalDtw(thresholds_norm, template1, label1)
print("--- %s seconds ---" % (time.time() - start_time))
Explanation: Comparison between fastDTW and normal DTW
End of explanation
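# Small addition (assumes label was filled by the fastDtw call above): report how many
# thresholded windows matched the template before plotting them.
print("Windows matching the template:", int(np.sum(label)))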
for i in range(len(label1)):
if label1[i] == 1:
plt.plot(thresholds_array[:,i], color = 'black')
for i in range(len(label)):
if label[i] == 1:
plt.plot(thresholds_array[:,i], color = 'black')
Explanation: Fast DTW is about 3 times faster than the normal DTW. Using an interpolation with 10 times more samples would make the algorithm 16 times slower than normal.
End of explanation |
2,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning with TensorFlow
Credits
Step2: Download the data from the source website if necessary.
Step3: Read the data into a string.
Step4: Build the dictionary and replace rare words with UNK token.
Step5: Function to generate a training batch for the skip-gram model.
Step6: Train a skip-gram model. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import urllib
import zipfile
from matplotlib import pylab
from sklearn.manifold import TSNE
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 5
The goal of this exercise is to train a skip-gram model over Text8 data.
End of explanation
#url = 'http://mattmahoney.net/dc/'
import urllib.request
url = urllib.request.urlretrieve("http://mattmahoney.net/dc/")
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urllib.request.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print ('Found and verified', filename)
else:
print (statinfo.st_size)
raise Exception('Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
#filename = maybe_download("text8.zip",31344016)
Explanation: Download the data from the source website if necessary.
End of explanation
filename=("text8.zip")
def read_data(filename):
f = zipfile.ZipFile(filename)
for name in f.namelist():
return f.read(name).split()
f.close()
words = read_data(filename)
print ('Data size', len(words))
Explanation: Read the data into a string.
End of explanation
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print ('Most common words (+UNK)', count[:5])
print ('Sample data', data[:10])
del words # Hint to reduce memory.
Explanation: Build the dictionary and replace rare words with UNK token.
End of explanation
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(int(batch_size / num_skips)):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
print (batch[i], '->', labels[i, 0])
print (reverse_dictionary[batch[i]], '->', reverse_dictionary[labels[i, 0]])
Explanation: Function to generate a training batch for the skip-gram model.
End of explanation
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default():
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print ("Initialized")
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print ("Average loss at step", step, ":", average_loss)
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = "Nearest to %s:" % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = "%s %s," % (log, close_word)
print (log)
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
from pyspark import SparkContext
from pyspark.mllib.feature import Word2Vec
#sc = SparkContext(appName='Word2Vec')
inp = sc.textFile("url.txt").map(lambda row: row.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(inp) #Results in exception...
print(model.getVectors)
print(model.getVectors)
model.call
model.findSynonyms
model.load
model.save
model.transform
model.getVectors
sc
from __future__ import print_function
import sys
from pyspark import SparkContext
from pyspark.mllib.feature import Word2Vec
USAGE = ("bin/spark-submit --driver-memory 4g "
"examples/src/main/python/mllib/word2vec.py text8_lines")
if __name__ == "__main__":
if len(sys.argv) < 2:
print(USAGE)
sys.exit("Argument for file not provided")
file_path = sys.argv[1]
file_path="url.txt"
# sc = SparkContext(appName='Word2Vec')
inp = sc.textFile(file_path).map(lambda row: row.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(inp)
synonyms = model.findSynonyms('1', 5)
for word, cosine_distance in synonyms:
print("{}: {}".format(word, cosine_distance))
sc.stop()
from pyspark.mllib.feature import HashingTF, IDF
# Load documents (one per line).
documents = sc.textFile("url.txt").map(lambda line: line.split(" "))
hashingTF = HashingTF()
tf = hashingTF.transform(documents)
# While applying HashingTF only needs a single pass to the data, applying IDF needs two passes:
# First to compute the IDF vector and second to scale the term frequencies by IDF.
tf.cache()
idf = IDF().fit(tf)
tfidf = idf.transform(tf)
# spark.mllib's IDF implementation provides an option for ignoring terms
# which occur in less than a minimum number of documents.
# In such cases, the IDF for these terms is set to 0.
# This feature can be used by passing the minDocFreq value to the IDF constructor.
idfIgnore = IDF(minDocFreq=2).fit(tf)
tfidfIgnore = idfIgnore.transform(tf)
from pyspark.mllib.feature import Word2Vec
inp = sc.textFile("data/mllib/sample_lda_data.txt").map(lambda row: row.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(inp)
synonyms = model.findSynonyms('1', 5)
for word, cosine_distance in synonyms:
print("{}: {}".format(word, cosine_distance))
Explanation: Train a skip-gram model.
End of explanation |
2,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="tmva_logo.gif" height="20%" width="20%">
TMVA Higgs Classification Example in Python
In this example we will still do Higgs classification but we will use together with the native TMVA methods also methods from Keras and scikit-learn.
Step1: Declare Factory
Create the Factory class. Later you can choose the methods
whose performance you'd like to investigate.
The factory is the major TMVA object you have to interact with. Here is the list of parameters you need to pass
The first argument is the base of the name of all the output
weightfiles in the directory weight/ that will be created with the
method parameters
The second argument is the output file for the training results
The third argument is a string option defining some general configuration for the TMVA session. For example all TMVA output can be suppressed by removing the "!" (not) in front of the "Silent" argument in the option string
Step2: Input Data
We define now the input data file and we retrieve the ROOT TTree objects with the signal and background input events
Step3: Declare DataLoader(s)
The next step is to declare the DataLoader class that deals with input data and variables
We add first the signal and background trees in the data loader and then we
define the input variables that shall be used for the MVA training
note that you may also use variable expressions, which can be parsed by TTree
Step4: Setup Dataset(s)
Setup the DataLoader by splitting events in training and test samples.
Here we use a random split and a fixed number of training and test events.
Step5: Booking Methods
Here we book the TMVA methods. We book a BDT and a standard MLP (shallow NN)
Step6: Using scikit-learn
here we book some scikit learn packages
Step7: Booking Deep Neural Network
Here we book the new DNN of TMVA. We use the new DL method available in TMVA
1. Define DNN layout
we need to define (note the use of the character | as separator of input parameters)
input layout
Step8: 2. Define Trainining Strategy
We define here the different training strategies for the DNN. One can concatenate different training strategies by changing parameters like
Step9: 3. Define general options and book method
We define the general DNN options such as
Type of Loss function (e.g. cross entropy)
Weight Initialization (e.g. XAVIER, XAVIERUNIFORM, NORMAL)
Variable Transformation
Type of Architecture (e.g. CPU, GPU, Standard)
We add then also all the other options defined before
Step10: Train Methods
Step11: Test all methods
Here we test all methods using the test data set
Step12: Evaluate all methods
Here we evaluate all methods and compare their performances, computing efficiencies, ROC curves etc., using both training and testing data sets. Several histograms are produced which can be examined with the TMVAGui or directly using the output file
Step13: Plot ROC Curve
We enable JavaScript visualisation for the plots
Step14: Close outputfile to save all output information (evaluation result of methods) | Python Code:
import ROOT
from ROOT import TMVA
Explanation: <img src="tmva_logo.gif" height="20%" width="20%">
TMVA Higgs Classification Example in Python
In this example we will still do Higgs classification but we will use together with the native TMVA methods also methods from Keras and scikit-learn.
End of explanation
ROOT.TMVA.Tools.Instance()
## For PYMVA methods
TMVA.PyMethodBase.PyInitialize();
outputFile = ROOT.TFile.Open("Higgs_ClassificationOutput.root", "RECREATE")
factory = ROOT.TMVA.Factory("TMVA_Higgs_Classification", outputFile,
"!V:ROC:!Silent:Color:!DrawProgressBar:AnalysisType=Classification" )
Explanation: Declare Factory
Create the Factory class. Later you can choose the methods
whose performance you'd like to investigate.
The factory is the major TMVA object you have to interact with. Here is the list of parameters you need to pass
The first argument is the base of the name of all the output
weightfiles in the directory weight/ that will be created with the
method parameters
The second argument is the output file for the training results
The third argument is a string option defining some general configuration for the TMVA session. For example all TMVA output can be suppressed by removing the "!" (not) in front of the "Silent" argument in the option string
End of explanation
inputFileName = "Higgs_data.root"
inputFile = ROOT.TFile.Open( inputFileName )
# retrieve input trees
signalTree = inputFile.Get("sig_tree")
backgroundTree = inputFile.Get("bkg_tree")
signalTree.Print()
Explanation: Input Data
We define now the input data file and we retrieve the ROOT TTree objects with the signal and background input events
End of explanation
loader = ROOT.TMVA.DataLoader("dataset")
### global event weights per tree (see below for setting event-wise weights)
signalWeight = 1.0
backgroundWeight = 1.0
### You can add an arbitrary number of signal or background trees
loader.AddSignalTree ( signalTree, signalWeight )
loader.AddBackgroundTree( backgroundTree, backgroundWeight )
## Define input variables
loader.AddVariable("m_jj")
loader.AddVariable("m_jjj")
loader.AddVariable("m_lv")
loader.AddVariable("m_jlv")
loader.AddVariable("m_bb")
loader.AddVariable("m_wbb")
loader.AddVariable("m_wwbb")
Explanation: Declare DataLoader(s)
The next step is to declare the DataLoader class that deals with input data and variables
We add first the signal and background trees in the data loader and then we
define the input variables that shall be used for the MVA training
note that you may also use variable expressions, which can be parsed by TTree::Draw( "expression" )
End of explanation
## Apply additional cuts on the signal and background samples (can be different)
mycuts = ROOT.TCut("") ## for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
mycutb = ROOT.TCut("") ## for example: TCut mycutb = "abs(var1)<0.5";
loader.PrepareTrainingAndTestTree( mycuts, mycutb,
"nTrain_Signal=1000:nTrain_Background=1000:"
"nTest_Signal=1000:nTest_Background=1000:"
"SplitMode=Random:NormMode=NumEvents:!V" )
Explanation: Setup Dataset(s)
Setup the DataLoader by splitting events in training and test samples.
Here we use a random split and a fixed number of training and test events.
End of explanation
## Boosted Decision Trees
factory.BookMethod(loader,ROOT.TMVA.Types.kBDT, "BDT",
"!V:NTrees=200:MinNodeSize=2.5%:MaxDepth=2:BoostType=AdaBoost:AdaBoostBeta=0.5:UseBaggedBoost:"
"BaggedSampleFraction=0.5:SeparationType=GiniIndex:nCuts=20" )
## Multi-Layer Perceptron (Neural Network)
factory.BookMethod(loader, ROOT.TMVA.Types.kMLP, "MLP",
"!H:!V:NeuronType=tanh:VarTransform=N:NCycles=100:HiddenLayers=N+5:TestRate=5:!UseRegulator" );
Explanation: Booking Methods
Here we book the TMVA methods. We book a BDT and a standard MLP (shallow NN)
End of explanation
#factory.BookMethod(loader, ROOT.TMVA.Types.kPyGTB, "PyGTB","H:!V:VarTransform=G:NEstimators=400:LearningRate=0.1:"
# "MaxDepth=3")
#
#factory.BookMethod(loader, ROOT.TMVA.Types.kPyRandomForest, "PyRandomForest","!V:VarTransform=G:NEstimators=400:"
# "Criterion=gini:MaxFeatures=auto:MaxDepth=6:MinSamplesLeaf=3:MinWeightFractionLeaf=0:"
# "Bootstrap=kTRUE" )
#
#factory.BookMethod(loader, ROOT.TMVA.Types.kPyAdaBoost, "PyAdaBoost","!V:VarTransform=G:NEstimators=400" )
Explanation: Using scikit-learn
here we book some scikit learn packages
End of explanation
inputLayoutString = "InputLayout=1|1|7";
batchLayoutString= "BatchLayout=1|100|7";
layoutString = ("Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR")
Explanation: Booking Deep Neural Network
Here we book the new DNN of TMVA. We use the new DL method available in TMVA
1. Define DNN layout
we need to define (note the use of the character | as separator of input parameters)
input layout : this defines the input data format for the DNN as input depth | height | width.
In case of a dense layer as first layer the input layout should be 1 | 1 | number of input variables (features)
batch layout : this defines the format of the input batch. It is related to the input layout but not the same.
If the first layer is dense it should be 1 | batch size | number of variables (features)
layout string defining the architecture. The syntax is
layer type (e.g. DENSE, CONV, RNN)
layer parameters (e.g. number of units)
activation function (e.g TANH, RELU,...)
the different layers are separated by the ","
End of explanation
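## Purely illustrative, hypothetical alternative layout string (not used below), following the same
## syntax LAYERTYPE|parameters|ACTIVATION, with layers separated by ",":
altLayoutString = "Layout=DENSE|128|RELU,DENSE|64|RELU,DENSE|1|LINEAR"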
##Training strategies
## one can concatenate several training strategies
training1 = "Optimizer=ADAM,LearningRate=1e-3,Momentum=0.,Regularization=None,WeightDecay=1e-4,"
training1 += "DropConfig=0.+0.+0.+0.,MaxEpochs=20,ConvergenceSteps=10,BatchSize=100,TestRepetitions=1"
# we add regularization in the second phase
training2 = "Optimizer=ADAM,LearningRate=1e-3,Momentum=0.,Regularization=L2,WeightDecay=1e-4,"
training2 += "DropConfig=0.0+0.0+0.0+0,MaxEpochs=20,ConvergenceSteps=10,BatchSize=100,TestRepetitions=1"
trainingStrategyString = "TrainingStrategy=" + training1 ## + training2
Explanation: 2. Define Trainining Strategy
We define here the different training strategies for the DNN. One can concatenate different training strategies by changing parameters like:
- Optimizer
- Learning rate
- Momentum (valid for SGD and RMSPROP)
- Regularization and Weight Decay
- Dropout
- Max number of epochs
- Convergence steps. if the test error will not decrease after that value the training will stop
- Batch size (This value must be the same specified in the input layout)
- Test Repetitions (the interval when the test error will be computed)
End of explanation
## General Options.
dnnOptions = "!H:V:ErrorStrategy=CROSSENTROPY:WeightInitialization=XAVIER::Architecture=CPU"
dnnOptions += ":" + inputLayoutString
dnnOptions += ":" + batchLayoutString
dnnOptions += ":" + layoutString
dnnOptions += ":" + trainingStrategyString
#we can now book the method
factory.BookMethod(loader, ROOT.TMVA.Types.kDL, "DL_CPU", dnnOptions)
## to use tensorflow backend
import os
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.initializers import TruncatedNormal
from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPooling2D, Reshape, BatchNormalization
# Define model
model = Sequential()
model.add(Dense(64, kernel_initializer='glorot_normal', activation='tanh', input_dim=7))
#model.add(Dropout(0.2))
model.add(Dense(64, kernel_initializer='glorot_normal', activation='tanh'))
#model.add(Dropout(0.2))
model.add(Dense(64, kernel_initializer='glorot_normal', activation='tanh'))
model.add(Dense(2, kernel_initializer='glorot_uniform', activation='softmax'))
# Set loss and optimizer
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy',])
# Store model to file
model.save('model_dense.h5')
# Print summary of model
model.summary()
factory.BookMethod(loader, ROOT.TMVA.Types.kPyKeras, 'Keras_Dense',
'H:!V:FilenameModel=model_dense.h5:'+\
'NumEpochs=20:BatchSize=100:TriesEarlyStopping=10')
Explanation: 3. Define general options and book method
We define the general DNN options such as
Type of Loss function (e.g. cross entropy)
Weight Initialization (e.g. XAVIER, XAVIERUNIFORM, NORMAL)
Variable Transformation
Type of Architecture (e.g. CPU, GPU, Standard)
We add then also all the other options defined before
End of explanation
factory.TrainAllMethods();
Explanation: Train Methods
End of explanation
factory.TestAllMethods();
Explanation: Test all methods
Here we test all methods using the test data set
End of explanation
factory.EvaluateAllMethods();
Explanation: Evaluate all methods
Here we evaluate all methods and compare their performances, computing efficiencies, ROC curves etc., using both training and testing data sets. Several histograms are produced which can be examined with the TMVAGui or directly using the output file
End of explanation
%jsroot on
c1 = factory.GetROCCurve(loader);
c1.Draw();
Explanation: Plot ROC Curve
We enable JavaScript visualisation for the plots
End of explanation
##outputFile.Close();
Explanation: Close outputfile to save all output information (evaluation result of methods)
End of explanation |
2,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Training - Lesson 1 - Variables and Data Types
Variables
A variable refers to a certain value with specific type. For example, we may want to store a number, a fraction, or a name, date, maybe a list of numbers. All those need to be reachable using some name, some reference, which we create when we create a variable.
After we create a variable with a value, we can peek at what's inside using "print" method.
Step1: How to assign values to variables?
Single assignment
Step2: Multiple assignment
Step3: What is a reference? What is a value?
You could ask
Step4: To be completely precise, let's look at creating two variables that store some names. To see where in memory does the object go, we can use method "id". To see the hex representation of this memory, as you will usually see, we can use the method "id".
Step5: Now, let's change this name to something else.
Step6: The important bit is that, even though we use the same variable "person_age", the memory address changed. The object holding integer '22' is still living somewhere on the process heap, but is no longer bound to any name, and probably will be deleted by the "Garbage Collector". The binding that exists now, if from name "person_age" to the int object "24".
The same can be said about variable 'some_person'.
Mutability and immutability
The reason we need to talk about this, is that when you use variables in Python, you have to understand that such a "binding" can be shared! When you modify one, the other shared bindings will be modified as well! This is true for "mutable" objects. There are also "immutable" objects, that behave in a standard, standalone, not-changeable way.
Immutable types
Step7: Now, when we modify the binding of 'shared_list' variable, both of our variables will change also!
Step8: This can be very confusing later on, if you do not grasp this right now. Feel free to play around
Step9: float
Floating decimal point numbers. Used usually for everything that is not an 'int'.
Step10: complex
Complex numbers. Advanced sorceries of mathematicians. In simple terms, numbers that have two components. Historically, they were named 'real' component (regular numbers) and 'imaginary' component - marked in Python using the 'j' letter.
Step11: Numeric operations
Step12: Strings
Represents text, or to be more specific, sequences of 'Unicode' characters. To let Python know we are using strings, put them in quotes, either single, or double.
Step13: Even though strings are not numbers, you can do a lot of operations on them using the usual operators.
Step14: Actually, strings are 'lists' of characters. We will explore lists in just a moment, but I want you to become familiar with a new notation. It is based on the order of sequence. When I say, "Give me the second character of this string", I can write is as such
Step15: Since we are counting from 0, the second character has index = 1.
Now, say I want characters from second, to fourth.
Step16: These operations are called 'slicing'.
We can also find substrings in other substrings. THe result is the index, at which this substring occurs.
Step17: We can also replace substrings in a bigger string. Very convenient. But more complex replacements or searches are done using regular expressions, which we will cover later
Step18: Boolean
It represents the True and False values. Variables of this type, can be only True or False.
It is useful to know, that in Python we can check any variable to be True or False, even lists!
We use the bool() function.
Step19: List
Prepare to use this data type A LOT. Lists can store any objects, and have as many elements, as you like. The most important thing about lists, is that their elements are ordered. You can create a list by making an empty list, converting something else to a list, or defining elements of a list right there, when you declare it.
Creating lists.
Step20: Selecting from lists
Step21: Adding and removing from a list
Step22: Iterating over a list
But lists are not only used to hold some sequences! You can iterate over a list. This means no more, no less, then doing something for each of the elements in a given range, or for all of them. We will cover the so-called 'for' loop in next lessons, but I guess you can easily imagine what this minimal example would do.
Step23: Even though the short notation is a more advanced topic, it is very elegant and 'pythonic'. This way of writing down the process of iteration is called 'list comprehensions'.
Tuple
A tuple is a simple data structure - it behaves pretty much like a list, except for one fact - you can not change elements of tuple after it is created! You create it the same as a list, but using normal brackets.
Step24: Dictionary
This data structure is very useful. In essence, it stores pairs of values, first of which is always a "key", a unique identifier, and the "value", which is the connected object.
A dictionary performs a mapping between keys and values. Because the key is always unique (has to be, we will find out in a minute), there is always exactly one key with specific content.
A dictionary is also very efficient - finding a value in a dictionary takes only one operation, whereas searching through a list one by one could require going through the whole list.
This means that for any situation, where you need to store lot's of values, that will be often used, it is much better to store them in a dictionary.
Also, I recommend to read on Wikipedia on "hash maps".
Creating dictionaries
Step25: Using dictionaries
Add key-value pairs
Step26: Remove items
Step27: Inspect a dictionary
Step28: Iterate over dictionary
Step29: Example of looking for a specific thing in a list, and in a dictionary
Step30: Sets
A set behaves pretty much like a mixture of a dictionary and a list. It has two features | Python Code:
my_name = 'Adam'
print my_name
my_age = 92
your_age = 23
age_difference = my_age - your_age
print age_difference
Explanation: Python Training - Lesson 1 - Variables and Data Types
Variables
A variable refers to a certain value with specific type. For example, we may want to store a number, a fraction, or a name, date, maybe a list of numbers. All those need to be reachable using some name, some reference, which we create when we create a variable.
After we create a variable with a value, we can peek at what's inside using "print" method.
End of explanation
a = 1
Explanation: How to assign values to variables?
Single assignment
End of explanation
a, b, c = 1, 2, 3
print a, b, c
a = b = c = d = "The same string"
print a, b, c, d
Explanation: Multiple assignment
End of explanation
type(my_age)
Explanation: What is a reference? What is a value?
You could ask: does Python use call-by-value, or call-by-reference? Neither of those, actually. Variables in Python are "names", that ALWAYS bind to some object, because mostly everything in Python is an object, a complex type. So assigning a variable means, binding this "name" to an object.
Actually, each time you create a number, you are not using a classic approach, like for example in C++:
int my_integer = 1;
When we look at an integer in Python, it's actually an object of type 'int'. To check the type of an object, use the "type" method.
End of explanation
some_person = "Andrew"
person_age = 22
print some_person, type(some_person), hex(id(some_person))
print person_age, type(person_age), hex(id(person_age))
Explanation: To be completely precise, let's look at creating two variables that store some names. To see where in memory does the object go, we can use method "id". To see the hex representation of this memory, as you will usually see, we can use the method "id".
End of explanation
some_person = "Jamie"
person_age = 24
print some_person, type(some_person), hex(id(some_person))
print person_age, type(person_age), hex(id(person_age))
Explanation: Now, let's change this name to something else.
End of explanation
shared_list = [11,22]
my_list = shared_list
your_list = shared_list
print shared_list, my_list, your_list
Explanation: The important bit is that, even though we use the same variable "person_age", the memory address changed. The object holding integer '22' is still living somewhere on the process heap, but is no longer bound to any name, and probably will be deleted by the "Garbage Collector". The binding that exists now, if from name "person_age" to the int object "24".
The same can be said about variable 'some_person'.
Mutability and immutability
The reason we need to talk about this, is that when you use variables in Python, you have to understand that such a "binding" can be shared! When you modify one, the other shared bindings will be modified as well! This is true for "mutable" objects. There are also "immutable" objects, that behave in a standard, standalone, not-changeable way.
Immutable types: int, float, decimal, complex, bool, string, tuple, range, frozenset, bytes
Mutable types: list, dict, set, bytearray, user-defined classes
End of explanation
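# Quick illustrative contrast (added sketch): immutable objects are not shared in this way -
# rebinding one name to a new int leaves the other name untouched.
shared_number = 10
my_number = shared_number
shared_number = 99
print shared_number, my_number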
shared_list.append(33)
print shared_list, my_list, your_list
Explanation: Now, when we modify the binding of 'shared_list' variable, both of our variables will change also!
End of explanation
a = 111
print a, type(a)
b = 111111111111111111111111111111111
print b, type(b)
Explanation: This can be very confusing later on, if you do not grasp this right now. Feel free to play around :)
Data types
What is a data type? It is a way of telling our computer, that we want to store a specific kind of information in a particular variable. This allows us to access tools and mechanisms that are allowed for that type.
We already mentioned that actually every time we create a variable, we create a complex type variable, or an object.
This is called creating an object, or instantiating an object. Each object comes from a specific template, or how we call it in Object Oriented Programming, from a class.
So when you assign a variable, you instantiate an object from a class.
In Python, every data type is a class!
Also, we will use some built-in tools for inspection - type() and isinstance() functions. The function type() will just say from which class does this object come from. THe function isinstance() will take an object reference, and then a class name, and will tell you if this is an instance of this class.
Let's review data types used in Python (most of them).
Numeric types
These types allow you to store numbers. Easy.
int
Integers. If you create a really big integer, it will become a 'long integer', or 'long'.
End of explanation
c = 11.33333
d = 11111.33
print c, type(c)
print d, type(d)
Explanation: float
Floating decimal point numbers. Used usually for everything that is not an 'int'.
End of explanation
c = 2 + 3j
print c, type(c)
Explanation: complex
Complex numbers. Advanced sorceries of mathematicians. In simple terms, numbers that have two components. Historically, they were named 'real' component (regular numbers) and 'imaginary' component - marked in Python using the 'j' letter.
End of explanation
# Addition
print(1+1)
# Multiplication
print(2*2)
# Division
print(4/2)
# Remainder of division
print(5%2)
# Power
print(2**4)
Explanation: Numeric operations
End of explanation
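# A small added note: in Python 2, dividing two integers floors the result.
# The '//' operator always does floor division, and making one operand a float gives true division.
print(5/2)      # 2 in Python 2 (integer division)
print(5//2)     # 2 (floor division)
print(5/2.0)    # 2.5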
a = "Something"
b = 'Something else'
print type(a), type(b)
Explanation: Strings
Represents text, or to be more specific, sequences of 'Unicode' characters. To let Python know we are using strings, put them in quotes, either single, or double.
End of explanation
name = 'Adam'
print name + name
print name * 3
Explanation: Even though strings are not numbers, you can do a lot of operations on them using the usual operators.
End of explanation
print 'Second character is: ' + name[1]
Explanation: Actually, strings are 'lists' of characters. We will explore lists in just a moment, but I want you to become familiar with a new notation. It is based on the order of sequence. When I say, "Give me the second character of this string", I can write is as such:
End of explanation
print 'From second to fourth: ' + name[1:4]
print 'The last character (or first counting from the end) is: ' + name[-1]
print 'All characters, but skip every second: ' + name[0:4:2]
Explanation: Since we are counting from 0, the second character has index = 1.
Now, say I want characters from second, to fourth.
End of explanation
some_string = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAxxAAAAAAAAAAAAAAAAAAAA"
substring = "xx"
location = some_string.find(substring)
print("Lets see what we found:")
print(some_string[location:location+len(substring)])
Explanation: These operations are called 'slicing'.
We can also find substrings in other substrings. THe result is the index, at which this substring occurs.
End of explanation
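# Added sketch: find() returns -1 when the substring is not present,
# and the 'in' operator gives a simple True/False membership check.
print(some_string.find("zz"))
print("xx" in some_string)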
some_string = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAxxAAAAAAAAAAAAAAAAAAAA"
substring = "xx"
print(some_string.replace( substring , "___REPLACED___"))
Explanation: We can also replace substrings in a bigger string. Very convenient. But more complex replacements or searches are done using regular expressions, which we will cover later
End of explanation
a = True
b = False
print("Is a equal to b ?")
print(a==b)
print("Logical AND")
print(a and b)
print("Logical OR")
print(a or b)
print("Logical value of True")
print( bool(a) )
print("Logical value of an empty list")
print( bool([]) )
print("Logical value of an empty string")
print( bool("") )
print("Logical value of integer 0")
print( bool(0) )
Explanation: Boolean
It represents the True and False values. Variables of this type, can be only True or False.
It is useful to know, that in Python we can check any variable to be True or False, even lists!
We use the bool() function.
End of explanation
empty_list = []
list_from_something_else = list('I feel like Im going to explode')
list_elements_defined_when_list_is_created = [1, 2, 3, 4]
print empty_list
print list_from_something_else
print list_elements_defined_when_list_is_created
Explanation: List
Prepare to use this data type A LOT. Lists can store any objects, and have as many elements, as you like. The most important thing about lists, is that their elements are ordered. You can create a list by making an empty list, converting something else to a list, or defining elements of a list right there, when you declare it.
Creating lists.
End of explanation
l = ["a", "b", "c", "d", "e"]
print l[0]
print l[-1]
print l[1:3]
Explanation: Selecting from lists
End of explanation
l = []
l.append(1)
print l
l[0] = 222
print l
l.remove(1)
print l
l = [1,2,3,3,4,5,3,2,3,2]
# Make a new list from a part of that list
new = l[4:7]
print new
Explanation: Adding and removing from a list
End of explanation
# Do something for all of elements.
for element in [1, 2, 3]:
print element + 20
# Do something for numbers coming from a range of numbers.
for number in range(0,3):
print number + 20
# Do something for all of elements, but written in a short way.
some_list = ['a', 'b', 'c']
print [element*2 for element in some_list]
Explanation: Iterating over a list
But lists are not only used to hold some sequences! You can iterate over a list. This means no more, no less, then doing something for each of the elements in a given range, or for all of them. We will cover the so-called 'for' loop in next lessons, but I guess you can easily imagine what this minimal example would do.
End of explanation
some_tuple = (1,3,4)
print some_tuple
print type(some_tuple)
print len(some_tuple)
print some_tuple[0]
print some_tuple[-1]
print some_tuple[1:2]
other_tuple = 1, 2, 3
print other_tuple
print type(other_tuple)
# This will cause an error! You can not modify a tuple.
some_tuple[1] = 22
Explanation: Even though the short notation is a more advanced topic, it is very elegant and 'pythonic'. This way of writing down the process of iteration is called 'list comprehensions'.
Tuple
A tuple is a simple data structure - it behaves pretty much like a list, except for one fact - you can not change elements of tuple after it is created! You create it the same as a list, but using normal brackets.
End of explanation
empty_dictionary = {}
print empty_dictionary
print type(empty_dictionary)
dictionary_from_direct_definition = {"key1": 1, "key2": 33}
print dictionary_from_direct_definition
# Let's create a dictionary from a list of tuples
dictionary_from_a_collection = dict([("a", 1), ("b", 2)])
print dictionary_from_a_collection
# Let's create a dictionary from two lists
some_list_with_strings = ["a", "b", "c"]
some_list_with_numbers = [1,2,3]
dictionary_from_two_lists = dict(zip(some_list_with_strings, some_list_with_numbers))
print dictionary_from_two_lists
print type(dictionary_from_two_lists)
# Let's create a dictionary from a dictionary comprehension
dict_from_comprehension = {key:value for key, value in zip(some_list_with_strings, some_list_with_numbers)}
print dict_from_comprehension
Explanation: Dictionary
This data structure is very useful. In essence, it stores pairs of values, first of which is always a "key", a unique identifier, and the "value", which is the connected object.
A dictionary performs a mapping between keys and values. Because the key is always unique (has to be, we will find out in a minute), there is always exactly one key with specific content.
A dictionary is also very efficient - finding a value in a dictionary takes only one operation, whereas searching through a list one by one could require going through the whole list.
This means that for any situation, where you need to store lot's of values, that will be often used, it is much better to store them in a dictionary.
Also, I recommend to read on Wikipedia on "hash maps".
Creating dictionaries
End of explanation
d = {}
d["a"] = 1
d["bs"] = 22
d["ddddd"] = 31
print d
d.update({"b": 2, "c": 3})
print d
Explanation: Using dictionaries
Add key-value pairs
End of explanation
del d["b"]
print d
d.pop("c")
print d
Explanation: Remove items
End of explanation
# How many keys?
print d.keys()
print len(d)
print len(d.keys())
# How many values?
print d.values()
print len(d.values())
Explanation: Inspect a dictionary
End of explanation
for key, value in d.items():
print key, value
Explanation: Iterate over dictionary
End of explanation
l = ["r", "p", "s", "t"]
d = {a: a for a in l}
# Find "t" in list.
for letter in l:
if letter == "t":
print "Found it!"
else:
print "Not yet!"
# Find "t" in dictionary keys.
print "In dictionary - found it! " + d["t"]
Explanation: Example of looking for a specific thing in a list, and in a dictionary:
End of explanation
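# Added sketch: membership and safe access on a dictionary.
# 'in' checks the keys, and get() returns a default instead of raising an error for a missing key.
print "t" in d
print d.get("t", "not found")
print d.get("zzz", "not found")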
some_sequence = [1,1,1,1,2,2,2,3,3,3]
some_set = set(some_sequence)
print some_set
some_string = "What's going ooooon?"
another_set = set(some_string)
print another_set
some_dictionary = {"a": 2, "b": 2}
print some_dictionary
yet_another_set = set(some_dictionary)
print yet_another_set
print set(some_dictionary.values())
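# Added sketch: sets support the usual mathematical operations.
first = set([1, 2, 3])
second = set([3, 4, 5])
print first | second   # union
print first & second   # intersection
print first - second   # difference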
Explanation: Sets
A set behaves pretty much like a mixture of a dictionary and a list. It has two features:
- it only has unique values
- it does not respect order of things - it has no order, like a dictionary
End of explanation |
2,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise
Step1: Task 1
Read the iris data into a pandas DataFrame, including column names. Name the dataframe iris.
Step2: Task 2
Gather some basic information about the data such as
Step3: Task 3
Use sorting, split-apply-combine, and/or visualization to look for differences between species.
sorting
Step4: split-apply-combine
Step5: visualization
Step6: Task 4
Decide on a set of rules that could be used to predict species based on iris measurements.
Step7: Predicting setosa will be straightforward since all our Iris-setosa petal_areas are < 2 and the other Iris species have petal_areas larger than 2. But what about the petal_areas of Iris-versicolor and Iris-virginica? Some of their petal_area values overlap.
Let's look at that overlap in more detail.
Step8: My set of rules for predicting species | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
# display plots in the notebook
%matplotlib inline
# increase default figure and font sizes for easier viewing
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['font.size'] = 14
Explanation: Exercise: "Human learning" with iris data
Question: Can you predict the species of an iris using petal and sepal measurements?
Read the iris data into a Pandas DataFrame, including column names.
Gather some basic information about the data.
Use sorting, split-apply-combine, and/or visualization to look for differences between species.
Write down a set of rules that could be used to predict species based on iris measurements.
BONUS: Define a function that accepts a row of data and returns a predicted species. Then, use that function to make predictions for all existing rows of data, and check the accuracy of your predictions.
End of explanation
# define a list of column names (as strings)
col_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
# define the URL from which to retrieve the data (as a string)
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
import pandas as pd
# retrieve the CSV file and add the column names. Name the dataframe iris
iris = pd.read_csv(url, sep=",", names=col_names)  # the file has no header row, so passing header=0 would discard the first data row
print(iris)
Explanation: Task 1
Read the iris data into a pandas DataFrame, including column names. Name the dataframe iris.
End of explanation
iris.shape
iris.head()
iris.dtypes
iris.describe()
iris.species.value_counts()
iris.isnull().sum()
Explanation: Task 2
Gather some basic information about the data such as:
* shape
* head
* data types of the columns
* describe
* counts of the values in the column species
* count the nulls
End of explanation
# Sort the values in the petal_width column and display them
iris.sort_values("petal_width").values
Explanation: Task 3
Use sorting, split-apply-combine, and/or visualization to look for differences between species.
sorting
End of explanation
# Find the mean of sepal_length grouped by species
iris.groupby("species").sepal_length.mean()
# Find the mean of all numeric columns grouped by species
iris.groupby("species").mean()
# Get the describe information for all numeric columns grouped by species
iris.groupby("species").describe()
Explanation: split-apply-combine
End of explanation
# Generate a histogram of petal_width grouped by species
plt.style.use('bmh')
iris.hist(column="petal_width", by="species")
#iris.groupby('species').petal_width.plot(kind='hist')
# Display a box plot of petal_width grouped by species
plt.style.use("fivethirtyeight")
iris.boxplot(column="petal_width", by="species")
# Display box plot of all numeric columns grouped by species
#iris.groupby("species").plot(kind="boxplot") #not a box plot but does givie you plots of lines broken out by species
iris.boxplot(by='species')
# map species to a numeric value so that plots can be colored by species
iris['species_num'] = iris.species.map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2})
print(iris.species_num)
# alternative method, I like the mapping better since it also documents what the integer mappings are
#iris['species_num'] = iris.species.factorize()[0]
# Generate a scatter plot of petal_length vs petal_width colored by species
iris.plot(kind="scatter", x="petal_length", y="petal_width", c="species_num", colormap="brg")
# Generate a scatter matrix of all features colored by species. Make the figure size 12x10
pd.scatter_matrix(iris.drop("species_num", axis=1), c=iris.species_num, figsize=(12,10))
Explanation: visualization
End of explanation
# Define a new feature that represents petal area ("feature engineering")
iris["petal_area"] = iris.petal_length * iris.petal_width
# Display a describe of petal_area grouped by species
iris.groupby("species").petal_area.describe().unstack()
# Display a box plot of petal_area grouped by species
iris.boxplot(column="petal_area", by="species")
Explanation: Task 4
Decide on a set of rules that could be used to predict species based on iris measurements.
End of explanation
# Show only dataframe rows with a petal_area between 7 and 9
iris[(iris.petal_area > 7) & (iris.petal_area < 9)].sort_values('petal_area')
Explanation: Predicting setosa will be straightforward since all our Iris-setosa petal_areas are < 2 and the other Iris species have petal_areas larger than 2. But what about the petal_areas of Iris-versicolor and Iris-virginica? Some of their petal_area values overlap.
Let's look at that overlap in more detail.
End of explanation
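# A closer look at the overlap (illustrative sketch): counts of each species on either side
# of a candidate 7.5 petal_area cutoff, the threshold used in the rules below.
print(iris[iris.petal_area >= 7.5].species.value_counts())
print(iris[(iris.species != 'Iris-setosa') & (iris.petal_area < 7.5)].species.value_counts())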
# Define a function that given a row of data, returns a predicted species_num (0/1/2)
def classify_species(row):
petal_area = (row[2] * row[3]) #define petal area, petal_length * petal_width
if petal_area < 2:
prediction = "setosa"
elif petal_area < 7.5:
prediction = "versicolor"
else:
prediction = "virginica"
factorize = {'setosa':0, 'versicolor':1, 'virginica':2} #need to map the strings back to their factors
return factorize[prediction]
# Print the first row
iris.loc[0,:]
# Print the last row
iris.iloc[-1,:]
# Test the function on the first and last rows
print classify_species(iris.loc[0,:])
print classify_species(iris.iloc[-1,:])
# Make predictions for all rows and store them in the DataFrame
iris["y_pred_species"] = [classify_species(row) for index, row in iris.iterrows()]
# Calculate the percentage of correct predictions
sum(iris.species_num == iris.y_pred_species) / float(len(iris))
Explanation: My set of rules for predicting species:
- if petal_area < 2
- then "setsosa"
- elseif petal_area < 7.5
- then "versicolor"
- else "virginica"
Bonus
Define a function that accepts a row of data and returns a predicted species. Then, use that function to make predictions for all existing rows of data, and check the accuracy of your predictions.
End of explanation |
2,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vectorized Operations
not necessary to write loops for element-by-element operations
pandas' Series objects can be passed to MOST NumPy functions
documentation
Step1: add Series without loop
Series within arithmetic expression
Series used as argument to NumPy function
A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same labels.
Step2: Apply Python functions on an element-by-element basis
Step3: Vectorized string methods
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. | Python Code:
import pandas as pd
import numpy as np
my_dictionary = {'a' : 45., 'b' : -19.5, 'c' : 4444}
my_series = pd.Series(my_dictionary)
my_series
Explanation: Vectorized Operations
not necessary to write loops for element-by-element operations
pandas' Series objects can be passed to MOST NumPy functions
documentation: http://pandas.pydata.org/pandas-docs/stable/basics.html
End of explanation
my_series[1:] + my_series[:-1]
Explanation: add Series without loop
Series within arithmetic expression
Series used as argument to NumPy function
A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same labels.
End of explanation
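# Quick sketch of the claims above: a Series can be handed straight to a NumPy function,
# and arithmetic aligns on labels (labels missing from either side come back as NaN).
print(np.abs(my_series))
print(my_series + pd.Series({'a': 1.0, 'z': 2.0}))  # 'z' is a made-up label, used only for illustration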
def multiply_by_ten (input_element):
return input_element * 10.0
Explanation: Apply Python functions on an element-by-element basis
End of explanation
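# Applying the function above to every element; map() would behave the same way here.
print(my_series.apply(multiply_by_ten))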
series_of_strings = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
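# A couple of the .str methods in action; note the NaN entry simply passes through.
print(series_of_strings.str.lower())
print(series_of_strings.str.len())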
Explanation: Vectorized string methods
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically.
End of explanation |
2,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Perceptron Learning in Python
(C) 2017-2019 by Damir Cavar
Download
Step1: Our example data, weights $w$, bias $b$, and input $x$ are defined as
Step2: Our neural unit would compute $z$ as the dot-product $w \cdot x$ and add the bias $b$ to it. The sigmoid function defined above will convert this $z$ value to the activation value $a$ of the unit
Step3: The XOR Problem
The power of neural units comes from combining them into larger networks. Minsky and Papert (1969)
Step4: For AND we could implement a perceptron as
Step5: For OR we could implement a perceptron as | Python Code:
import numpy as np
def sigmoid(z):
return 1 / (1 + np.exp(-z))
Explanation: Perceptron Learning in Python
(C) 2017-2019 by Damir Cavar
Download: This and various other Jupyter notebooks are available from my GitHub repo.
License: Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)
This is a tutorial related to the discussion of machine learning and NLP in the class Machine Learning for NLP: Deep Learning (Topics in Artificial Intelligence) taught at Indiana University in Spring 2018.
What is a Perceptron?
There are many online examples and tutorials on perceptrons and learning. Here is a list of some articles:
- Wikipedia on Perceptrons
- Jurafsky and Martin (ed. 3), Chapter 8
Example
This is an example that I have taken from a draft of the 3rd edition of Jurafsky and Martin, with slight modifications:
We import numpy and use its exp function. We could use the same function from the math module, or some other module like scipy. The sigmoid function is defined as in the textbook:
End of explanation
w = np.array([0.2, 0.3, 0.8])
b = 0.5
x = np.array([0.5, 0.6, 0.1])
Explanation: Our example data, weights $w$, bias $b$, and input $x$ are defined as:
End of explanation
z = w.dot(x) + b
print("z:", z)
print("a:", sigmoid(z))
Explanation: Our neural unit would compute $z$ as the dot-product $w \cdot x$ and add the bias $b$ to it. The sigmoid function defined above will convert this $z$ value to the activation value $a$ of the unit:
End of explanation
def activation(z):
if z > 0:
return 1
return 0
Explanation: The XOR Problem
The power of neural units comes from combining them into larger networks. Minsky and Papert (1969): A single neural unit cannot compute the simple logical function XOR.
The task is to implement a simple perceptron to compute logical operations like AND, OR, and XOR.
Input: $x_1$ and $x_2$
Bias: $b = -1$ for AND; $b = 0$ for OR
Weights: $w = [1, 1]$
with the following activation function:
$$
y = \begin{cases}
0 & \quad \text{if } w \cdot x + b \leq 0 \\
1 & \quad \text{if } w \cdot x + b > 0
\end{cases}
$$
We can define this activation function in Python as:
End of explanation
w = np.array([1, 1])
b = -1
x = np.array([0, 0])
print("0 AND 0:", activation(w.dot(x) + b))
x = np.array([1, 0])
print("1 AND 0:", activation(w.dot(x) + b))
x = np.array([0, 1])
print("0 AND 1:", activation(w.dot(x) + b))
x = np.array([1, 1])
print("1 AND 1:", activation(w.dot(x) + b))
Explanation: For AND we could implement a perceptron as:
End of explanation
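# A minimal brute-force sketch of the XOR point made above: no single unit of this form
# reproduces XOR. The small candidate grid of weights/biases below is an assumption chosen
# purely for illustration.
xor_table = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 0}
candidates = [(np.array([w1, w2]), b)
              for w1 in (-1, 0, 1) for w2 in (-1, 0, 1) for b in (-1, 0, 1)]
solutions = [(w, b) for w, b in candidates
             if all(activation(w.dot(np.array(x)) + b) == t for x, t in xor_table.items())]
print("units that compute XOR:", solutions)  # expected: an empty list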
w = np.array([1, 1])
b = 0
x = np.array([0, 0])
print("0 OR 0:", activation(w.dot(x) + b))
x = np.array([1, 0])
print("1 OR 0:", activation(w.dot(x) + b))
x = np.array([0, 1])
print("0 OR 1:", activation(w.dot(x) + b))
x = np.array([1, 1])
print("1 OR 1:", activation(w.dot(x) + b))
Explanation: For OR we could implement a perceptron as:
End of explanation |
2,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Detailed stats of HGVS in ClinVar
Looking only at records with no functional consequences and no complete chr_pos_ref_alt coordinates. Based on June consequence predictions and ClinVar dump.
Table of contents
Sequence types
Variant types
Ranges
Span lengths
Intronic numbering
Summary
Step3: Sequence types
Refer to documentation here.
Top of page
Step4: Variant types
Documentation here
Compare stats here - distribution over all of ClinVar, using XML attribute not HGVS
Top of page
Step5: Ranges
Mostly documented here though note the first case is not uncertain.
g.12345_12678del- no uncertainty
g.(123456_234567)_(345678_456789)del - variable endpoints within a known range
g.(?_234567)_(345678_?)del - variable endpoints within an unknown range
Also split into genomic vs. other sequences - the former we can definitely deal with, not sure about the others.
See below for intronic coordinate ranges...
Top of page
Step7: Span lengths
Lengths of spans - definite or minimum provided. For now only compute for genomic sequences, as getting span length for coding/noncoding is a bit more complicated.
Top of page
Step8: Intronic numbering
Documentation here and particularly the diagram.
Only used for coding and non-coding reference sequences, so not relevant if we focus on genomic.
Top of page | Python Code:
import os
import re
import sys
import numpy as np
from eva_cttv_pipeline.clinvar_xml_utils import *
from eva_cttv_pipeline.clinvar_identifier_parsing import *
%matplotlib inline
import matplotlib.pyplot as plt
PROJECT_ROOT = '/home/april/projects/opentargets/complex-events'
# dump of all records with no functional consequences and no complete coordinates
# uses June consequence pred + ClinVar 6/26/2021
no_consequences_path = os.path.join(PROJECT_ROOT, 'no-conseq_no-coords.xml.gz')
dataset = ClinVarDataset(no_consequences_path)
def count_hgvs(dataset, regex_dict, exclusive=False, limit=None, include_no_hgvs=True, include_none=True):
    """
    Counts records in dataset with HGVS matching a collection of regexes.
    Can be exclusive or non-exclusive counts (see below).
    If limit is provided, will count at most that many records (useful for testing).
    Notes:
    * records with multiple HGVS expressions need at least one matching a given regex to be counted once
    * can also count measures with no HGVS and ones that match none of the regexes (only if not exclusive)
    * non-exclusive => record has an HGVS expression that matches this regex.
      "If we do support X, how many records could we get?"
    * exclusive => record _only_ has HGVS expressions that match this regex (out of this collection).
      "If we don't support X, how many records must we lose?"
    """
n = 0
# just use a dict instead of a counter, so we have a predictable key order
results = {k: 0 for k in regex_dict}
if include_no_hgvs:
results['no hgvs'] = 0
if not exclusive and include_none:
results['none'] = 0
for record in dataset:
if not record.measure:
continue
if not record.measure.hgvs:
if include_no_hgvs:
results['no hgvs'] += 1
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
temp_results = {
k: any(r.match(h) for h in hs)
for k, r in regex_dict.items()
}
any_match = False
for k in regex_dict:
if exclusive:
if temp_results[k] and not any(temp_results[j] for j in regex_dict if j != k):
results[k] += 1
else:
if temp_results[k]:
results[k] += 1
any_match = True
if not exclusive and include_none and not any_match:
results['none'] += 1
return results
def print_example_matches(dataset, regex_dict, size=1, limit=None, include_none=True):
    """Like count_hgvs but returns (size) example matches for each regex where possible."""
n = 0
all_matches = {k: [] for k in regex_dict}
if include_none:
all_matches['none'] = []
for record in dataset:
if not record.measure or not record.measure.hgvs:
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
for h in hs:
any_match = False
for k, r in regex_dict.items():
if r.match(h):
all_matches[k].append(h)
any_match = True
if not any_match and include_none:
all_matches['none'].append(h)
result = {
k: [v[i] for i in np.random.choice(len(v), size=min(len(v), size), replace=False)] if v else []
for k, v in all_matches.items()
}
for k in result:
print(k)
for s in result[k]:
print(f' {s}')
print('\n==========\n')
Explanation: Detailed stats of HGVS in ClinVar
Looking only at records with no functional consequences and no complete chr_pos_ref_alt coordinates. Based on June consequence predictions and ClinVar dump.
Table of contents
Sequence types
Variant types
Ranges
Span lengths
Intronic numbering
Summary
End of explanation
# be more lenient than what we currently have in identifier_parsing
# for example this allows things like `chr11` or `LRG_199p1`
sequence_identifier = r'[a-zA-Z0-9_.]+:'
seq_type_dict = {
'coding': re.compile(sequence_identifier + r'c\.'),
'genomic': re.compile(sequence_identifier + r'g\.'),
'non-coding': re.compile(sequence_identifier + r'n\.'), # transcript but not coding for a protein
'protein': re.compile(sequence_identifier + r'p\.'),
'mitochondrial': re.compile(sequence_identifier + r'm\.'),
'circular': re.compile(sequence_identifier + r'o\.'),
'RNA': re.compile(sequence_identifier + r'r\.'),
}
print_example_matches(dataset, seq_type_dict, size=5)
seq_type_counts = count_hgvs(dataset, seq_type_dict, exclusive=False)
plt.figure(figsize=(15,7))
plt.title('Sequence type (non-exclusive)')
plt.bar(seq_type_counts.keys(), seq_type_counts.values())
seq_type_counts
# coding or non-coding
2192 + 225
# have hgvs in general
17649 - 4030
seq_type_counts_exclusive = count_hgvs(dataset, seq_type_dict, exclusive=True)
plt.figure(figsize=(15,7))
plt.title('Sequence type (exclusive)')
plt.bar(seq_type_counts_exclusive.keys(), seq_type_counts_exclusive.values())
seq_type_counts_exclusive
Explanation: Sequence types
Refer to documentation here.
Top of page
End of explanation
genomic_sequence = f'^{sequence_identifier}g\.'
all_other_sequence = f'^{sequence_identifier}[a-fh-z]\.'
# double-counts hybrid things, e.g.
# * NC_000013.9:g.93703239_93802554del99316insCTA
# * NC_000016.9:g.2155486_2155487ins2145304_2155487inv
variant_regex = {
'substitution (genomic)': re.compile(f'{genomic_sequence}.*?>.*?'),
'deletion (genomic)': re.compile(f'{genomic_sequence}.*?del(?!ins).*?'),
'duplication (genomic)': re.compile(f'{genomic_sequence}.*?dup.*?'),
'insertion (genomic)': re.compile(f'{genomic_sequence}.*?(?<!del)ins.*?'),
'inversion (genomic)': re.compile(f'{genomic_sequence}.*?inv.*?'),
'delins (genomic)': re.compile(f'{genomic_sequence}.*?delins.*?'),
'substitution (other)': re.compile(f'{all_other_sequence}.*?>.*?'),
'deletion (other)': re.compile(f'{all_other_sequence}.*?del(?!ins).*?'),
'duplication (other)': re.compile(f'{all_other_sequence}.*?dup.*?'),
'insertion (other)': re.compile(f'{all_other_sequence}.*?(?<!del)ins.*?'),
'inversion (other)': re.compile(f'{all_other_sequence}.*?inv.*?'),
'delins (other)': re.compile(f'{all_other_sequence}.*?delins.*?'),
}
print_example_matches(dataset, variant_regex, size=5)
variant_counts = count_hgvs(dataset, variant_regex, include_no_hgvs=False, exclusive=False)
plt.figure(figsize=(15,7))
plt.title('Variant type')
plt.xticks(rotation='vertical')
plt.bar(variant_counts.keys(), variant_counts.values())
variant_counts
Explanation: Variant types
Documentation here
Compare stats here - distribution over all of ClinVar, using XML attribute not HGVS
Top of page
End of explanation
genomic_sequence = f'^{sequence_identifier}g\.'
coding_sequence = f'^{sequence_identifier}c\.'
noncoding_sequence = f'^{sequence_identifier}n\.'
other_sequence = f'^{sequence_identifier}[abdefh-mo-z]\.' # r'^' + sequence_identifier + r'[a-fh-z]\.'
num_range = r'[0-9]+_[0-9]+'
unk_range = r'[0-9?]+_[0-9?]+'
ch = r'[^?_+-]' # we allow characters on either side of the range, but none of this guff
# g.12345_12678del
def definite_range(sequence_type):
return re.compile(f'{sequence_type}{ch}*?{num_range}{ch}*?$')
# g.(123456_234567)_(345678_456789)del
def variable_range(sequence_type):
return re.compile(f'{sequence_type}{ch}*?\({num_range}\)_\({num_range}\){ch}*?$')
# g.(?_234567)_(345678_?)del
def unknown_range(sequence_type):
return re.compile(f'{sequence_type}{ch}*?(?=.*?\?.*?)\({unk_range}\)_\({unk_range}\){ch}*?$')
range_regex = {
'definite (genomic)': definite_range(genomic_sequence),
'variable (genomic)': variable_range(genomic_sequence),
'unknown (genomic)': unknown_range(genomic_sequence),
'definite (coding)': definite_range(coding_sequence),
'variable (coding)': variable_range(coding_sequence),
'unknown (coding)': unknown_range(coding_sequence),
'definite (noncoding)': definite_range(noncoding_sequence),
'variable (noncoding)': variable_range(noncoding_sequence),
'unknown (noncoding)': unknown_range(noncoding_sequence),
'definite (other)': definite_range(other_sequence),
'variable (other)': variable_range(other_sequence),
'unknown (other)': unknown_range(other_sequence),
}
print_example_matches(dataset, range_regex, size=5)
range_counts = count_hgvs(dataset, range_regex, include_no_hgvs=False)
plt.figure(figsize=(15,7))
plt.xticks(rotation='vertical')
plt.title('Ranges')
plt.bar(range_counts.keys(), range_counts.values())
range_counts
# genomic ranges
1735 + 559 + 9311
# coding / noncoding ranges
264 + 79 + 58
Explanation: Ranges
Mostly documented here though note the first case is not uncertain.
g.12345_12678del- no uncertainty
g.(123456_234567)_(345678_456789)del - variable endpoints within a known range
g.(?_234567)_(345678_?)del - variable endpoints within an unknown range
Also split into genomic vs. other sequences - the former we can definitely deal with, not sure about the others.
See below for intronic coordinate ranges...
Top of page
End of explanation
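# Quick illustration of the three range patterns above; the NC_000001.10 accession is
# made up purely for the demonstration.
for example in ['NC_000001.10:g.12345_12678del',
                'NC_000001.10:g.(123456_234567)_(345678_456789)del',
                'NC_000001.10:g.(?_234567)_(345678_?)del']:
    print(example,
          bool(definite_range(genomic_sequence).match(example)),
          bool(variable_range(genomic_sequence).match(example)),
          bool(unknown_range(genomic_sequence).match(example)))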
def span_lengths(dataset, regex, limit=None):
    """
    Returns all span lengths for a given regex.
    This will take the first two captured groups of the regex, convert to integers, and subtract the two.
    It will NOT be smart.
    """
n = 0
all_spans = []
for record in dataset:
if not record.measure or not record.measure.hgvs:
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
for h in hs:
m = regex.match(h)
if m and m.group(1) and m.group(2):
span = int(m.group(2)) - int(m.group(1)) + 1
if span < 0:
print('negative span!!!', h)
else:
all_spans.append(span)
# presumably all hgvs expressions for one record have the same span, don't double count
break
return all_spans
# same as previous but with capturing groups added
def_range = r'([0-9]+)_([0-9]+)'
var_range = r'\([0-9?]+_([0-9]+)\)_\(([0-9]+)_[0-9?]+\)'
def_span_regex = re.compile(f'{genomic_sequence}{ch}*?{def_range}{ch}*?$')
var_span_regex = re.compile(f'{genomic_sequence}{ch}*?{var_range}{ch}*?$')
spans = span_lengths(dataset, def_span_regex) + span_lengths(dataset, var_span_regex)
# This is everything with a known minimum span - genomic reference sequence, X_Y or (?_X)_(Y_?)
print(len(spans))
print('Mean:', np.mean(spans))
print('Median:', np.median(spans))
print('Min:', np.min(spans))
print('Max:', np.max(spans))
# actually reasonable spans...
MAX_REASONABLE_SPAN = 20000 #100000
smaller_spans = [x for x in spans if x < MAX_REASONABLE_SPAN]
print(len(smaller_spans))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title(f'Minimum Spans (less than {MAX_REASONABLE_SPAN})')
# first array is counts per bin
# second array is left edges of bins, plus last right edge
plt.hist(smaller_spans, bins=100)
# VEP acceptable spans
vep_spans = [x for x in spans if x < 5000]
print(len(vep_spans))
Explanation: Span lengths
Lengths of spans - definite or minimum provided. For now only compute for genomic sequences, as getting span length for coding/noncoding is a bit more complicated.
Top of page
End of explanation
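# Spot check of the capturing groups on the hybrid expression quoted earlier:
# 93802554 - 93703239 + 1 = 99316, which matches the del99316 annotation.
m = def_span_regex.match('NC_000013.9:g.93703239_93802554del99316insCTA')
print(m.group(1), m.group(2), int(m.group(2)) - int(m.group(1)) + 1)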
coding_sequence = r'^' + sequence_identifier + r'c\.'
other_sequence = r'^' + sequence_identifier + r'[abd-z]\.'
pivot = r'[-*]?[0-9]+'
offset = r'[+-][0-9]+'
endpoint = pivot + offset
num_range = f'{endpoint}_{endpoint}'
unk_range = f'(?:{endpoint}|\?)_(?:{endpoint}|\?)'
ch = r'[^?_+-]' # we allow characters on either side of the range, but none of this guff
irange_regex = {
'definite intron (coding)': re.compile(coding_sequence + f'{ch}*?{num_range}{ch}*?$'),
'variable intron (coding)': re.compile(coding_sequence + f'{ch}*?\({num_range}\)_\({num_range}\){ch}*?$'),
'unknown intron (coding)': re.compile(coding_sequence + f'{ch}*?(?=.*?\?.*?)\({unk_range}\)_\({unk_range}\){ch}*?$'),
'definite intron (other)': re.compile(other_sequence + f'{ch}*?{num_range}{ch}*?$'),
'variable intron (other)': re.compile(other_sequence + f'{ch}*?\({num_range}\)_\({num_range}\){ch}*?$'),
'unknown intron (other)': re.compile(other_sequence + f'{ch}*?(?=.*?\?.*?)\({unk_range}\)_\({unk_range}\){ch}*?$'),
}
print_example_matches(dataset, irange_regex, size=5, include_none=False)
irange_counts = count_hgvs(dataset, irange_regex, include_no_hgvs=False, include_none=False)
plt.figure(figsize=(15,7))
plt.title('Ranges')
plt.bar(irange_counts.keys(), irange_counts.values())
irange_counts
sum(irange_counts.values())
Explanation: Intronic numbering
Documentation here and particularly the diagram.
Only used for coding and non-coding reference sequences, so not relevant if we focus on genomic.
Top of page
End of explanation |
2,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quadratic Programming
1. Introduction
1.1 Libraries Used
For Quadratic Programming, the packages quadprog and cvxopt were installed
Step1: 1.2 Theory
1.2.1 Lagrange Multipliers
The Lagrangian is given by
Step2: 2.2 Leave-one-out Cross-Validation Example
Step3: The ordinary least squares (OLS) estimator for the weights is
Step4: Constant model (inverse does not work here as the matrix is not square)
Step5: Here, we can see that the 3rd expression gives the same leave-one-out cross validation error as the constant model.
Step6: 3. Quadratic Programming
3.1 Background
In the notation of help(solve_qp), we wish to minimize
Step7: Calculation of dvec
Step8: Here, the quadratic programming code is tested on a handful of 'toy problems' | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy
import cvxopt
import quadprog
from numpy.random import permutation
from sklearn import linear_model
from sympy import var, diff, exp, latex, factor, log, simplify
from IPython.display import display, Math, Latex
np.set_printoptions(precision=4,threshold=400)
%matplotlib inline
Explanation: Quadratic Programming
1. Introduction
1.1 Libraries Used
For Quadratic Programming, the packages quadprog and cvxopt were installed:
bash
pip install quadprog
pip install cvxopt
Help for the appropriate functions is available via
python
help(quadprog.solve_qp)
help(cvxopt.solvers.qp)
The remaining libraries are loaded in the code below:
End of explanation
n_samples = 1000
e1 = np.random.random(n_samples)
e2 = np.random.random(n_samples)
e = np.vstack((e1,e2))
e = np.min(e, axis=0)
print("E(e1) = {}".format(np.mean(e1)))
print("E(e2) = {}".format(np.mean(e2)))
print("E(e ) = {}".format(np.mean(e)))
Explanation: 1.2 Theory
1.2.1 Lagrange Multipliers
The Lagrangian is given by:
$$\mathcal{L}\left(\mathbf{w},b,\mathbf{\alpha}\right) = \frac{1}{2}\mathbf{w^T w} - \sum\limits_{n=1}^N \alpha_n\left[y_n\left(\mathbf{w^T x_n + b}\right) - 1\right]$$
The Lagrangian may be simplified by making the following substitution:
$$\mathbf{w} = \sum\limits_{n=1}^N \alpha_n y_n \mathbf{x_n}, \quad \sum\limits_{n=1}^N \alpha_n y_n = 0$$
whereby we obtain:
$$\mathcal{L}\left(\mathbf{\alpha}\right) = \sum\limits_{n=1}^N \alpha_n - \frac{1}{2}\sum\limits_{n=1}^N \sum\limits_{m=1}^N y_n y_m \alpha_n \alpha_m \mathbf{x_n^T x_m}$$
We wish to maximize the Lagrangian with respect to $\mathbf{\alpha}$ subject to the conditions: $\alpha_n \ge 0$ for:
$$n = 1, \dots, N \quad\text{and}\quad \sum\limits_{n=1}^N \alpha_n y_n = 0$$
To do this, we convert the Lagrangian to match a form that can be used with quadratic programming software packages.
$$\min\limits_\alpha \frac{1}{2}\alpha^T \left[\begin{array}{cccc}
y_1 y_1 \mathbf{x_1^T x_1} & y_1 y_2 \mathbf{x_1^T x_2} & \cdots & y_1 y_N \mathbf{x_1^T x_N}\\
y_2 y_1 \mathbf{x_2^T x_1} & y_2 y_2 \mathbf{x_2^T x_2} & \cdots & y_2 y_N \mathbf{x_2^T x_N}\\
\vdots & \vdots & & \vdots\\
y_N y_1 \mathbf{x_N^T x_1} & y_N y_2 \mathbf{x_N^T x_2} & \cdots & y_N y_N \mathbf{x_N^T x_N}\end{array}\right]\alpha + \left(-\mathbf{1^T}\right)\mathbf{\alpha}$$
i.e.
$$\min\limits_\alpha \frac{1}{2}\alpha^T \mathbf{Q} \alpha + \left(-\mathbf{1^T}\right)\mathbf{\alpha}$$
Subject to the linear constraint: $\mathbf{y^T \alpha} = 0$ and $0 \le \alpha \le \infty$.
1.2.2 Quadratic Programming
In Quadratic Programming, the objective is to find the value of $\mathbf{x}$ that minimizes the function:
$$\frac{1}{2}\mathbf{x^T Q x + c^T x}$$
subject to the constraint:
$$\mathbf{Ax \le b}$$
The support vectors are $\mathbf{x_n}$ where $\alpha_n > 0$.
The solution to the above is calculated using a subroutine such as solve_qp(G, a, C, b),
which finds the $\alpha$'s that minimize:
$$\frac{1}{2}\mathbf{x^T G x} - \mathbf{a^T x}$$
subject to the condition:
$$\mathbf{C^T x} \ge \mathbf{b}$$
The quadratic programming solver is implemented in solve.QP.c, with a Cython wrapper quadprog.pyx. The unit tests are in test_1.py, which compares the solution from quadprog's solve_qp() with that obtained from scipy.optimize.minimize, and in test_factorized.py.
2. Validation
2.1 Is there a Validation Bias when choosing the minimum of two random variables?
Let $\text{e}_1$ and $\text{e}_2$ be independent random variables, distributed uniformly over the interval [0, 1]. Let $\text{e} = \min\left(\text{e}_1, \text{e}_2\right)$. What are the expected values of $\left(\text{e}_1, \text{e}_2, \text{e}\right)$?
End of explanation
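# Analytic cross-check (sketch): for independent U(0,1) variables, P(min > t) = (1 - t)^2,
# so E[min] = 1/3 while E[e1] = E[e2] = 1/2, consistent with the simulated means above.
from sympy import symbols, integrate
t = symbols('t')
print(integrate((1 - t)**2, (t, 0, 1)))  # 1/3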
from sympy import Matrix, Rational, Eq, sqrt
var('x1 x2 x3 rho')
Explanation: 2.2 Leave-one-out Cross-Validation Example
End of explanation
def linear_model_cv_err(x1,y1,x2,y2,x3,y3):
X_train1 = Matrix((x2,x3))
X_train2 = Matrix((x1,x3))
X_train3 = Matrix((x1,x2))
display(Math('X_1^{train} = ' + latex(X_train1) + ', ' +
'X_2^{train} = ' + latex(X_train2) + ', ' +
'X_3^{train} = ' + latex(X_train3)))
display(Math('(X_1^{train})^{-1} = ' + latex(X_train1.inv()) + ', ' +
'(X_2^{train})^{-1} = ' + latex(X_train2.inv()) + ', ' +
'(X_3^{train})^{-1} = ' + latex(X_train3.inv()) ))
y_train1 = Matrix((y2,y3))
y_train2 = Matrix((y1,y3))
y_train3 = Matrix((y1,y2))
display(Math('y_1^{train} = ' + latex(y_train1) + ', ' +
'y_2^{train} = ' + latex(y_train2) + ', ' +
'y_3^{train} = ' + latex(y_train3)))
w1 = X_train1.inv() * y_train1
w2 = X_train2.inv() * y_train2
w3 = X_train3.inv() * y_train3
display(Math('w_1 = ' + latex(w1) + ', ' +
'w_2 = ' + latex(w2) + ', ' +
'w_3 = ' + latex(w3)))
y_pred1 = w1.T*Matrix(x1)
y_pred2 = w2.T*Matrix(x2)
y_pred3 = w3.T*Matrix(x3)
display(Math('y_1^{pred} = ' + latex(y_pred1) + ', ' +
'y_2^{pred} = ' + latex(y_pred2) + ', ' +
'y_3^{pred} = ' + latex(y_pred3)))
e1 = (y_pred1 - Matrix([y1])).norm()**2
e2 = (y_pred2 - Matrix([y2])).norm()**2
e3 = (y_pred3 - Matrix([y3])).norm()**2
display(Math('e_1 = ' + latex(e1) + ', ' +
'e_2 = ' + latex(e2) + ', ' +
'e_3 = ' + latex(e3)))
return (e1 + e2 + e3)/3
x1 = 1,-1
x2 = 1,rho
x3 = 1,1
y1 = 0
y2 = 1
y3 = 0
e_linear = linear_model_cv_err(x1,y1,x2,y2,x3,y3)
display(Math('e_{linear\;model} = ' + latex(e_linear)))
Explanation: The ordinary least squares (OLS) estimator for the weights is:
$$w = \left(\mathbf{X^T X}\right)^{-1}\mathbf{X^T y} = \mathbf{X^\dagger}y$$
When $\mathbf{X}$ is invertible, $\mathbf{X^\dagger} = \mathbf{X^{-1}}$, so:
$$w = \mathbf{X^{-1}}y$$
Lastly, the error is given by
$$e = \left[h(x) - y\right]^2 = \left|\mathbf{w^T x} - y\right|^2$$
Linear model
End of explanation
def const_model_cv_err(x1,y1,x2,y2,x3,y3):
X_train1 = Matrix((x2,x3))
X_train2 = Matrix((x1,x3))
X_train3 = Matrix((x1,x2))
y_train1 = Matrix((y2,y3))
y_train2 = Matrix((y1,y3))
y_train3 = Matrix((y1,y2))
w1 = Rational(y2+y3,2)
w2 = Rational(y1+y3,2)
w3 = Rational(y1+y2,2)
e1 = (w1 * Matrix([x1]) - Matrix([y1])).norm()**2
e2 = (w2 * Matrix([x2]) - Matrix([y2])).norm()**2
e3 = (w3 * Matrix([x3]) - Matrix([y3])).norm()**2
return Rational(e1 + e2 + e3,3)
x1 = 1
x2 = 1
x3 = 1
y1 = 0
y2 = 1
y3 = 0
e_const = const_model_cv_err(x1,y1,x2,y2,x3,y3)
display(Math('e_{constant\;model} = ' + latex(e_const)))
rho1 = sqrt(sqrt(3)+4)
rho2 = sqrt(sqrt(3)-1)
rho3 = sqrt(9+4*sqrt(6))
rho4 = sqrt(9-sqrt(6))
ans1 = e_linear.subs(rho,rho1).simplify()
ans2 = e_linear.subs(rho,rho2).simplify()
ans3 = e_linear.subs(rho,rho3).simplify()
ans4 = e_linear.subs(rho,rho4).simplify()
display(Math(latex(ans1) + '=' + str(ans1.evalf())))
display(Math(latex(ans2) + '=' + str(ans2.evalf())))
display(Math(latex(ans3) + '=' + str(ans3.evalf())))
display(Math(latex(ans4) + '=' + str(ans4.evalf())))
Explanation: Constant model (inverse does not work here as the matrix is not square)
End of explanation
Math(latex(Eq(6*(e_linear-e_const),0)))
Explanation: Here, we can see that the 3rd expression gives the same leave-one-out cross validation error as the constant model.
End of explanation
def get_Dmat(X,y):
n = len(X)
K = np.zeros(shape=(n,n))
for i in range(n):
for j in range(n):
K[i,j] = np.dot(X[i], X[j])
Q = np.outer(y,y)*K
return(Q)
Explanation: 3. Quadratic Programming
3.1 Background
In the notation of help(solve_qp), we wish to minimize:
$$\frac{1}{2}\mathbf{x^T G x - a^T x}$$
subject to the constraint
$$\mathbf{C^T x} \ge \mathbf{b}$$
The matrix, Q, (also called Dmat) is:
$$G = \left[\begin{array}{cccc}
y_1 y_1 \mathbf{x_1^T x_1} & y_1 y_2 \mathbf{x_1^T x_2} & \cdots & y_1 y_N \mathbf{x_1^T x_N}\\
y_2 y_1 \mathbf{x_2^T x_1} & y_2 y_2 \mathbf{x_2^T x_2} & \cdots & y_2 y_N \mathbf{x_2^T x_N}\\
\vdots & \vdots & & \vdots\\
y_N y_1 \mathbf{x_N^T x_1} & y_N y_2 \mathbf{x_N^T x_2} & \cdots & y_N y_N \mathbf{x_N^T x_N}\end{array}\right]$$
The calculation of the above matrix is implemented in the code below:
End of explanation
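# Tiny sanity check of get_Dmat (hand-picked values, for illustration only):
# for X = [[1, 1], [1, 2]] and y = [1, -1], K = [[2, 3], [3, 5]] and the y_i y_j signs
# flip the off-diagonal terms, giving Q = [[2, -3], [-3, 5]].
print(get_Dmat(np.array([[1., 1.], [1., 2.]]), np.array([1., -1.])))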
def get_GaCb(X,y, verbose=False):
n = len(X)
assert n == len(y)
G = get_Dmat(X,y)
a = np.ones(n)
C = np.vstack([y,np.eye(n)]).T
b = np.zeros(1+n)
I = np.eye(n, dtype=float)
assert G.shape == (n,n)
assert y.shape == (n,)
assert a.shape == (n,)
assert C.shape == (n,n+1)
assert b.shape == (1+n,)
assert I.shape == (n,n)
if verbose is True:
print(G)
print(C.astype(int).T)
return G,a,C,b,I
def solve_cvxopt(P, q, G, h, A, b):
P = cvxopt.matrix(P)
q = cvxopt.matrix(q)
G = cvxopt.matrix(G)
h = cvxopt.matrix(h)
A = cvxopt.matrix(A)
b = cvxopt.matrix(b)
solution = cvxopt.solvers.qp(P, q, G, h, A, b)
return solution
def create_toy_problem_1():
X = np.array(
[[ 1.0],
[ 2.0],
[ 3.0]])
y = np.array([-1,-1,1], dtype=float)
return X,y
def create_toy_problem_2():
X = np.array(
[[ 1.0, 0.0],
[ 2.0, 0.0],
[ 3.0, 0.0]])
y = np.array([-1,-1,1], dtype=float)
return X,y
def create_toy_problem_3():
X = np.array(
[[ 0.0, 0.0],
[ 2.0, 2.0],
[ 2.0, 0.0],
[ 3.0, 0.0]])
y = np.array([-1,-1,1,1], dtype=float)
return X,y
def create_toy_problem_4():
X = np.array(
[[ 0.78683463, 0.44665934],
[-0.16648517,-0.72218041],
[ 0.94398266, 0.74900882],
[ 0.45756412,-0.91334759],
[ 0.15403063,-0.75459915],
[-0.47632360, 0.02265701],
[ 0.53992470,-0.25138609],
[-0.73822772,-0.50766569],
[ 0.92590792,-0.92529239],
[ 0.08283211,-0.15199064]])
y = np.array([-1,1,-1,1,1,-1,1,-1,1,1], dtype=float)
G,a,C,b,I = get_GaCb(X,y)
assert np.allclose(G[0,:],np.array([0.818613299,0.453564930,1.077310034,0.047927947,
0.215852131,-0.364667935,-0.312547506,-0.807616753,-0.315245922,0.002712864]))
assert np.allclose(G[n-1,:],np.array([0.002712864,0.095974341,0.035650250,0.176721283,
0.127450687,0.042898544,0.082931435,-0.016011470,0.217330687,0.029962312]))
return X,y
def solve_quadratic_programming(X,y,tol=1.0e-8,method='solve_qp'):
n = len(X)
G,a,C,b,I = get_GaCb(X,y)
eigs = np.linalg.eigvals(G + tol*I)
pos_definite = np.all(eigs > 0)
if pos_definite is False:
print("Warning! Positive Definite(G+tol*I) = {}".format(pos_definite))
if method=='solve_qp':
try:
alphas, f, xu, iters, lagr, iact = quadprog.solve_qp(G + tol*I,a,C,b,meq=1)
print("solve_qp(): alphas = {} (f = {})".format(alphas,f))
return alphas
except:
print("solve_qp() failed")
else:
#solution = cvxopt.solvers.qp(G, a, np.eye(n), np.zeros(n), np.diag(y), np.zeros(n))
solution = solve_cvxopt(P=G, q=-np.ones(n),
G=-np.eye(n), h=np.zeros(n),
A=np.array([y]), b=np.zeros(1)) #A=np.diag(y), b=np.zeros(n))
if solution['status'] != 'optimal':
print("cvxopt.solvers.qp() failed")
return None
else:
alphas = np.ravel(solution['x'])
print("cvxopt.solvers.qp(): alphas = {}".format(alphas))
#ssv = alphas > 1e-5
#alphas = alphas[ssv]
#print("alphas = {}".format(alphas))
return alphas
Explanation: Calculation of dvec:
$$-\mathbf{a^T x} = \left(-\mathbf{1^T}\right)\mathbf{\alpha} = \begin{pmatrix} -1 & -1 & \dots & -1\end{pmatrix}\mathbf{\alpha}$$
is implemented as:
python
a = np.ones(n)
Calculation of Inequality constraint:
$$\mathbf{C^T x} \ge \mathbf{b}$$
via
$$\mathbf{y^T x} \ge \mathbf{0}$$
$$\mathbf{\alpha} \ge \mathbf{0}$$
where the last two constraints are implemented as:
$$\mathbf{C^T} = \begin{pmatrix}y_1 & y_2 & \dots & y_n\\
1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 1\end{pmatrix}$$
$$\mathbf{b} = \begin{pmatrix}0 \\ 0 \\ \vdots \\ 0\end{pmatrix}$$
python
C = np.vstack([y,np.eye(n)])
b = np.zeros(1+n)
End of explanation
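# Peek at the constraint matrix for the 3-point toy problem (illustration only): the first
# row of C.T is y itself (the equality constraint) and the remaining rows form the identity.
_X, _y = create_toy_problem_1()
_G, _a, _C, _b, _I = get_GaCb(_X, _y)
print(_C.T)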
#X, y = create_toy_problem_1()
X, y = create_toy_problem_3()
G,a,C,b,I = get_GaCb(X,y,verbose=True)
X, y = create_toy_problem_3()
alphas1 = solve_quadratic_programming(X,y,method='cvxopt')
alphas2 = solve_quadratic_programming(X,y,method='solve_qp')
#h = np.hstack([
# np.zeros(n),
# np.ones(n) * 999999999.0])
#A = np.array([y]) #A = cvxopt.matrix(y, (1,n))
#b = np.array([0.0]) #b = cvxopt.matrix(0.0)
def solve_cvxopt(n,P,y_output):
# Generating all the matrices and vectors
# P = cvxopt.matrix(np.outer(y_output, y_output) * K)
q = cvxopt.matrix(np.ones(n) * -1)
G = cvxopt.matrix(np.vstack([
np.eye(n) * -1,
np.eye(n)
]))
h = cvxopt.matrix(np.hstack([
np.zeros(n),
np.ones(n) * 999999999.0
]))
A = cvxopt.matrix(y_output, (1,n))
b = cvxopt.matrix(0.0)
solution = cvxopt.solvers.qp(P, q, G, h, A, b)
return solution
G = np.eye(3, 3)
a = np.array([0, 5, 0], dtype=np.double)
C = np.array([[-4, 2, 0], [-3, 1, -2], [0, 0, 1]], dtype=np.double)
b = np.array([-8, 2, 0], dtype=np.double)
xf, f, xu, iters, lagr, iact = quadprog.solve_qp(G, a, C, b)
#https://github.com/rmcgibbo/quadprog/blob/master/quadprog/tests/test_1.py
def solve_qp_scipy(G, a, C, b, meq=0):
# Minimize 1/2 x^T G x - a^T x
# Subject to C.T x >= b
def f(x):
return 0.5 * np.dot(x, G).dot(x) - np.dot(a, x)
if C is not None and b is not None:
constraints = [{
'type': 'ineq',
'fun': lambda x, C=C, b=b, i=i: (np.dot(C.T, x) - b)[i]
} for i in range(C.shape[1])]
else:
constraints = []
result = scipy.optimize.minimize(f, x0=np.zeros(len(G)), method='COBYLA',
constraints=constraints, tol=1e-10)
return result
def verify(G, a, C=None, b=None):
xf, f, xu, iters, lagr, iact = quadprog.solve_qp(G, a, C, b)
result = solve_qp_scipy(G, a, C, b)
np.testing.assert_array_almost_equal(result.x, xf)
np.testing.assert_array_almost_equal(result.fun, f)
def test_1():
G = np.eye(3, 3)
a = np.array([0, 5, 0], dtype=np.double)
C = np.array([[-4, 2, 0], [-3, 1, -2], [0, 0, 1]], dtype=np.double)
b = np.array([-8, 2, 0], dtype=np.double)
xf, f, xu, iters, lagr, iact = quadprog.solve_qp(G, a, C, b)
np.testing.assert_array_almost_equal(xf, [0.4761905, 1.0476190, 2.0952381])
np.testing.assert_almost_equal(f, -2.380952380952381)
np.testing.assert_almost_equal(xu, [0, 5, 0])
np.testing.assert_array_equal(iters, [3, 0])
np.testing.assert_array_almost_equal(lagr, [0.0000000, 0.2380952, 2.0952381])
verify(G, a, C, b)
def test_2():
G = np.eye(3, 3)
a = np.array([0, 0, 0], dtype=np.double)
C = np.ones((3, 1))
b = -1000 * np.ones(1)
verify(G, a, C, b)
verify(G, a)
def test_3():
random = np.random.RandomState(0)
G = scipy.stats.wishart(scale=np.eye(3,3), seed=random).rvs()
a = random.randn(3)
C = random.randn(3, 2)
b = random.randn(2)
verify(G, a, C, b)
verify(G, a)
test_1()
test_2()
test_3()
#https://gist.github.com/zibet/4f76b66feeb5aa24e124740081f241cb
from cvxopt import solvers
from cvxopt import matrix
def toysvm():
def to_matrix(a):
return matrix(a, tc='d')
X = np.array([
[0,2],
[2,2],
[2,0],
[3,0]], dtype=float)
y = np.array([-1,-1,1,1], dtype=float)
Qd = np.array([
[0,0,0,0],
[0,8,-4,-6],
[0,-4,4,6],
[0,-6,6,9]], dtype=float)
Ad = np.array([
[-1,-1,1,1],
[1,1,-1,-1],
[1,0,0,0],
[0,1,0,0],
[0,0,1,0],
[0,0,0,1]], dtype=float)
N = len(y)
P = to_matrix(Qd)
q = to_matrix(-(np.ones((N))))
G = to_matrix(-Ad)
h = to_matrix(np.array(np.zeros(N+2)))
sol = solvers.qp(P,q,G,h)
print(sol['x'])
#xf, f, xu, iters, lagr, iact = solve_qp(Qd, y, Ad, X)
toysvm()
Explanation: Here, the quadratic programming code is tested on a handful of 'toy problems'
End of explanation |
2,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mag Inversion
Step1
Step1: Step2
Step2: Step3 | Python Code:
cs = 25.
hxind = [(cs,5,-1.3), (cs, 31),(cs,5,1.3)]
hyind = [(cs,5,-1.3), (cs, 31),(cs,5,1.3)]
hzind = [(cs,5,-1.3), (cs, 30),(cs,5,1.3)]
mesh = Mesh.TensorMesh([hxind, hyind, hzind], 'CCC')
Explanation: Mag Inversion
Step1: Generating mesh
End of explanation
chibkg = 1e-5
chiblk = 0.1
chi = np.ones(mesh.nC)*chibkg
sph_ind = spheremodel(mesh, 0., 0., -150., 80)
chi[sph_ind] = chiblk
active = mesh.gridCC[:,2]<0
actMap = Maps.ActiveCells(mesh, active, chibkg)
dweight = np.ones(mesh.nC)
dweight[active] = (1/abs(mesh.gridCC[active, 2]-13.)**1.5)
baseMap = BaseMag.BaseMagMap(mesh)
depthMap = BaseMag.WeightMap(mesh, dweight)
dmap = baseMap*actMap
rmap = depthMap*actMap
model = (chi)[active]
sph_ind_ini = spheremodel(mesh, 0., 0., -200., 150)
chi_ini = np.ones_like(chi)*chibkg
chi_ini[sph_ind_ini] = chiblk*0.1
fig, ax = plt.subplots(1,1, figsize = (5, 5))
dat1 = mesh.plotSlice(chi, ax = ax, normal = 'X')
plt.colorbar(dat1[0], orientation="horizontal", ax = ax)
ax.set_ylim(-500, 0)
print model.shape
print chi.shape
Explanation: Step2: Generating Model: Use Combo model
Here we combined $\mu$ model$^1$, Depth model$^2$ and Active model$^3$
End of explanation
survey = BaseMag.BaseMagSurvey()
const = 20
Inc = 90.
Dec = 0.
Btot = 51000
survey.setBackgroundField(Inc, Dec, Btot)
xr = np.linspace(-300, 300, 81)
yr = np.linspace(-300, 300, 81)
X, Y = np.meshgrid(xr, yr)
Z = np.ones((xr.size, yr.size))*(0.)
rxLoc = np.c_[Utils.mkvc(X), Utils.mkvc(Y), Utils.mkvc(Z)]
survey.rxLoc = rxLoc
prob = MagneticsDiffSecondary(mesh, mapping = dmap)
prob.pair(survey)
prob.Solver = Utils.SolverUtils.SolverWrapD(Mumps, factorize=True)
dsyn = survey.dpred(model)
survey.dtrue = Utils.mkvc(dsyn)
std = 0.05
noise = std*abs(survey.dtrue)*np.random.randn(*survey.dtrue.shape)
survey.dobs = survey.dtrue+noise
survey.std = survey.dobs*0 + std
fig, ax = plt.subplots(1,2, figsize = (8,5) )
dat = ax[0].imshow(np.reshape(noise, (xr.size, yr.size), order='F'), extent=[min(xr), max(xr), min(yr), max(yr)])
plt.colorbar(dat, ax = ax[0], orientation="horizontal")
dat2 = ax[1].imshow(np.reshape(survey.dobs, (xr.size, yr.size), order='F'), extent=[min(xr), max(xr), min(yr), max(yr)])
plt.colorbar(dat2, ax = ax[1], orientation="horizontal")
plt.show()
# m0 = (1e-5*np.ones(mesh.nC))[active]
m0 = chi_ini[active]/dweight[active]
dmisfit = DataMisfit.l2_DataMisfit(survey)
valmin = abs(survey.dobs).max()
dmisfit.Wd = 1/(np.ones(survey.dobs.size)*valmin)
d_ini = survey.dpred(m0)
fig, ax = plt.subplots(1,2, figsize = (8,5) )
dat1 = ax[0].imshow(np.reshape(d_ini, (xr.size, yr.size), order='F'), extent=[min(xr), max(xr), min(yr), max(yr)])
vmin = d_ini.min()
vmax = d_ini.max()
plt.colorbar(dat1, ax = ax[0], orientation="horizontal", ticks=[np.linspace(vmin, vmax, 3)], format = FormatStrFormatter('$%5.5f$'))
dat2 = ax[1].imshow(np.reshape(survey.dobs, (xr.size, yr.size), order='F'), extent=[min(xr), max(xr), min(yr), max(yr)])
vmin = survey.dobs.min()
vmax = survey.dobs.max()
plt.colorbar(dat2, ax = ax[1], orientation="horizontal", ticks=[np.linspace(vmin, vmax, 5)])
plt.show()
reg = Regularization.Tikhonov(mesh, mapping = rmap)
opt = Optimization.ProjectedGNCG(maxIter = 30)
opt.lower = 1e-10
opt.maxIterLS = 50
invProb = InvProblem.BaseInvProblem(dmisfit, reg, opt)
beta = Directives.BetaSchedule(coolingFactor=8, coolingRate=2)
betaest = Directives.BetaEstimate_ByEig(beta0_ratio=10**0)
inv = Inversion.BaseInversion(invProb, directiveList=[beta,betaest])
opt.tolG = 1e-20
opt.eps = 1e-20
reg.alpha_s = 1e-9
reg.alpha_x = 1.
reg.alpha_y = 1.
reg.alpha_z = 1.
prob.counter = opt.counter = Utils.Counter()
opt.LSshorten = 0.1
opt.remember('xc')
mopt = inv.run(m0)
opt.counter.summary()
xc = opt.recall('xc')
from JSAnimation import IPython_display
from matplotlib import animation
from SimPEG import *
fig, ax = subplots(1,2, figsize = (16, 5))
ax[0].set_xlabel('Easting (m)')
ax[0].set_ylabel('Depth (m)')
ax[1].set_xlabel('Easting (m)')
ax[1].set_ylabel('Depth (m)')
def animate(i_id):
indx = 18
temp = dmap*(xc[i_id])
minval = (temp).min()
maxval = (temp).max()
frame1 = mesh.plotSlice(temp, vType='CC', ind=indx, normal='X',ax = ax[1], grid=False, gridOpts={'color':'b','lw':0.3, 'alpha':0.5}, )
frame2 = mesh.plotSlice(chi, vType='CC', ind=indx, normal='X',ax = ax[0], grid=False, gridOpts={'color':'b','lw':0.3, 'alpha':0.5}, );
ax[0].set_title('True model', fontsize = 16)
ax[1].set_title('Estimated model at iteration = ' + str(i_id+1), fontsize = 16)
ax[0].set_ylim(-500, 0)
ax[1].set_ylim(-500, 0)
return frame1[0]
animation.FuncAnimation(fig, animate, frames=10, interval=40, blit=True)
import matplotlib
matplotlib.rcParams.update({'font.size': 14, 'text.usetex': True, 'font.family': 'arial'})
indx = 18
iteration = 9
fig, axes = subplots(1,2, figsize = (12, 5))
vmin = chi.min()
vmax = chi.max()
ps1 = mesh.plotSlice(chi, vType='CC', ind=indx, normal='X',ax = axes[0], grid=True, gridOpts={'color':'b','lw':0.3, 'alpha':0.5});
axes[0].set_title('$\chi_{true}$', fontsize = 16)
axes[0].set_ylim(-500, 0.)
cb1 = colorbar(ps1[0], ax = axes[0], orientation="horizontal", ticks=[np.linspace(vmin, vmax, 5)], format = FormatStrFormatter('$%5.3f$'))
axes[0].set_xlabel('Easting (m)')
axes[0].set_ylabel('Depth (m)')
vmin = (actMap*xc[iteration]).min()
vmax = (actMap*xc[iteration]).max()
ps2 = mesh.plotSlice(actMap*xc[iteration], vType='CC', ind=indx, normal='X', ax = axes[1], grid=True, gridOpts={'color':'b','lw':0.3, 'alpha':0.5});
axes[1].set_title('$\chi_{pred}$', fontsize = 16)
axes[1].set_ylim(-500, 0.)
cb2 = colorbar(ps2[0], ax = axes[1], orientation="horizontal", ticks=[np.linspace(vmin, vmax, 5)], format = FormatStrFormatter('$%5.3f$'))
cb1.set_label('Susceptibility (dimensionless)')
cb2.set_label('Susceptibility (dimensionless)')
axes[1].set_xlabel('Easting (m)')
axes[1].set_ylabel('Depth (m)')
fig.savefig('model.png', dpi = 200)
dpred_xc = survey.dpred(xc[iteration])
fig, ax = plt.subplots(1,2, figsize = (12,7) )
vmin = survey.dobs.min()
vmax = survey.dobs.max()
dat2 = ax[0].imshow(np.reshape(survey.dobs, (xr.size, yr.size), order='F'), extent=[min(xr), max(xr), min(yr), max(yr)], vmin = vmin, vmax = vmax)
cb1 = plt.colorbar(dat2, ax = ax[0], orientation="horizontal", ticks=[np.linspace(vmin, vmax, 5)])
dat = ax[1].imshow(np.reshape(dpred_xc, (xr.size, yr.size), order='F'), extent=[min(xr), max(xr), min(yr), max(yr)], vmin = vmin, vmax = vmax)
cb2 = plt.colorbar(dat, ax = ax[1], orientation="horizontal", ticks=[np.linspace(vmin, vmax, 5)])
ax[0].plot(rxLoc[:,0],rxLoc[:,1],'w.', ms=1)
ax[1].plot(rxLoc[:,0],rxLoc[:,1],'w.', ms=1)
ax[0].set_title('Observed', fontsize = 16)
ax[1].set_title('Predicted', fontsize = 16)
ax[0].set_xlabel('Easting (m)')
ax[0].set_ylabel('Northing (m)')
ax[1].set_xlabel('Easting (m)')
ax[1].set_ylabel('Northing (m)')
cb1.set_label('Total magnetic intensity (nT)')
cb2.set_label('Total magnetic intensity (nT)')
fig.savefig('obspred.png', dpi = 200)
Explanation: Step3: Generating Data
End of explanation |
2,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Updated NOAA Data
Looks like NOAA technically has data back to 1946, but the first actual precipitation reading is on September 24, 1970
Step1: N-Year Metrics
Using rolling time series in pandas to find n-year events. First looking at some for 6 hour interval.
The rolling sum here calculates the sum of observations over a given number of observations over time. Since each observation here is an hour, the window we provide is a number of hours. Each row is then the sum of observations over that number of hours.
If we had the following rows
Step2: Notes on Initial Results
Because it's looking for a count of intervals, the initial counts of events returned could include more than one event per storm. For example, if one storm lasted 8 hours from 1pm to 9pm and rained relatively consistently throughout at a 5-year event level and we're looking for 6 hour intervals, it could count for as many as 3 events. | Python Code:
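# pandas is assumed to be imported for the DataFrame/rolling calls below, and rain_df is
# assumed to be an hourly NOAA precipitation DataFrame (indexed by timestamp) loaded in an earlier cell.
import pandas as pd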
rain_df = rain_df['1970-09-01':]
rain_df.head()
# Resample the dataframe into one-hour increments, taking the max because when accumulation is listed more often than hourly (i.e.
# every 15 minutes), each value is the running total of precipitation since the hour began
# Description: http://www1.ncdc.noaa.gov/pub/data/cdo/documentation/LCD_documentation.pdf
chi_rain_series = rain_df['HOURLYPrecip'].resample('1H').max()
print(chi_rain_series.count())
chi_rain_series.head()
Explanation: Updated NOAA Data
Looks like NOAA technically has data back to 1946, but the first actual precipitation reading is on September 24, 1970
End of explanation
roll_6_hr = chi_rain_series.rolling(window=6)
roll_6_hr.sum().plot()
Explanation: N-Year Metrics
Using rolling time series in pandas to find n-year events. First looking at some for 6 hour interval.
The rolling sum here calculates the sum of observations over a given number of observations over time. Since each observation here is an hour, the window we provide is a number of hours. Each row is then the sum of observations over that number of hours.
If we had the following rows:
1pm: 2
2pm: 3
3pm: 1
4pm: 5
And we calculate the rolling sum with a window of 2 hours, the results will be:
1pm: NaN (because we only have one observation at this point)
2pm: 5
3pm: 4
4pm: 6
Details of the specific cutoffs for each level of n-year storm can be found here: Rainfall Frequency Information Illinois
End of explanation
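# Reproducing the worked example above (toy values only; the timestamps are arbitrary):
# a 2-hour rolling sum over 2, 3, 1, 5 gives NaN, 5, 4, 6.
toy = pd.Series([2, 3, 1, 5],
                index=pd.date_range('2016-07-01 13:00', periods=4, freq='H'))
print(toy.rolling(window=2).sum())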
roll_6 = pd.DataFrame(roll_6_hr.sum())
print('For 6-hour intervals')
print('{} 1-year events for Northeast Illinois'.format(len(roll_6[(roll_6['HOURLYPrecip'] >= 1.88) &
(roll_6['HOURLYPrecip'] < 2.28)])))
print('{} 2-year events for Northeast Illinois'.format(len(roll_6[(roll_6['HOURLYPrecip'] >= 2.28) &
(roll_6['HOURLYPrecip'] < 2.85)])))
print('{} 5-year events for Northeast Illinois'.format(len(roll_6[(roll_6['HOURLYPrecip'] >= 2.85) &
(roll_6['HOURLYPrecip'] < 3.35)])))
print('{} 10-year events for Northeast Illinois'.format(len(roll_6[(roll_6['HOURLYPrecip'] >= 3.35) &
(roll_6['HOURLYPrecip'] < 4.13)])))
print('{} 25-year events for Northeast Illinois'.format(len(roll_6[(roll_6['HOURLYPrecip'] >= 4.13) &
(roll_6['HOURLYPrecip'] < 4.90)])))
print('{} 50-year events for Northeast Illinois'.format(len(roll_6[(roll_6['HOURLYPrecip'] >= 4.90) &
(roll_6['HOURLYPrecip'] < 5.69)])))
print('{} 100-year events for Northeast Illinois'.format(len(roll_6[roll_6['HOURLYPrecip'] >= 5.69])))
roll_6_1yr = roll_6[(roll_6['HOURLYPrecip'] >= 1.88) & (roll_6['HOURLYPrecip'] < 2.28)]
print('{} days with 1-year events in Northeast Illinois'.format(len(roll_6_1yr.groupby(roll_6_1yr.index.date))))
roll_6_1yr.sort_values(by=['HOURLYPrecip'], ascending=False, inplace=True)
roll_6_1yr.head()
# Many of these are from the same days, but over slightly different intervals as mentioned before
roll_6_2yr = roll_6[(roll_6['HOURLYPrecip'] >= 2.28) & (roll_6['HOURLYPrecip'] < 2.85)]
print('{} days with 2-year events in Northeast Illinois'.format(len(roll_6_2yr.groupby(roll_6_2yr.index.date))))
roll_6_2yr.sort_values(by=['HOURLYPrecip'], ascending=False, inplace=True)
roll_6_2yr.head()
# Helper function taking the series, window, and list of cutoffs to make this quicker, returns the subset
def rolling_results(rain_series, window, rain_cutoffs):
window_df = pd.DataFrame(rain_series.rolling(window=window).sum())
print('For {}-hour intervals'.format(window))
print('{} 1-year events for Northeast Illinois'.format(len(window_df[(window_df['HOURLYPrecip'] >= rain_cutoffs[0]) &
(window_df['HOURLYPrecip'] < rain_cutoffs[1])])))
print('{} 2-year events for Northeast Illinois'.format(len(window_df[(window_df['HOURLYPrecip'] >= rain_cutoffs[1]) &
(window_df['HOURLYPrecip'] < rain_cutoffs[2])])))
print('{} 5-year events for Northeast Illinois'.format(len(window_df[(window_df['HOURLYPrecip'] >= rain_cutoffs[2]) &
(window_df['HOURLYPrecip'] < rain_cutoffs[3])])))
print('{} 10-year events for Northeast Illinois'.format(len(window_df[(window_df['HOURLYPrecip'] >= rain_cutoffs[3]) &
(window_df['HOURLYPrecip'] < rain_cutoffs[4])])))
print('{} 25-year events for Northeast Illinois'.format(len(window_df[(window_df['HOURLYPrecip'] >= rain_cutoffs[4]) &
(window_df['HOURLYPrecip'] < rain_cutoffs[5])])))
print('{} 50-year events for Northeast Illinois'.format(len(window_df[(window_df['HOURLYPrecip'] >= rain_cutoffs[5]) &
(window_df['HOURLYPrecip'] < rain_cutoffs[6])])))
print('{} 100-year events for Northeast Illinois'.format(len(window_df[window_df['HOURLYPrecip'] >= rain_cutoffs[6]])))
# Gets the subset of the dataframe for the given cutoff index (i.e. 5 year is the third, so cutoff_index would be 3)
def rolling_subset(rain_series, window, rain_cutoffs, cutoff_index):
window_df = pd.DataFrame(rain_series.rolling(window=window).sum())
if cutoff_index <= 6:
        return window_df[(window_df['HOURLYPrecip'] >= rain_cutoffs[cutoff_index - 1]) & (window_df['HOURLYPrecip'] < rain_cutoffs[cutoff_index])]
if cutoff_index == 7:
return window_df[window_df['HOURLYPrecip'] >= rain_cutoffs[cutoff_index -1]]
cutoffs_12hr = [2.18, 2.64, 3.31, 3.89, 4.79, 5.6, 6.59]
rolling_results(chi_rain_series, 12, cutoffs_12hr)
roll_12_2yr = rolling_subset(chi_rain_series, 12, cutoffs_12hr, 2)
print('{} days with 2-year events for 12 hrs in Northeast Illinois'.format(len(roll_12_2yr.groupby(roll_12_2yr.index.date))))
cutoffs_24hr = [2.51, 3.04, 3.80, 4.47, 5.51, 6.46, 7.58]
rolling_results(chi_rain_series, 24, cutoffs_24hr)
roll_24_1yr = rolling_subset(chi_rain_series, 24, cutoffs_24hr, 1)
print('{} days with 1-year events for 24 hrs in Northeast Illinois'.format(len(roll_24_1yr.groupby(roll_24_1yr.index.date))))
cutoffs_48hr = [2.70, 3.30, 4.09, 4.81, 5.88, 6.84, 8.16]
rolling_results(chi_rain_series, 48, cutoffs_48hr)
roll_48_1yr = rolling_subset(chi_rain_series, 48, cutoffs_48hr, 1)
print('{} days with 1-year events for 48 hrs in Northeast Illinois'.format(len(roll_48_1yr.groupby(roll_48_1yr.index.date))))
Explanation: Notes on Initial Results
Because it's looking for a count of intervals, the initial counts of events returned could include more than one event per storm. For example, if one storm lasted 8 hours from 1pm to 9pm and rained relatively consistently throughout at a 5-year event level and we're looking for 6 hour intervals, it could count for as many as 3 events.
End of explanation |
2,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Natural Neighbor Verification
Walks through the steps of Natural Neighbor interpolation to validate that the algorithmic
approach taken in MetPy is correct.
Find natural neighbors visual test
A triangle is a natural neighbor for a point if the
circumscribed circle <https
Step1: For a test case, we generate 10 random points and observations, where the
observation values are just the x coordinate value times the y coordinate
value divided by 1000.
We then create two test points (grid 0 & grid 1) at which we want to
estimate a value using natural neighbor interpolation.
The locations of these observations are then used to generate a Delaunay triangulation.
Step2: Using the circumcenter and circumcircle radius information from
Step3: What?....the circle from triangle 8 looks pretty darn close. Why isn't
grid 0 included in that circle?
Step4: Lets do a manual check of the above interpolation value for grid 0 (southernmost grid)
Grab the circumcenters and radii for natural neighbors
Step6: Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram
<https
Step7: Put all of the generated polygon areas and their affiliated values in arrays.
Calculate the total area of all of the generated polygons.
Step8: For each polygon area, calculate its percent of total area.
Step9: Multiply the percent of total area by the respective values.
Step10: The sum of this array is the interpolation value!
Step11: The values are slightly different due to truncating the area values in
the above visual example to the 3rd decimal place. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d
from scipy.spatial.distance import euclidean
from metpy.gridding import polygons, triangles
from metpy.gridding.interpolation import nn_point
Explanation: Natural Neighbor Verification
Walks through the steps of Natural Neighbor interpolation to validate that the algorithmic
approach taken in MetPy is correct.
Find natural neighbors visual test
A triangle is a natural neighbor for a point if the
circumscribed circle <https://en.wikipedia.org/wiki/Circumscribed_circle>_ of the
triangle contains that point. It is important that we correctly grab the correct triangles
for each point before proceeding with the interpolation.
Algorithmically:
We place all of the grid points in a KDTree. These provide worst-case O(n) time
complexity for spatial searches.
We generate a Delaunay Triangulation <https://docs.scipy.org/doc/scipy/
reference/tutorial/spatial.html#delaunay-triangulations>_
using the locations of the provided observations.
For each triangle, we calculate its circumcenter and circumradius. Using
KDTree, we then assign each grid a triangle that has a circumcenter within a
circumradius of the grid's location.
The resulting dictionary uses the grid index as a key and a set of natural
neighbor triangles in the form of triangle codes from the Delaunay triangulation.
This dictionary is then iterated through to calculate interpolation values.
We then traverse the ordered natural neighbor edge vertices for a particular
grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate
proportional polygon areas.
Circumcenter of (n - 1), n, grid_location
Circumcenter of (n + 1), n, grid_location
Determine what existing circumcenters (ie, Delaunay circumcenters) are associated
with vertex n, and add those as polygon vertices. Calculate the area of this polygon.
Increment the current edges to be checked, i.e.:
n - 1 = n, n = n + 1, n + 1 = n + 2
Repeat steps 5 & 6 until all of the edge combinations of 3 have been visited.
Repeat steps 4 through 7 for each grid cell.
End of explanation
np.random.seed(100)
pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = (pts[:, 0] * pts[:, 0]) / 1000
tri = Delaunay(pts)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
delaunay_plot_2d(tri, ax=ax)
for i, zval in enumerate(zp):
ax.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))
sim_gridx = [30., 60.]
sim_gridy = [30., 60.]
ax.plot(sim_gridx, sim_gridy, '+', markersize=10)
ax.set_aspect('equal', 'datalim')
ax.set_title('Triangulation of observations and test grid cell '
'natural neighbor interpolation values')
members, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))
val = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)
ax.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))
val = nn_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1], tri_info)
ax.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1]))
Explanation: For a test case, we generate 10 random points and observations, where the
observation values are just the square of the x coordinate divided by 1000.
We then create two test points (grid 0 & grid 1) at which we want to
estimate a value using natural neighbor interpolation.
The locations of these observations are then used to generate a Delaunay triangulation.
End of explanation
def draw_circle(ax, x, y, r, m, label):
th = np.linspace(0, 2 * np.pi, 100)
nx = x + r * np.cos(th)
ny = y + r * np.sin(th)
ax.plot(nx, ny, m, label=label)
members, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
delaunay_plot_2d(tri, ax=ax)
ax.plot(sim_gridx, sim_gridy, 'ks', markersize=10)
for i, info in tri_info.items():
x_t = info['cc'][0]
y_t = info['cc'][1]
if i in members[1] and i in members[0]:
draw_circle(ax, x_t, y_t, info['r'], 'm-', str(i) + ': grid 1 & 2')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[0]:
draw_circle(ax, x_t, y_t, info['r'], 'r-', str(i) + ': grid 0')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[1]:
draw_circle(ax, x_t, y_t, info['r'], 'b-', str(i) + ': grid 1')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
else:
draw_circle(ax, x_t, y_t, info['r'], 'k:', str(i) + ': no match')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=9)
ax.set_aspect('equal', 'datalim')
ax.legend()
Explanation: Using the circumcenter and circumcircle radius information from
:func:metpy.gridding.triangles.find_natural_neighbors, we can visually
examine the results to see if they are correct.
End of explanation
x_t, y_t = tri_info[8]['cc']
r = tri_info[8]['r']
print('Distance between grid0 and Triangle 8 circumcenter:',
euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))
print('Triangle 8 circumradius:', r)
Explanation: What?....the circle from triangle 8 looks pretty darn close. Why isn't
grid 0 included in that circle?
End of explanation
cc = np.array([tri_info[m]['cc'] for m in members[0]])
r = np.array([tri_info[m]['r'] for m in members[0]])
print('circumcenters:\n', cc)
print('radii\n', r)
Explanation: Let's do a manual check of the above interpolation value for grid 0 (southernmost grid)
Grab the circumcenters and radii for natural neighbors
End of explanation
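As a quick sanity check (a small sketch reusing the arrays just printed), grid 0 should fall inside every one of these circumcircles:
grid0 = np.array([sim_gridx[0], sim_gridy[0]])
inside = np.hypot(*(cc - grid0).T) < r
print(inside)   # expected: True for each natural-neighbor triangle of grid 0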
vor = Voronoi(list(zip(xp, yp)))
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
voronoi_plot_2d(vor, ax=ax)
nn_ind = np.array([0, 5, 7, 8])
z_0 = zp[nn_ind]
x_0 = xp[nn_ind]
y_0 = yp[nn_ind]
for x, y, z in zip(x_0, y_0, z_0):
ax.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))
ax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)
ax.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))
ax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',
label='natural neighbor\ncircumcenters')
for center in cc:
ax.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),
xy=(center[0] + 1, center[1] + 1))
tris = tri.points[tri.simplices[members[0]]]
for triangle in tris:
x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]
y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]]
ax.plot(x, y, ':', linewidth=2)
ax.legend()
ax.set_aspect('equal', 'datalim')
def draw_polygon_with_info(ax, polygon, off_x=0, off_y=0):
    """Draw one of the natural neighbor polygons with some information."""
pts = np.array(polygon)[ConvexHull(polygon).vertices]
for i, pt in enumerate(pts):
ax.plot([pt[0], pts[(i + 1) % len(pts)][0]],
[pt[1], pts[(i + 1) % len(pts)][1]], 'k-')
avex, avey = np.mean(pts, axis=0)
ax.annotate('area: {:.3f}'.format(polygons.area(pts)), xy=(avex + off_x, avey + off_y),
fontsize=12)
cc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc1, cc2])
cc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)
cc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15)
cc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2])
Explanation: Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram
<https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams>_
which serves as a complementary (but not necessary)
spatial data structure that we use here simply to show areal ratios.
Notice that the two natural neighbor triangle circumcenters are also vertices
in the Voronoi plot (green dots), and the observations are in the polygons (blue dots).
End of explanation
areas = np.array([60.434, 448.296, 25.916, 70.647])
values = np.array([0.064, 1.156, 2.809, 0.225])
total_area = np.sum(areas)
print(total_area)
Explanation: Put all of the generated polygon areas and their affiliated values in arrays.
Calculate the total area of all of the generated polygons.
End of explanation
proportions = areas / total_area
print(proportions)
Explanation: For each polygon area, calculate its percent of total area.
End of explanation
contributions = proportions * values
print(contributions)
Explanation: Multiply the percent of total area by the respective values.
End of explanation
interpolation_value = np.sum(contributions)
function_output = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)
print(interpolation_value, function_output)
Explanation: The sum of this array is the interpolation value!
End of explanation
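The same manual computation can be condensed into a single weighted average (a small sketch reusing the areas and values arrays defined above):
print(np.dot(areas / areas.sum(), values))   # matches interpolation_value above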
plt.show()
Explanation: The values are slightly different due to truncating the area values in
the above visual example to the 3rd decimal place.
End of explanation |
2,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sources and receivers
Defining the source and receiver positions is necessary for any seismic simulation or inversion problem. This notebook shows how to do so and presents the different functionalities allowed by SeisCL.
Step1: We explain these concepts with a very simple model to begin with. Let's start by setting up the relevant constants of a simple 2D model.
Step2: Structure of the position arrays
In SeisCL, the source and receiver information is defined in two arrays, src_pos_all and rec_pos_all, of shape [ 5 x number of sources ] and [ 8 x number of receivers ]. Each entry in the source array must have the elements [sx, sy, sz, srcid, src_type]
<br>
| Src_pos_all input | Description |
| :-: | :-: |
| sx | Position of the source in X |
| sy | Position of the source in Y |
| sz | Position of the source in Z |
| srcid | Source ID: sources sharing the same ID are fired simultaneously |
| src_type | Type of the source: 0: Force in X, 1: Force in Y, 2: Force in Z, 100: Explosive |
Step3: Each entry in the receiver array must have the elements [gx, gy, gz, srcid, recid, , , __]
| Rec_pos_all input | Description |
| :-: | :-: |
| gx | Position of the receiver in X |
| gy | Position of the receiver in Y |
| gz | Position of the receiver in Z |
| srcid | Id of the source related to this receiver |
| recid | Trace number (unique, starting at 1) |
| -- | Blank |
| -- | Blank |
| -- | Blank |
Step4: Let's vizualize the source and receiver positions.
Step5: Defining the source signature
Once source positions are defined, we need to define the source signature of each source. The array SeisCL.src_all contains the source signatures and has the shape [NT x nb_srcs]. If not defined, SeisCL will fill that array automatically with a Ricker wavelet with a central frequency of seis.f0. For now, the attribute SeisCL.src_all is empty
Step6: Upon calling SeisCL.set_forward, the src signature is filled up with a Ricker Wavelet
Step7: If another source function is needed, we can always redefine src_all to whatever we would like.
Step8: Once src_all is defined, the SeisCL.set_forward method does not overwrite it
Step9: ## Performing the simulation
We now have everything we need, so we can run the simulation and show the result.
Step10: Choosing the output type
The receivers can record different types of measurements, like pressure to simulate hydrophones, or particle velocities to simulate geophones. The type of measurements is controlled by a global parameter, SeisCL.seisout.
The possible values are
Step11: This time, the output of SeisCL.read_data is a list with two elements, particle velocities in x and z.
Step12: Computing a selection over multiple shots
For now, we have only shown examples with one shot position. Let's now show how to have multiple shots, and compute only a subsample of them. This can be useful for stochastic optimization, as shown by Fabien-Ouellet et al. ( 2017).
Let's define 4 shot positions.
Step13: Let's reuse the same receiver position as the previous example. In this case, we have to repeat them for each shot.
Step14: Say we want to only compute shots 0 and 2. We can easily do that by passing that information to set_forward
Step15: Note that data shape is [NT X ntraces], which means that all traces of all shots are concatenated in the same array. Let's resort according to the shot position.
Step16: Simultaneous shots
SeisCL also allows firing different shots simultaneously. In fact, all shots sharing the same srcid are fired at the same time.
Let's show that by redefining the source array.
Step17: Similarly, we redefine the receiver positions.
Step18: We then compute the shots for the srcids 0 and 1.
Step19: Two acquisitions have been simulated, with two sources per acquisition. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from SeisCL import SeisCL
seis = SeisCL()
Explanation: Sources and receivers
Defining the source and receiver positions is necessary for any seismic simulation or inversion problem. This notebook shows how to do so and presents the different functionalities allowed by SeisCL.
End of explanation
seis.N = np.array([200, 500])
seis.dt = dt = 0.25e-03
seis.dh = dh = 2
seis.NT = NT = 1500
model = {"vp": np.full(seis.N, 3500),
"rho": np.full(seis.N, 2000),
"vs": np.full(seis.N, 2000)}
Explanation: We explain these concepts with a very simple model to begin with. Let's start by setting up the relevant constants of a simple 2D model.
End of explanation
Nz = seis.csts['N'][0]
Nx = seis.csts['N'][1]
sx = seis.N[1] // 2 * dh
sy = 0
sz = seis.N[0] // 2 * dh
srcid = 0
src_type = 100
seis.src_pos_all = np.stack([[sx], [sy], [sz], [srcid], [src_type]], axis=0)
Explanation: Structure of the position arrays
In SeisCL, the source and receiver information is defined in two arrays, src_pos_all and rec_pos_all, of shape [ 5 x number of sources ] and [ 8 x number of receivers ]. Each entry in the source array must have the elements [sx, sy, sz, srcid, src_type]
<br>
| Src_pos_all input | Description |
| :-: | :-: |
| sx | Position of the source in X |
| sy | Position of the source in Y |
| sz | Position of the source in Z |
| srcid | Source ID: each receiver is associated with one source ID <br> Sources with the same ID are fired simultaneously |
| src_type | Type of the source : $\hspace{0.1cm}$ 0 : Force in X<br> $\hspace{3cm}$ 1 : Force in Y<br> $\hspace{3cm}$ 2 : Force in Z <br> $\hspace{2.6cm}$ 100 : Explosive |
As can be seen, SeisCL supports two different types of sources with the entry src_type, an explosive source (or pressure source), and a directed force. Note that two sources with the same srcid will be fired simultaneously.
In the following, we define a single shot position.
End of explanation
gx = np.arange(seis.nab + 10, seis.N[1] - seis.nab -10, 5) * dh
gy = gx * 0
gz = gx * 0 + (seis.nab + 10) * dh
gsid = gx*0
recid = np.arange(0, len(gx)) + 1
blank = gx*0
seis.rec_pos_all = np.stack([gx, gy, gz, gsid, recid, blank, blank, blank], axis=0)
Explanation: Each entry in the receiver array must have the elements [gx, gy, gz, srcid, recid, , , __]
| Rec_pos_all input | Description |
| :-: | :-: |
| gx | Position of the receiver in X |
| gy | Position of the receiver in Y |
| gz | Position of the receiver in Z |
| srcid | Id of the source related to this receiver |
| recid | Trace number |
| -- | Blank |
| -- | Blank |
| -- | Blank |
It is important to understand that the receivers for each source must appear in rec_pos_all, even if they are located at the same position. That is why the srcid is necessary. This allows to have different receiver configuration for each source.
Note also the recid field. This must be unique for each receiver, and begin at 1 for the first receiver. Why is this needed ? This is related to the _all in src_pos_all and rec_pos_all. Suppose that you have a very large survey, but that you only want to compute the gradient for a small subset of the dataset. To do so, you can select only the desired srcid, and pass that to SeisCL. However, when computing the gradient, it is convenient to keep the trace number of the traces we need to read in the observed data file. That is the purpose of the recid field: it thus of utmost importance that it is unique for each trace and starts at 1 for the first trace in the file. Hence, the arrays src_pos_all and rec_pos_all should contain all sources and receiver positions of the survey.
Let's define the receiver on top of the model.
End of explanation
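To make these conventions concrete, here is a minimal helper sketch (build_rec_pos is not part of SeisCL; it simply repeats one receiver spread for several srcids and assigns unique, 1-based recids):
def build_rec_pos(gx, gy, gz, srcids):
    n = len(gx)
    gx_all = np.tile(gx, len(srcids))
    gy_all = np.tile(gy, len(srcids))
    gz_all = np.tile(gz, len(srcids))
    gsid = np.repeat(srcids, n)                 # one srcid per repeated spread
    recid = np.arange(1, n * len(srcids) + 1)   # unique trace numbers, starting at 1
    blank = np.zeros(n * len(srcids))
    return np.stack([gx_all, gy_all, gz_all, gsid, recid, blank, blank, blank], axis=0)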
_, ax = plt.subplots(1, 1, figsize = (16,8))
seis.DrawDomain2D(model['vp'], ax = ax, showsrcrec = True, showabs = True)
Explanation: Let's visualize the source and receiver positions.
End of explanation
print(seis.src_all)
Explanation: Defining the source signature
Once source positions are defined, we need to define the source signature of each source. The array SeisCL.src_all contains the source signatures and has the shape [NT x nb_srcs]. If not defined, SeisCL will fill that array automatically with a Ricker wavelet with a central frequency of seis.f0. For now, the attribute SeisCL.src_all is empty:
End of explanation
seis.set_forward([0], model, withgrad=False)
plt.plot(seis.src_all)
plt.show()
Explanation: Upon calling SeisCL.set_forward, the src signature is filled up with a Ricker Wavelet
End of explanation
seis.src_all[:, 0] = seis.ricker_wavelet(f0=seis.f0/1.5)
plt.plot(seis.src_all)
plt.show()
Explanation: If another source function is needed, we can always redefine src_all to whatever we would like.
End of explanation
seis.set_forward([0], model, withgrad=False)
plt.plot(seis.src_all)
plt.show()
Explanation: Once src_all is defined, the SeisCL.set_forward method does not overwrite it:
End of explanation
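For example, a fully custom signature can be prepared before calling set_forward (a hedged sketch; the Gaussian-derivative wavelet below is only an illustration, not a SeisCL built-in):
t = np.arange(seis.NT) * seis.dt                 # time axis of the simulation
t0 = 1.5 / seis.f0                               # delay so the wavelet starts near zero
wavelet = -(t - t0) * np.exp(-(np.pi * seis.f0 * (t - t0)) ** 2)
# assigning it would replace the Ricker used below: seis.src_all = wavelet[:, None]
plt.plot(wavelet)
plt.show()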
seis.seisout = 2
seis.set_forward([0], model, withgrad=False)
seis.execute()
data = seis.read_data()[0]
fig, ax = plt.subplots(1, 1, figsize=[8, 5])
extent = [seis.rec_pos_all[0,0], seis.rec_pos_all[0,-1], seis.NT*dt, 0]
clip = 0.1
vmax = np.max(data) * clip
vmin = -vmax
ax.imshow(data, aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
ax.set_title("Pressure", fontsize=14, fontweight='bold')
ax.set_xlabel("Position of geophone (m)")
ax.set_ylabel("Time (ms)")
plt.show()
Explanation: ## Performing the simulation
We now have everything we need, so we can run the simulation and show the result.
End of explanation
seis.seisout = 1
seis.set_forward([0], model, withgrad=False)
seis.execute()
data = seis.read_data()
Explanation: Choosing the output type
The receivers can record different types of measurements, like pressure to simulate hydrophones, or particle velocities to simulate geophones. The type of measurements is controlled by a global parameter, SeisCL.seisout.
The possible values are:
1: output velocities,
2: output pressure,
3: output stresses and velocities
We can perform the previous computation, this time outputting the velocities.
End of explanation
fig, axs = plt.subplots(1, 2, figsize=[16, 5])
extent = [seis.rec_pos_all[0,0], seis.rec_pos_all[0,-1], seis.NT*dt, 0]
clip = 0.1
vmax = np.max(data) * clip
vmin = -vmax
axs[0].imshow(data[0], aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
axs[1].imshow(data[1], aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
axs[0].set_title("$v_x$", fontsize=14, fontweight='bold')
axs[0].set_xlabel("Position of geophone (m)")
axs[0].set_ylabel("Time (ms)")
axs[1].set_title("$v_z$", fontsize=14, fontweight='bold')
axs[1].set_xlabel("Position of geophone (m)")
axs[1].set_ylabel("Time (ms)")
plt.show()
Explanation: This time, the output of SeisCL.read_data is a list with two elements, particle velocities in x and z.
End of explanation
seis.seisout = 2
seis.src_pos_all = np.empty((5, 0))
seis.rec_pos_all = np.empty((8, 0))
seis.src_all = None
sx = np.array([100, 200, 300, 400]) * dh
sy = sx * 0
sz = sx * 0 + Nz // 2 * dh
srcid = np.arange(0, len(sx))
src_type = sx * 0 + 100
seis.src_pos_all = np.stack([sx, sy, sz, srcid, src_type], axis=0)
print(seis.src_pos_all)
Explanation: Computing a selection over multiple shots
For now, we have only shown examples with one shot position. Let's now show how to have multiple shots, and compute only a subsample of them. This can be useful for stochastic optimization, as shown by Fabien-Ouellet et al. ( 2017).
Let's define 4 shot positions.
End of explanation
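A hedged sketch of how such a subsample could be drawn for stochastic optimization (the batch size of 2 is arbitrary; the resulting list is what set_forward expects as its first argument, and the 4 shots are defined in the next cell):
rng = np.random.RandomState(0)
batch = sorted(rng.choice(4, size=2, replace=False).tolist())   # e.g. [0, 2]
# batch could then be passed as: seis.set_forward(batch, model, withgrad=False)
print(batch)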
gx4 = np.tile(gx, len(sx))
gy4 = np.tile(gy, len(sy))
gz4 = np.tile(gz, len(sz))
gsid4 = np.concatenate([gsid + ii for ii in range(len(sx))])
recid4 = np.arange(0, len(gx4)) + 1
blank4 = np.tile(blank, len(sx))
seis.rec_pos_all = np.stack([gx4, gy4, gz4, gsid4, recid4,
blank4, blank4, blank4], axis=0)
Explanation: Let's reuse the same receiver position as the previous example. In this case, we have to repeat them for each shot.
End of explanation
seis.set_forward([0, 2], model, withgrad=False)
seis.execute()
data = seis.read_data()[0]
Explanation: Say we want to only compute shots 0 and 2. We can easily do that by passing that information to set_forward
End of explanation
data = np.reshape(data, [data.shape[0], -1, data.shape[1]//2])
fig2 = plt.figure(figsize = (10,5))
ax2 = []
extent = [seis.rec_pos_all[0,0], seis.rec_pos_all[0, -1], seis.NT*dt, 0]
clip = 0.1
vmax = np.max(data) * clip
vmin = -vmax
for idx, shot in enumerate([0, 2]):
ax2.append(fig2.add_subplot(1,2,idx+1))
ax2[idx].imshow(data[:, idx, :], aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
ax2[idx].set_title('Shot at ' + str(sx[shot]) + ' m',
fontsize=16, fontweight='bold')
ax2[idx].set_xlabel("Position (m)")
ax2[idx].set_ylabel("Time (s)")
plt.tight_layout()
plt.show()
Explanation: Note that data shape is [NT X ntraces], which means that all traces of all shots are concatenated in the same array. Let's resort according to the shot position.
End of explanation
seis.src_pos_all[3, 1] = 0
seis.src_pos_all[3, 2] = 1
seis.src_pos_all[3, 3] = 1
print(seis.src_pos_all)
Explanation: Simultaneous shots
SeisCL also allows firing different shots simultaneously. In fact, all shots sharing the same srcid are fired at the same time.
Let's show that by redefining the source array.
End of explanation
gx2 = np.tile(gx, 2)
gy2 = np.tile(gy, 2)
gz2 = np.tile(gz, 2)
gsid2 = np.concatenate([gsid + ii for ii in range(2)])
recid2 = np.arange(0, len(gx2)) + 1
blank2 = np.tile(blank, 2)
seis.rec_pos_all = np.stack([gx2, gy2, gz2, gsid2, recid2, blank2, blank2, blank2], axis=0)
Explanation: Similarly, we redefine the receiver positions.
End of explanation
seis.set_forward([0, 1], model, withgrad=False)
seis.execute()
data = seis.read_data()[0]
Explanation: We then compute the shots for the srcids 0 and 1.
End of explanation
data = np.reshape(data, [data.shape[0], -1, data.shape[1]//2])
fig2 = plt.figure(figsize = (10,5))
ax2 = []
extent = [seis.rec_pos_all[0,0], seis.rec_pos_all[0, -1], seis.NT*dt, 0]
clip = 0.1
vmax = np.max(data) * clip
vmin = -vmax
for idx, shot in enumerate([0, 1]):
ax2.append(fig2.add_subplot(1,2,idx+1))
ax2[idx].imshow(data[:, idx, :], aspect='auto', vmax=vmax, vmin=vmin,
extent=extent, interpolation='bilinear',
cmap=plt.get_cmap('Greys'))
ax2[idx].set_title('Shot id ' + str(shot),
fontsize=16, fontweight='bold')
ax2[idx].set_xlabel("Position (m)")
ax2[idx].set_ylabel("Time (s)")
plt.tight_layout()
plt.show()
Explanation: Two acquisitions have been simulated, with two sources per acquisition.
End of explanation |
2,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 4
Step1: Part 1
Step2: Part 2
Step3: Unrolling the parameters into one vector
Step4: Part 3
Step5: The cost at the given parameters should be about 0.287629.
Step6: The cost at the given parameters and a regularization factor of 1 should be about 0.38377.
Part 4
Step7: Part 5
Step8: Part 6
Step9: Part 7
Step10: If your backpropagation implementation is correct, then the relative difference will be small (less than 1e-9).
Step11: Part 8
Step12: The cost at lambda = 3 should be about 0.57.
Step13: Part 8
Step14: Obtain Theta1 and Theta2 back from nn_params
Step15: Part 9 | Python Code:
import numpy as np
import scipy.io
import scipy.optimize
import matplotlib.pyplot as plt
%matplotlib inline
# uncomment for console - useful for debugging
# %qtconsole
ex4data1 = scipy.io.loadmat("./ex4data1.mat")
X = ex4data1['X']
y = ex4data1['y'][:,0]
m, n = X.shape
m, n
input_layer_size = n # 20x20 Input Images of Digits
hidden_layer_size = 25 # 25 hidden units
num_labels = 10 # 10 labels, from 1 to 10
# (note that we have mapped "0" to label 10)
lambda_ = 1
Explanation: Exercise 4: Neural Network Learning
End of explanation
def display(X, display_rows=5, display_cols=5, figsize=(4,4), random_x=False):
m = X.shape[0]
fig, axes = plt.subplots(display_rows, display_cols, figsize=figsize)
fig.subplots_adjust(wspace=0.1, hspace=0.1)
import random
for i, ax in enumerate(axes.flat):
ax.set_axis_off()
x = None
if random_x:
x = random.randint(0, m-1)
else:
x = i
image = X[x].reshape(20, 20).T
image = image / np.max(image)
ax.imshow(image, cmap=plt.cm.Greys_r)
display(X, random_x=True)
def add_ones_column(array):
return np.insert(array, 0, 1, axis=1)
Explanation: Part 1: Loading and Visualizing Data
We start the exercise by first loading and visualizing the dataset. You will be working with a dataset that contains handwritten digits.
End of explanation
ex4weights = scipy.io.loadmat('./ex4weights.mat')
Theta1 = ex4weights['Theta1']
Theta2 = ex4weights['Theta2']
print(Theta1.shape, Theta2.shape)
Explanation: Part 2: Loading Parameters
In this part of the exercise, we load some pre-initialized
neural network parameters.
End of explanation
nn_params = np.concatenate((Theta1.flat, Theta2.flat))
nn_params.shape
def sigmoid(z):
return 1 / (1+np.exp(-z))
Explanation: Unrolling the parameters into one vector:
End of explanation
def nn_cost_function(nn_params, input_layer_size, hidden_layer_size,
num_labels, X, y, lambda_):
#NNCOSTFUNCTION Implements the neural network cost function for a two layer
#neural network which performs classification
# [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
# X, y, lambda) computes the cost and gradient of the neural network. The
# parameters for the neural network are "unrolled" into the vector
# nn_params and need to be converted back into the weight matrices.
#
# The returned parameter grad should be a "unrolled" vector of the
# partial derivatives of the neural network.
#
# Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
# for our 2 layer neural network
t1_len = (input_layer_size+1)*hidden_layer_size
Theta1 = nn_params[:t1_len].reshape(hidden_layer_size, input_layer_size+1)
Theta2 = nn_params[t1_len:].reshape(num_labels, hidden_layer_size+1)
m = X.shape[0]
# You need to return the following variables correctly
J = 0;
Theta1_grad = np.zeros(Theta1.shape);
Theta2_grad = np.zeros(Theta2.shape);
# ====================== YOUR CODE HERE ======================
# Instructions: You should complete the code by working through the
# following parts.
#
# Part 1: Feedforward the neural network and return the cost in the
# variable J. After implementing Part 1, you can verify that your
# cost function computation is correct by verifying the cost
# computed for lambda == 0.
#
# Part 2: Implement the backpropagation algorithm to compute the gradients
# Theta1_grad and Theta2_grad. You should return the partial derivatives of
# the cost function with respect to Theta1 and Theta2 in Theta1_grad and
# Theta2_grad, respectively. After implementing Part 2, you can check
# that your implementation is correct by running checkNNGradients
#
# Note: The vector y passed into the function is a vector of labels
# containing values from 1..K. You need to map this vector into a
# binary vector of 1's and 0's to be used with the neural network
# cost function.
#
# Hint: We recommend implementing backpropagation using a for-loop
# over the training examples if you are implementing it for the
# first time.
#
# Part 3: Implement regularization with the cost function and gradients.
#
# Hint: You can implement this around the code for
# backpropagation. That is, you can compute the gradients for
# the regularization separately and then add them to Theta1_grad
# and Theta2_grad from Part 2.
#
# =========================================================================
# Unroll gradients
gradient = np.concatenate((Theta1_grad.flat, Theta2_grad.flat))
return J, gradient
Explanation: Part 3: Compute Cost (Feedforward)
To the neural network, you should first start by implementing the
feedforward part of the neural network that returns the cost only. You
should complete the code in nn_cost_function() to return cost. After
implementing the feedforward to compute the cost, you can verify that
your implementation is correct by verifying that you get the same cost
as us for the fixed debugging parameters.
We suggest implementing the feedforward cost without regularization
first so that it will be easier for you to debug. Later, in part 4, you
will get to implement the regularized cost.
End of explanation
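One sub-step the skeleton's comments call for is mapping the labels y (values 1..10, with 10 standing for the digit 0) to one-hot rows; a minimal sketch of that mapping (not part of the provided skeleton):
y_matrix = np.eye(num_labels)[y - 1]   # shape (m, num_labels), a single 1 per row
print(y_matrix.shape, y_matrix[0])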
lambda_ = 0 # No regularization
nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
Explanation: The cost at the given parameters should be about 0.287629.
End of explanation
lambda_ = 1
nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
Explanation: The cost at the given parameters and a regularization factor of 1 should be about 0.38377.
Part 4: Implement Regularization
Once your cost function implementation is correct, you should now
continue to implement the regularization with the cost.
End of explanation
def sigmoid_gradient(z):
#SIGMOIDGRADIENT returns the gradient of the sigmoid function
#evaluated at z
# g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
# evaluated at z. This should work regardless if z is a matrix or a
# vector. In particular, if z is a vector or matrix, you should return
# the gradient for each element.
g = np.zeros(z.shape)
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the gradient of the sigmoid function evaluated at
# each value of z (z can be a matrix, vector or scalar).
# =============================================================
return g
sigmoid_gradient(np.array([1, -0.5, 0, 0.5, 1]))
Explanation: Part 5: Sigmoid Gradient
Before you start implementing the neural network, you will first
implement the gradient for the sigmoid function. You should complete the
code in sigmoid_gradient.
End of explanation
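If you want to sanity-check your implementation, one possible reference is sketched below (attempt the exercise yourself before reading it):
def sigmoid_gradient_ref(z):
    s = sigmoid(z)
    return s * (1 - s)              # g'(z) = g(z) * (1 - g(z))
print(sigmoid_gradient_ref(np.array([1, -0.5, 0, 0.5, 1])))   # ~[0.197, 0.235, 0.25, 0.235, 0.197]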
def rand_initialize_weight(L_in, L_out):
#RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
#incoming connections and L_out outgoing connections
# W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights
# of a layer with L_in incoming connections and L_out outgoing
# connections.
#
# Note that W should be set to a matrix of size(L_out, 1 + L_in) as
# the column row of W handles the "bias" terms
#
# You need to return the following variables correctly
W = np.zeros((L_out, L_in))
# ====================== YOUR CODE HERE ======================
# Instructions: Initialize W randomly so that we break the symmetry while
# training the neural network.
#
# Note: The first row of W corresponds to the parameters for the bias units
#
return W
# =========================================================================
Explanation: Part 6: Initializing Parameters
In this part of the exercise, you will be starting to implment a two
layer neural network that classifies digits. You will start by
implementing a function to initialize the weights of the neural network.
End of explanation
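A common symmetry-breaking choice is uniform noise in [-eps, eps] with eps from the sqrt(6/(fan_in+fan_out)) heuristic; the sketch below is a reference only (adapt it to the argument convention of the skeleton that follows):
def rand_initialize_weight_ref(fan_in, fan_out):
    eps = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-eps, eps, size=(fan_out, fan_in + 1))   # +1 column for the bias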
def numerical_gradient(f, x, dx=1e-6):
perturb = np.zeros(x.size)
result = np.zeros(x.size)
for i in range(x.size):
perturb[i] = dx
result[i] = (f(x+perturb) - f(x-perturb)) / (2*dx)
perturb[i] = 0
return result
def check_NN_gradients(lambda_=0):
input_layer_size = 3
hidden_layer_size = 5
num_labels = 3
m = 5
def debug_matrix(fan_out, fan_in):
W = np.sin(np.arange(fan_out * (fan_in+1))+1) / 10
return W.reshape(fan_out, fan_in+1)
Theta1 = debug_matrix(hidden_layer_size, input_layer_size)
Theta2 = debug_matrix(num_labels, hidden_layer_size)
X = debug_matrix(m, input_layer_size - 1)
y = 1 + ((1 + np.arange(m)) % num_labels)
nn_params = np.concatenate([Theta1.flat, Theta2.flat])
cost, grad = nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
def just_cost(nn_params):
cost, grad = nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
return cost
return np.sum(np.abs(grad - numerical_gradient(just_cost, nn_params))) / grad.size
Explanation: Part 7: Implement Backpropagation
Once your cost matches up with ours, you should proceed to implement the
backpropagation algorithm for the neural network. You should add to the
code you've written in nn_cost_function to return the partial
derivatives of the parameters.
End of explanation
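For reference, one common vectorized formulation of the unregularized gradients is sketched below (attempt Part 2 of the skeleton yourself first; this helper is not part of the assignment):
def backprop_reference(Theta1, Theta2, X, y, num_labels):
    m = X.shape[0]
    y_matrix = np.eye(num_labels)[y - 1]
    a1 = np.insert(X, 0, 1, axis=1)                      # (m, input + 1)
    z2 = a1.dot(Theta1.T)
    a2 = np.insert(sigmoid(z2), 0, 1, axis=1)            # (m, hidden + 1)
    a3 = sigmoid(a2.dot(Theta2.T))                       # (m, num_labels)
    delta3 = a3 - y_matrix
    delta2 = delta3.dot(Theta2[:, 1:]) * sigmoid(z2) * (1 - sigmoid(z2))
    Theta1_grad = delta2.T.dot(a1) / m
    Theta2_grad = delta3.T.dot(a2) / m
    return Theta1_grad, Theta2_grad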
check_NN_gradients()
initial_Theta1 = rand_initialize_weight(hidden_layer_size, input_layer_size+1)
initial_Theta2 = rand_initialize_weight(num_labels, hidden_layer_size+1)
Explanation: If your backpropagation implementation is correct, then the relative difference will be small (less than 1e-9).
End of explanation
def cost_fun(nn_params):
return nn_cost_function(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_)
lambda_ = 3
nn_params = np.concatenate((initial_Theta1.flat, initial_Theta2.flat))
res = scipy.optimize.minimize(cost_fun, nn_params, jac=True, method='L-BFGS-B',
options=dict(maxiter=200, disp=True))
res
Explanation: Part 8: Implement Regularization
Once your backpropagation implementation is correct, you should now
continue to implement the regularization with the cost and gradient.
End of explanation
res.fun
Explanation: The cost at lambda = 3 should be about 0.57.
End of explanation
lambda_ = 1
nn_params = np.concatenate((initial_Theta1.flat, initial_Theta2.flat))
res = scipy.optimize.minimize(cost_fun, nn_params, jac=True, method='L-BFGS-B',
options=dict(maxiter=200, disp=True))
nn_params = res.x
Explanation: Part 8: Training NN
You have now implemented all the code necessary to train a neural
network. To train your neural network, we will use scipy.optimize.minimize.
Recall that these
advanced optimizers are able to train our cost functions efficiently as
long as we provide them with the gradient computations.
After you have completed the assignment, change the MaxIter to a larger
value to see how more training helps. You should also try different values of lambda.
End of explanation
t1_len = (input_layer_size+1)*hidden_layer_size
Theta1 = nn_params[:t1_len].reshape(hidden_layer_size, input_layer_size+1)
Theta2 = nn_params[t1_len:].reshape(num_labels, hidden_layer_size+1)
Explanation: Obtain Theta1 and Theta2 back from nn_params:
End of explanation
display(Theta1[:,1:], figsize=(6,6))
def predict(Theta1, Theta2, X):
#PREDICT Predict the label of an input given a trained neural network
# p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
# trained weights of a neural network (Theta1, Theta2)
m = X.shape[0]
    num_labels = Theta2.shape[0]
# You need to return the following variables correctly. Remember that
# the given data labels go from 1..10, with 10 representing the digit 0!
p = np.zeros(X.shape[0])
# ====================== YOUR CODE HERE ======================
# ============================================================
return p
predictions = predict(Theta1, Theta2, X)
np.mean(predictions == y)
Explanation: Part 9: Visualize Weights
You can now "visualize" what the neural network is learning by
displaying the hidden units to see what features they are capturing in
the data.
End of explanation |
2,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Science Tutorial 01 @ Data Science Society
那須野薫(Kaoru Nasuno)/ 東京大学(The University of Tokyo)
データサイエンスの基礎的なスキルを身につける為のチュートリアルです。
KaggleのコンペティションであるRECRUIT Challenge, Coupon Purchase Predictionのデータセットを題材として、
データサイエンスの基礎的なスキルに触れ,理解の土台を養うことを目的とします。
(高い予測精度を出すことが目的ではないです)
まだ、書きかけでして、要望に合わせて誤りの修正や加筆をしていく予定です。何かお気づきの点があればご連絡頂けますと幸いです。
対象データ
RECRUIT Challenge, Coupon Purchase Predictionのデータセット。
ユーザ登録や利用規約に同意してダウンロードしてください。
https
Step1: モジュールのimportや変数の初期化
次に、このチュートリアルで利用するモジュールのimportや一部の変数の初期化を行います。
python
%matplotlib inline
は ipython notebookに特有のマジックコマンドというものです。
pythonの文法と異なりますが、matplotlibという画像を描画するライブラリの出力結果がブラウザ上に表示されるように設定するものです。
(ここでは、おまじない程度に考えてください。)
Step2: 2. データベースへのデータの格納
データベースとは
データベースとは、色々なデータの目的ベースでの管理や、効率的なデータ参照/検索を可能にするものです。
データベースの中には複数のテーブルがあります。
テーブルはちょうどスプレッドシートのようになっていて、それぞれの列に名前があり、1行が1つのデータとなるイメージです。
データの格納
データの格納の流れは大まかに、
1. テーブルの作成
2. テーブルへのインサート
3. errorやwarningの確認
の3つのステップとなります。
kaggleのページにテーブルの定義が書いてあるので、ここでは、その通りに作成します。
まずは、user_listのテーブル作成クエリと実行です。
MySQLのCREATE TABLE構文については、http
Step3: 次に、データのインサートです。
csvファイルなど、dumpされたファイルからMySQLにインサートする場合にはLOAD DATA INFILE構文を利用します。
LOAD DATA INFILE構文については、 http
Step4: テーブルの作成に利用したCREATE TABLE文には、
テーブルの型の定義ではなく、インデックスと呼ばれるものの定義も含まれています。
インデックスとはデータの検索を高速化するものです。
PRIMARY KEY
テーブル内でuniqueで、かつ、検索するカラムに付与する。
例えば、user_listテーブルのuser_id_hashは当該テーブルで、ユニークであり,かつ、ユーザの検索によく用いるため、PRIMARY KEYを付与しておいた方が良い。
INDEX
テーブル内でuniqueではないが、検索するカラムに付与する。例えば、ユーザを性別や年齢に応じて検索・集計して、割合を見たい場合には、sex_idやageなどのカラムに付与しておいた方が良い。
TODO
MYSQL関数などの説明の加筆。
Exercise
下記の他のファイルについても同様にテーブルを作成し、データをインサートしてください。
- prefecture_locations.csv
- coupon_area_train.csv, coupon_area_test.csv
- coupon_detail_train.csv
- coupon_visit_train.csv
- coupon_list_train.csv, coupon_list_test.csv
実装例
prefecture_locations.csv
Step5: 実行すると、それぞれのレコードでWarningが発生しますが、
データベースに展開されたレコードのlongitudeを確認すると正しく展開されているため、ここではWarningは無視します。
(確認しておりませんが、行末の改行コードがWarningの原因かもしれません、、、)
- coupon_area_train.csv, coupon_area_test.csv
Step6: coupon_detail_train.csv
Step7: coupon_visit_train.csv
このファイルはレコード数が一番多く、インサートが完了するまで少し時間がかかります。
Step8: coupon_list_train.csv, coupon_list_test.csv
2つともWarningが出ますが、
日時の値が正しくないために発生しているだけなので、無視します。
下記のクエリの
SQL
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
を外すと、おそらくNULLが入ってほしい箇所に0が入ってしまうため、ここでは、NULLがはいるように変換します。
Step9: 3. モデリング対象の設定
対象の明確化
このコンペティションの最終的なゴールは、各ユーザが将来どのクーポンを購入するかをより正確に予測することです。
モデリング(≒予測モデルの構築)は
- ユーザの属性データ(性別、年代、地域など)
- クーポンの属性データ(ジャンル、価格、地域など)
- いつどのユーザがどのクーポンを見たか、購買したか等のログデータ
の3つのデータを主に利用します(これは、クーポンに限らず他の多くの問題で共通しています)。
これらのデータを利用して将来ユーザがどのクーポンを購買するのかをモデリングするわけですが、
特に、このコンペティションでは、モデリングの良し悪しを評価する指標が与えられており、
参加者はこの指標のスコアを競っている形になっています。
https
Step10: ランダム推定・MAP@10の評価
先にも触れましたが、本コンペティションではMAP@10というスコアを競い合っています。
提出のフォーマットは、https
Step11: 2. 抽出したクーポン群から各ユーザが購買するクーポンをランダムに10個選び、予測結果とする。
Step12: 3. 実際に購買したクーポンと照らし合わせ、MAP@10を算出する。
Step13: ランダムだと、全然当たらないですね。
map@10は期間中に何人のユーザが購買しているかによって、最大値が大きく変わります。
従って、一概に他の期間と比較することは出来ませんが、執筆時点で最も高いスコアでも0.01よりも少し多き程度のスコアなので、
スコア自体はまだまだ改善の余地があると考えられます。
※実際にスコアを上げる(=高い精度で予測できるようにする)のは、非常に大変です。
4. サブミッション用のcsvファイルを作成する。
Step14: Excercise
test dataに対して、同様の手法で予測・csvに結果を出力して、
コンペティションにサブミットしてください。
※1日に最大5回までしかサブミットできないので、注意してください。
4. 機械学習アルゴリズムによる予測モデルの構築
先の章では、モデリング対象や評価指標への理解を深めました。
具体的に先に挙げた、
- ユーザの属性データ(性別、年代、地域など)
- クーポンの属性データ(ジャンル、価格、地域など)
- いつどのユーザがどのクーポンを見たか、購買したか等のログデータ
のそれぞれの要素がどれぐらい将来の購買行動と関連があるのか、
を考える前に、まず簡単に機械学習アルゴリズムによる予測モデルを構築して、
アルゴリズムの内部がどうなっているのか、
アルゴリズムのinputとoutputがどうなっているのか、
見ていきましょう。
ここでは、下記の設定、
対象アルゴリズム:ロジスティック回帰
対象データ:ユーザの属性データとクーポンの属性データ
将来購買されるクーポンを予測するということをやってみましょう。
機械学習アルゴリズム
ロジスティック回帰は非常にシンプルなアルゴリズムで
下記の数式のように定義されています。
$$
v=Wx+b
$$
$$
y \simeq \tilde{y}=\frac{exp(v)}{\sum_{k=1}^{N}exp(v_k)}
$$
$x$:素性ベクトル(ユーザとクーポンの特徴を表現したベクトル)
$W$:素性ベクトルの各列をどれだけ評価するかのウェイトマトリックス(パラメタ)
$b$:バイアス項(パラメタ)
$N$:クラスの数(この場合、買われたか否か)
$v$:N次元ベクトル
$\tilde{y}$:各クラスが発生する確率
$y$:正解ベクトル
ロジスティック回帰は与えられた素性ベクトル$x$の各次元の重みを評価して、各クラスに所属する確率$\tilde{y}$を算出します。
モデルの学習の際には、$\tilde{y}$と$y$の誤差が最小となるように内部のパラメタ$W$と$b$を学習し、
予測の際には、入力$x$を計算式に当てはめ、出力結果$\tilde{y}$を見て、例えば、所属する確率が最大のクラスを予測結果とします。
つまり、学習の際は入力は$x$と$y$で、予測の際は入力は$x$のみです。
これは、他の多くの機械学習アルゴリズムに共通です。
機械学習ライブラリの1つであるscikit-learnは、
多くの機械学習アルゴリズムに対して共通のフレームワークで提供され内部がブラックボックス化されており、
中身が分からなくても簡単に利用できるものとなっています。
ここでは、scikit-learnで提供されているロジスティック回帰を利用していきます。
データ加工(素性/ラベル作成)
簡単のためログデータを用いずに、
ユーザの属性データとクーポンの属性データのみを用いて素性$x$を作成します。
特徴ベクトルを作成する際には、
1. 各次元のスケールを合わせる
2. 連続値で表現すべきでないところは、フラグにする
を意識しましょう。
ユーザの特徴ベクトル
sequel proでuser_listテーブルを見てみると、
素性に利用しやすそうな属性として、性別、年齢、場所があります。
性別と場所は連続値でないため、フラグで表現します。
年齢のような連続値は、場合によってはフラグとした方が良い時もありますが、
ここでは、連続値として扱います。
sequel proで年齢のカラムをソートしてみると、
最小値が15最大値が80なので素性を作る時はスケールを合わせるために、
年齢値から15引いて65で割ります。
このように、[0, 1]の値での表現のように最小値と最大値を整えるスケーリングをmini max scaling等と呼びます。
他には平均0分散1の標準正規分布に変換するstandard scalingがあります。
※いずれもscikit-learnのpreprocessingモジュール(http
Step15: クーポンの特徴ベクトル
アイテムは、比較的用意に利用できそうなフラグのカラムが多いです。
ここでは、簡単のためフラグのみを、
学習に利用するクーポンはvalidation期間の前の7日で購買可能なデータ
を利用して特徴ベクトルを作っていきます。
※実際には、より多くのデータを使った方が精度は出る傾向にあります。
Step16: ユーザ・クーポンの特徴ベクトルと正解ラベルの割当
ユーザ特徴ベクトルとクーポン特徴ベウトルの関係性を評価する方法は無数にあります。
ベクトルを横に結合するという単純な方法や、
単純な結合に加えてそれぞれの掛け合わせも加えるという方法(pair-wise)、
などが代表的な簡単な方法です。
素性が長くなれば、それだけ情報量が多くなりますが、一方で過学習(Over fitting)が起きやすくなります。
「過学習が起きる」とは、訓練データでモデルを学習させる際に、過剰に訓練データにフィッティングして、
validationデータやtest dataに対する汎化性能が低いモデルが出来てしまう現象で、
本来、未知のデータに対する汎化・予測性能のあるモデルを構築したいため、
過学習を防ぐ必要があります。
しかし、過学習を防ぐプロセスはやや煩雑であるため、
ここでは意識せず、単にベクトルを横に結合するという方法ですすめます。
Step17: 全部のペアを考慮すると1000万行程度となってしまいメモリに乗り切らなさそうです。
購買していないユーザは訓練データから外してしまいましょう。
また、validation dataのペアも逐次的に生成するようにした方が良さそうなので、そのように進めます。
Step18: 予測モデルの構築・精度評価
いよいよ、機械学習アルゴリズムによる予測モデルの構築です。
といっても、これ自体はライブラリでブラックボックス化されているため、
ここまでくれば、非常に簡単に実装できます。
Step19: 先ほどの、ランダム予測よりだいぶ上がったようです。
今回の例では、ユーザの特徴とクーポンの特徴を単純に結合しましたが、
モデルの入力である素性は無数の形式が存在します。
機械学習アルゴリズムを利用する際、この素性を如何に設計するかは非常に重要です。
他に予測精度をあげるポイントとして、
・機械学習アルゴリズムを選ぶ。
・機械学習アルゴリズムのハイパーパラメタを調整する。
・前処理をちゃんとやる。
・目的関数を正しく選ぶ。
などがあります。
興味がある方は、適用を検討してみてください。
Excercise
同様の方法を test dataに適用、各ユーザの購買クーポンを予測し、submissionしてみてください。
学習に用いるデータ期間を7日から増やして、精度の変化を見てみてください。
5. データの概観把握・予測モデルの改善
TODO
Step20: 最終的な精度評価に用いるテストデータに含まれる各クーポンに対して、どれくらいviewやpurchaseのデータが存在するか、の確認。
ほとんどのクーポンでviewデータがなく、また、全てのクーポンでpurchaseのデータが存在しないitem-cold start状態なので、
協調フィルタリングやその派生の手法は不向きであると考えられます。
Step21: 関係性についての仮説をたてる
どういったユーザがどういったクーポンを購買するかについての仮説を立てます。
今回のデータセットでは、例えば、下記のような仮説が立てられます。
1. ユーザは自分の地域と同じ地域を対象とするクーポンを購買する可能性が高い。
2. ユーザは女性の方が男性より割引率の高いクーポンを購買する可能性が高い。
3. ユーザは最後に購買したクーポンと同じジャンルのクーポンを、将来購買する可能性が高い。
4. (過去に購買したクーポンがない)ユーザは最後に閲覧したクーポンと同じジャンルのクーポンを、将来購買する可能性が高い。
仮説を立てる際に意識して頂きたいことは、
立てた仮説がどういったユーザがどういったクーポンを購買するかを具体的に表現していて、
そこから、どのようにデータを集計し、検証を行うかが具体的に分かるレベルで考えるということです。
たとえば、
「ユーザは過去の行動に応じて、将来に購買するクーポンを変える可能性が高い。」
という仮説をたてたとしましょう。しかし、この抽象度では具体的に何をすべきか分かりません。
従って、正しいかもしれないが、次のアクションに繋がらないという点で、役に立たない(≒価値がない)仮説だと言えます。
また、別の角度から見ると、立てた仮説はグレーであればあるほど、白黒はっきりさせた(検証した)あとの有用性が大きいことが多いです。
実際に、「データを分析する」際には、この「仮説を立てて、重要なところから取りかかる」ということが非常に重要です。
スキルは所詮スキルでしかなく、使いどころを誤るといたずらに時間を溶かす結果になりますので、是非意識してもらえればと思います。
仮説を検証するために、データを集計&可視化する
具体例として挙げた仮説を一つずつ検証いていきましょう。
1. ユーザは自分の地域と同じ地域を対象とするクーポンを購買する可能性が高い。
この仮説は、時間軸とは関係のなく、いわゆる分析的な集計である為、ここでは訓練データ全体で集計します。
Step22: まず、同じ地域からの購買よりも,異なる地域からの購買の方が多いことが分かります。
また、same_rateとdiff_rateを比べると、同じ地域での閲覧に於ける購買の割合より、異なる地域での閲覧に於ける購買の割合の方がやや大きいことが分かります。
そもそも、異なる地域での閲覧が多いということ自体が先の仮定で意識したものとは異なっていた、ということもありますが、
検証したかった仮説はどうやら、正しくなさそうです。
2. ユーザは女性の方が男性より割引率の高いクーポンを購買する可能性が高い。
この仮説も、時間軸とは関係のなく、いわゆる分析的な集計である為、ここでは訓練データ全体で集計します。
Step23: あまり、変わらないですね、、、
3. ユーザは最後に購買したクーポンと同じジャンルのクーポンを、将来購買する可能性が高い。
この仮説は、時間軸と関係しており、いわゆる予測的な集計である為、ここではtraining dataとvalidationデータに分けたものを集計します。
TODO:何の為に集計したの? | Python Code:
# TODO: You must change the settings below
MYSQL = {
'user': 'root',
'passwd': '',
'db': 'coupon_purchase',
'host': '127.0.0.1',
'port': 3306,
'local_infile': True,
'charset': 'utf8',
}
DATA_DIR = '/home/nasuno/recruit_kaggle_datasets' # ディレクトリの名前に日本語(マルチバイト文字)は使わないでください。
OUTPUTS_DIR = '/home/nasuno/recruit_kaggle/outputs' # 予測結果などを保存するディレクトリ。
Explanation: Data Science Tutorial 01 @ Data Science Society
那須野薫(Kaoru Nasuno)/ 東京大学(The University of Tokyo)
データサイエンスの基礎的なスキルを身につける為のチュートリアルです。
KaggleのコンペティションであるRECRUIT Challenge, Coupon Purchase Predictionのデータセットを題材として、
データサイエンスの基礎的なスキルに触れ,理解の土台を養うことを目的とします。
(高い予測精度を出すことが目的ではないです)
まだ、書きかけでして、要望に合わせて誤りの修正や加筆をしていく予定です。何かお気づきの点があればご連絡頂けますと幸いです。
対象データ
RECRUIT Challenge, Coupon Purchase Predictionのデータセット。
ユーザ登録や利用規約に同意してダウンロードしてください。
https://www.kaggle.com/c/coupon-purchase-prediction/data
進め方
まずは、全てのコードをコピー&ペーストして、エラーなく動作することを確認しましょう。
この段階でエラーが出る場合には環境が整っていないか、パラメタの設定ができていない等、
プログラムの理解とはあまり関係のない箇所が原因である可能性が高いです。
動作確認が終わったら、ひとつずつ書き写してみて、それぞれどのように動作するかを理解していくという方法をお勧めします。
目次
<span style="color: #FF0000;">下準備</span>
<span style="color: #FF0000;">データベースへのデータの展開</span>
モデリング対象の明確化
機械学習による予測モデルの構築・精度検証
データの概観把握・予測モデルの改善
dependencies
macユーザ:
bash
brew update;
pip install ipython;
pip install ipython[notebook];
brew install mariadb;
pip install MySQL-python;
pip install scikit-learn;
mysqlが起動していない場合は、下記のコマンドでmysqlのプロセスを立ち上げましょう。
bash
mysqld_safe;
MySQLクライアンの一つであるSequel Pro( http://www.sequelpro.com/ )もinstall してください。
1. 下準備
データベースの作成
このチュートリアルではMySQL(MariaDB)というリレーショナルデータベースを利用します。
ここでは、利用するデータベース名をcoupon_purchaseとし、データベースを作成していない人は下記のコマンドをターミナルで実行してください。
bash
echo 'CREATE DATABASE coupon_purchase; ' |mysql -uroot
rootユーザのパスワードを設定している方は
bash
echo 'CREATE DATABASE coupon_purchase; ' |mysql -uroot -pyourpassword
としてください。
ローカル環境下で実行している場合には、sequel proで下記のような設定で
でデータベースにアクセスできるようになっているはずです。
(MySQLのパスワードを設定していない場合には、パスワード欄は空白)
<img src="files/sequel_pro.png" width="400px;"/>
以下は、ipython notebook上で実行してください。
ipython notebook は下記のコマンドをターミナルで実行することで起動できます。
bash
ipython notebook;
起動すると、ブラウザ上でipython notebookが起動します。
New >> python2(or New Notebook)をクリックすることで、新しいpythonのノートブックを作成できます。
パラメタの設定
MySQLのユーザ名やパスワードなどのパラメタを指定してください。
多くの場合はuserやpasswdを変更すれば動くと思います。
また、ダウンロードし、解凍した9つのcsvファイルが置いてあるディレクトリのパスを設定してください。
(coupon_area_test.csv, coupon_list_test.csv, prefecture_locations.csv, coupon_area_train.csv, coupon_list_train.csv, sample_submission.csv, coupon_detail_train.csv, coupon_visit_train.csv user_list.csv)
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import MySQLdb
import numpy
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
from sklearn.metrics import f1_score, accuracy_score
from sklearn.linear_model import LogisticRegression
from datetime import datetime, timedelta
from itertools import product
# Random Seed
rng = numpy.random.RandomState(1234)
dbcon = MySQLdb.connect(**MYSQL)
dbcur = dbcon.cursor()
Explanation: モジュールのimportや変数の初期化
次に、このチュートリアルで利用するモジュールのimportや一部の変数の初期化を行います。
python
%matplotlib inline
は ipython notebookに特有のマジックコマンドというものです。
pythonの文法と異なりますが、matplotlibという画像を描画するライブラリの出力結果がブラウザ上に表示されるように設定するものです。
(ここでは、おまじない程度に考えてください。)
End of explanation
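As a quick sanity check of the connection and cursor created above (a minimal sketch, not part of the tutorial flow):
dbcur.execute('SELECT DATABASE();')
print dbcur.fetchone()   # should show ('coupon_purchase',)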
dbcur.execute('''DROP TABLE IF EXISTS user_list;''') # チュートリアルの便宜上、一度削除します。
query = '''
CREATE TABLE IF NOT EXISTS user_list (
reg_date DATETIME,
sex_id VARCHAR(1),
age INT,
withdraw_date DATETIME,
pref_name VARCHAR(15),
user_id_hash VARCHAR(32),
PRIMARY KEY(user_id_hash),
INDEX(reg_date),
INDEX(sex_id),
INDEX(age),
INDEX(withdraw_date),
INDEX(pref_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
'''
dbcur.execute(query)
Explanation: 2. データベースへのデータの格納
データベースとは
データベースとは、色々なデータの目的ベースでの管理や、効率的なデータ参照/検索を可能にするものです。
データベースの中には複数のテーブルがあります。
テーブルはちょうどスプレッドシートのようになっていて、それぞれの列に名前があり、1行が1つのデータとなるイメージです。
データの格納
データの格納の流れは大まかに、
1. テーブルの作成
2. テーブルへのインサート
3. errorやwarningの確認
の3つのステップとなります。
kaggleのページにテーブルの定義が書いてあるので、ここでは、その通りに作成します。
まずは、user_listのテーブル作成クエリと実行です。
MySQLのCREATE TABLE構文については、http://dev.mysql.com/doc/refman/5.6/ja/create-table.html を参照ください。
End of explanation
csv_path = DATA_DIR + '/user_list.csv'
query = '''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE user_list
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(reg_date, sex_id, age,@withdraw_date, pref_name, user_id_hash)
SET
withdraw_date = IF(CHAR_LENGTH(@withdraw_date) != 19 , '9999-12-31 23:59:59', STR_TO_DATE(@withdraw_date, "%Y-%m-%d %H:%i:%s"))
;
'''
dbcur.execute(query)
Explanation: 次に、データのインサートです。
csvファイルなど、dumpされたファイルからMySQLにインサートする場合にはLOAD DATA INFILE構文を利用します。
LOAD DATA INFILE構文については、 http://dev.mysql.com/doc/refman/5.6/ja/load-data.html を参照ください。
End of explanation
### prefecture_locations
csv_path = DATA_DIR + '/prefecture_locations.csv'
dbcur.execute('''DROP TABLE IF EXISTS prefecture_locations;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS prefecture_locations (
pref_name VARCHAR(15),
PRIMARY KEY(pref_name),
prefectual_office VARCHAR(15),
latitude DOUBLE,
longitude DOUBLE
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE prefecture_locations
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(pref_name, prefectual_office, latitude, longitude)
;
''')
Explanation: テーブルの作成に利用したCREATE TABLE文には、
テーブルの型の定義ではなく、インデックスと呼ばれるものの定義も含まれています。
インデックスとはデータの検索を高速化するものです。
PRIMARY KEY
テーブル内でuniqueで、かつ、検索するカラムに付与する。
例えば、user_listテーブルのuser_id_hashは当該テーブルで、ユニークであり,かつ、ユーザの検索によく用いるため、PRIMARY KEYを付与しておいた方が良い。
INDEX
テーブル内でuniqueではないが、検索するカラムに付与する。例えば、ユーザを性別や年齢に応じて検索・集計して、割合を見たい場合には、sex_idやageなどのカラムに付与しておいた方が良い。
TODO
MYSQL関数などの説明の加筆。
Exercise
下記の他のファイルについても同様にテーブルを作成し、データをインサートしてください。
- prefecture_locations.csv
- coupon_area_train.csv, coupon_area_test.csv
- coupon_detail_train.csv
- coupon_visit_train.csv
- coupon_list_train.csv, coupon_list_test.csv
実装例
prefecture_locations.csv
End of explanation
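A small illustrative example (hypothetical; not required by the tutorial): an index can also be added after a table already exists, which is handy when you decide later that a column will be searched often.
dbcur.execute('''
    ALTER TABLE prefecture_locations ADD INDEX idx_prefectual_office (prefectual_office);
''')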
### coupon_area_train
csv_path = DATA_DIR + '/coupon_area_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_area_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_area_train (
small_area_name VARCHAR(32),
pref_name VARCHAR(15),
coupon_id_hash VARCHAR(32),
INDEX(coupon_id_hash),
INDEX(pref_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_area_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(small_area_name,pref_name,coupon_id_hash)
;
''')
### coupon_area_test
csv_path = DATA_DIR + '/coupon_area_test.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_area_test;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_area_test (
small_area_name VARCHAR(32),
pref_name VARCHAR(15),
coupon_id_hash VARCHAR(32),
INDEX(coupon_id_hash),
INDEX(pref_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_area_test
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(small_area_name,pref_name,coupon_id_hash)
;
''')
Explanation: 実行すると、それぞれのレコードでWarningが発生しますが、
データベースに展開されたレコードのlongitudeを確認すると正しく展開されているため、ここではWarningは無視します。
(確認しておりませんが、行末の改行コードがWarningの原因かもしれません、、、)
- coupon_area_train.csv, coupon_area_test.csv
End of explanation
### coupon_detail_train
csv_path = DATA_DIR + '/coupon_detail_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_detail_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_detail_train (
item_count INT,
i_date DATETIME,
small_area_name VARCHAR(32),
purchaseid_hash VARCHAR(32),
user_id_hash VARCHAR(32),
coupon_id_hash VARCHAR(32),
INDEX(coupon_id_hash)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_detail_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(item_count, i_date, small_area_name, purchaseid_hash, user_id_hash, coupon_id_hash)
;
''')
Explanation: coupon_detail_train.csv
End of explanation
### coupon_visit_train
csv_path = DATA_DIR + '/coupon_visit_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_visit_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_visit_train (
purchase_flg INT,
i_date DATETIME,
page_serial INT,
referrer_hash VARCHAR(128),
view_coupon_id_hash VARCHAR(128),
user_id_hash VARCHAR(32),
session_id_hash VARCHAR(128),
purchaseid_hash VARCHAR(32),
INDEX(user_id_hash, i_date),
INDEX(i_date, user_id_hash),
INDEX(view_coupon_id_hash),
INDEX(purchaseid_hash),
INDEX(purchase_flg)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_visit_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(purchase_flg,i_date,page_serial,referrer_hash,view_coupon_id_hash,user_id_hash,session_id_hash,purchaseid_hash)
;
''')
Explanation: coupon_visit_train.csv
このファイルはレコード数が一番多く、インサートが完了するまで少し時間がかかります。
End of explanation
### coupon_list_train
csv_path = DATA_DIR + '/coupon_list_train.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_list_train;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_list_train (
capsule_text VARCHAR(20),
genre_name VARCHAR(50),
price_rate INT,
catalog_price INT,
discount_price INT,
dispfrom DATETIME,
dispend DATETIME,
dispperiod INT,
validfrom DATE,
validend DATE,
validperiod INT,
usable_date_mon VARCHAR(7),
usable_date_tue VARCHAR(7),
usable_date_wed VARCHAR(7),
usable_date_thu VARCHAR(7),
usable_date_fri VARCHAR(7),
usable_date_sat VARCHAR(7),
usable_date_sun VARCHAR(7),
usable_date_holiday VARCHAR(7),
usable_date_before_holiday VARCHAR(7),
large_area_name VARCHAR(30),
ken_name VARCHAR(8),
small_area_name VARCHAR(30),
coupon_id_hash VARCHAR(32),
PRIMARY KEY(coupon_id_hash),
INDEX(ken_name),
INDEX(genre_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_list_train
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(capsule_text,genre_name,price_rate,catalog_price,discount_price,dispfrom,dispend,dispperiod,validfrom,validend,@validperiod,usable_date_mon,usable_date_tue,usable_date_wed,usable_date_thu,usable_date_fri,usable_date_sat,usable_date_sun,usable_date_holiday,usable_date_before_holiday,large_area_name,ken_name,small_area_name,coupon_id_hash)
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
;
''')
### coupon_list_test
csv_path = DATA_DIR + '/coupon_list_test.csv'
dbcur.execute('''DROP TABLE IF EXISTS coupon_list_test;''')
dbcur.execute('''
CREATE TABLE IF NOT EXISTS coupon_list_test (
capsule_text VARCHAR(20),
genre_name VARCHAR(50),
price_rate INT,
catalog_price INT,
discount_price INT,
dispfrom DATETIME,
dispend DATETIME,
dispperiod INT,
validfrom DATE,
validend DATE,
validperiod INT,
usable_date_mon VARCHAR(7),
usable_date_tue VARCHAR(7),
usable_date_wed VARCHAR(7),
usable_date_thu VARCHAR(7),
usable_date_fri VARCHAR(7),
usable_date_sat VARCHAR(7),
usable_date_sun VARCHAR(7),
usable_date_holiday VARCHAR(7),
usable_date_before_holiday VARCHAR(7),
large_area_name VARCHAR(30),
ken_name VARCHAR(8),
small_area_name VARCHAR(30),
coupon_id_hash VARCHAR(32),
PRIMARY KEY(coupon_id_hash),
INDEX(ken_name),
INDEX(genre_name)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
;
''')
dbcur.execute('''
LOAD DATA LOCAL INFILE "''' + csv_path + '''"
INTO TABLE coupon_list_test
CHARACTER SET utf8
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(capsule_text,genre_name,price_rate,catalog_price,discount_price,dispfrom,dispend,dispperiod,validfrom,validend,@validperiod,usable_date_mon,usable_date_tue,usable_date_wed,usable_date_thu,usable_date_fri,usable_date_sat,usable_date_sun,usable_date_holiday,usable_date_before_holiday,large_area_name,ken_name,small_area_name,coupon_id_hash)
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
;
''')
Explanation: coupon_list_train.csv, coupon_list_test.csv
2つともWarningが出ますが、
日時の値が正しくないために発生しているだけなので、無視します。
下記のクエリの
SQL
SET validperiod=IF(@validperiod = 'NA', Null, @validperiod)
を外すと、おそらくNULLが入ってほしい箇所に0が入ってしまうため、ここでは、NULLがはいるように変換します。
End of explanation
validation_start = datetime.strptime('2012-06-17 00:00:00', '%Y-%m-%d %H:%M:%S')
validation_end = validation_start + timedelta(days=7)
dbcur.execute(''' DROP TABLE IF EXISTS coupon_visit_train_training;''') # チュートリアルの便宜上一回削除します。
dbcur.execute(''' CREATE TABLE IF NOT EXISTS coupon_visit_train_training LIKE coupon_visit_train;''')
dbcur.execute('''
INSERT INTO coupon_visit_train_training
SELECT *
FROM coupon_visit_train
WHERE i_date >= "2011-07-01 00:00:00" AND i_date < %s
;
''', (validation_start, ))
dbcur.execute(''' DROP TABLE IF EXISTS coupon_visit_train_validation;''') # チュートリアルの便宜上一回削除します。
dbcur.execute(''' CREATE TABLE IF NOT EXISTS coupon_visit_train_validation LIKE coupon_visit_train;''')
dbcur.execute('''
INSERT INTO coupon_visit_train_validation
SELECT *
FROM coupon_visit_train
WHERE i_date >= %s
;
''', (validation_start, ))
Explanation: 3. Defining the modeling target
Clarifying the target
The ultimate goal of this competition is to predict as accurately as possible which coupons each user will purchase in the future.
Modeling (i.e. building a predictive model) mainly uses three kinds of data:
- user attribute data (gender, age group, region, etc.)
- coupon attribute data (genre, price, region, etc.)
- log data recording which user viewed or purchased which coupon and when
(this is common not only to coupons but to many other problems as well).
We use these data to model which coupons each user will purchase in the future.
In this competition in particular, a metric for judging how good a model is has been specified, and participants compete on the score of that metric.
According to https://www.kaggle.com/c/coupon-purchase-prediction/details/evaluation ,
the final goal of this competition is to build a predictive model that maximizes the metric Mean Average Precision@10 (MAP@10).
MAP@10 is defined by the following formula:
$$
MAP@10=\frac{1}{|U|}\sum_{u=1}^{|U|}\frac{1}{\min(m, 10)} \sum_{k=1}^{\min(n, 10)}P(k)
$$
$|U|$: number of users
$P(k)$: precision at position k (the fraction of actually purchased coupons among the first k coupons predicted to be purchased)
$n$: number of coupons predicted to be purchased
$m$: number of coupons actually purchased (when m = 0, the precision is 0)
Modeling so as to directly maximize this metric should give a higher final score in the competition, but for simplicity we modify the problem slightly and substitute a different metric, as follows:
- every user purchases exactly one coupon during the validation period, and
- we predict that single coupon from the data in the training period.
By simplifying the problem in this way — choosing the coupon with the highest purchase probability during the validation period — we can approximately evaluate the quality of a model with accuracy.
We proceed below by building models for this approximated problem.
The three datasets in model building: training data, validation data, test data
Data is first divided broadly into training data and test data.
In particular, some Kaggle competition datasets are explicitly split this way.
Typically, a predictive model is built on the training data and its accuracy is then verified on the test data.
<img src="files/data_split.png" width="600px;"/>
In this tutorial we build predictive models based on machine learning algorithms, and machine learning algorithms generally have so-called hyperparameters.
A machine learning algorithm has two kinds of parameters, internal parameters and hyperparameters: the internal parameters are learned automatically, while the hyperparameters must be set in advance.
To obtain good accuracy, the hyperparameters need to be tuned well.
When actually building a predictive model, the training data is further split into training data and validation data:
we build a model on the training data with one set of hyperparameters and check its accuracy on the validation data,
then build a model with another set of hyperparameters and check its accuracy on the validation data, and so on.
For the final accuracy evaluation, the model built with the hyperparameters that performed best in the validation experiments is applied to the test data, to check how accurately it can predict truly unseen data.
Here, for convenience, we use the last one week of the training data (2012-06-17 00:00:00 to 2012-06-23 23:59:59) as validation data and the remaining roughly one year as training data.
End of explanation
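To make the metric concrete, here is a tiny toy illustration with made-up data (following the same convention as the get_ap10 helper used later in this tutorial, where only the positions of correct predictions contribute):
# Toy illustration of AP@10 for a single user (hypothetical data, not from the competition):
# the user actually bought coupons 'a' and 'b', and we predicted ['x', 'a', 'y', 'b'].
# The hit at position 2 contributes P(2) = 1/2 and the hit at position 4 contributes P(4) = 2/4,
# so AP@10 = (P(2) + P(4)) / min(m, 10) = (0.5 + 0.5) / 2 = 0.5.
toy_true = set(['a', 'b'])
toy_pred = ['x', 'a', 'y', 'b']
hits = 0
toy_ap10 = 0.
for i, c in enumerate(toy_pred):
    if c in toy_true:
        hits += 1
        toy_ap10 += hits / float(i + 1)
toy_ap10 /= min(len(toy_true), 10)
print toy_ap10 # 0.5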
# Extract the coupons that could be purchased during the validation period
dbcur.execute('''
SELECT
coupon_id_hash
FROM coupon_list_train
WHERE
NOT (dispend <= %s OR dispfrom > %s)
;
''', (validation_start, validation_end))
coupon_ids = []
for row in dbcur.fetchall():
coupon_ids.append(row[0])
Explanation: Random estimation and evaluating MAP@10
As mentioned earlier, this competition is scored with MAP@10.
The submission format is described at https://www.kaggle.com/c/coupon-purchase-prediction/details/evaluation .
Now that we have split the training data into training data and validation data, let's try randomly guessing which coupons each user purchases, to get a rough feel for
- the properties of MAP@10,
- the accuracy of a random estimate, and
- the submission format.
We proceed in the following four steps:
1. Extract the coupons that could be purchased during the validation period.
2. For each user, randomly pick 10 coupons from the extracted set and use them as the prediction.
3. Compare with the coupons actually purchased and compute MAP@10.
4. Create a csv file for submission.
1. Extract the coupons that could be purchased during the validation period.
End of explanation
# Select the user_ids and randomly assign purchased items to each.
dbcur.execute('''
SELECT
user_id_hash
FROM user_list
;
''')
user_pcoupon_pred = {}
for row in dbcur.fetchall():
user_pcoupon_pred[row[0]] =list(shuffle(coupon_ids, random_state=rng)[:10])
Explanation: 2. For each user, randomly pick 10 coupons from the extracted set and use them as the prediction.
End of explanation
# Extract the list of coupons purchased during the validation period.
dbcur.execute('''
SELECT
user_id_hash, view_coupon_id_hash
FROM coupon_visit_train_validation
WHERE purchase_flg = 1
;
''')
user_pcoupon_true = {}
for row in dbcur.fetchall():
if row[0] not in user_pcoupon_true:
user_pcoupon_true[row[0]] = []
user_pcoupon_true[row[0]].append(row[1])
# Define a function that computes AP@10.
def get_ap10(y_pred, y_true):
ap10 = 0.
y_true = set(y_true)
for i in range(len(y_pred)):
if y_pred[i] in y_true:
c = set(y_pred[:i + 1])
ap10 += len(y_true & c) / float(i + 1)
ap10 /= min(len(y_true), 10)
return ap10
map10 = 0.
n_purchased_user = 0.
for user_id in user_pcoupon_pred:
if user_id not in user_pcoupon_true:
        # If this user did not buy any coupon during the validation period,
        # the AP@10 is 0.
continue
n_purchased_user += 1
y_true = user_pcoupon_true[user_id]
y_pred = user_pcoupon_pred[user_id]
map10 += get_ap10(y_pred, y_true)
max_map10 = n_purchased_user / len(user_pcoupon_pred)
map10 /= len(user_pcoupon_pred)
print 'max_map@10: %.5f, map@10: %.5f' % (max_map10, map10)
Explanation: 3. Compare with the coupons actually purchased and compute MAP@10.
End of explanation
output = ['USER_ID_hash,PURCHASED_COUPONS']
for user_id in user_pcoupon_pred:
output.append(user_id + ',' + ' '.join(user_pcoupon_pred[user_id]))
output = '\n'.join(output)
with open(OUTPUTS_DIR + '/random_prediction_valid.csv', 'wb') as fid:
fid.write(output)
Explanation: With random predictions we hardly get anything right.
The maximum attainable MAP@10 varies a lot depending on how many users actually purchase during the period, so it cannot be compared directly with other periods.
Still, since even the highest score at the time of writing is only a little above 0.01, there seems to be plenty of room left to improve the score.
(Note: actually raising the score, i.e. predicting with high accuracy, is very hard.)
4. Create a csv file for submission.
End of explanation
# Get the list of unique prefectures
dbcur.execute(''' SELECT pref_name FROM prefecture_locations ORDER BY pref_name ; ''')
pref_data = []
for row in dbcur.fetchall():
pref_data.append(row)
# Build the user features (user features are shared across training, validation and test).
dbcur.execute('''
SELECT
t1.user_id_hash,
IF(t1.sex_id = 'm', 1, 0),
(t1.age-15)/65,
''' + ', '.join([u'IF(t1.pref_name = "' + p[0] + u'", 1, 0)' for i, p in enumerate(pref_data)]) + '''
FROM user_list AS t1
''')
user_feature = {} # user feature vectors
for row in dbcur.fetchall():
user_feature[row[0]] = row[1:]
Explanation: Exercise
Apply the same approach to the test data, write the results to a csv file, and submit it to the competition.
(Note: you can submit at most 5 times per day, so be careful.)
4. Building a predictive model with a machine learning algorithm
In the previous section we deepened our understanding of the modeling target and the evaluation metric.
Before we consider how strongly each of the elements listed earlier, namely
- user attribute data (gender, age group, region, etc.)
- coupon attribute data (genre, price, region, etc.)
- log data recording which user viewed or purchased which coupon and when
relates to future purchase behavior, let's first quickly build a predictive model with a machine learning algorithm and see what the inside of the algorithm looks like and what its inputs and outputs are.
Here we try to predict which coupons will be purchased in the future with the following setup:
Algorithm: logistic regression
Data: user attribute data and coupon attribute data
The machine learning algorithm
Logistic regression is a very simple algorithm, defined by the following formulas:
$$
v=Wx+b
$$
$$
y \simeq \tilde{y}=\frac{\exp(v)}{\sum_{k=1}^{N}\exp(v_k)}
$$
$x$: feature vector (a vector describing the characteristics of a user and a coupon)
$W$: weight matrix expressing how much each column of the feature vector is weighted (parameter)
$b$: bias term (parameter)
$N$: number of classes (here, purchased or not)
$v$: N-dimensional vector
$\tilde{y}$: probability of each class
$y$: ground-truth vector
Logistic regression weighs each dimension of the given feature vector $x$ and computes the probability $\tilde{y}$ of belonging to each class.
During training, the internal parameters $W$ and $b$ are learned so that the error between $\tilde{y}$ and $y$ is minimized; during prediction, the input $x$ is plugged into the formula and, looking at the output $\tilde{y}$, we take for example the class with the highest probability as the predicted result.
In other words, the inputs during training are $x$ and $y$, while the input during prediction is only $x$.
This is common to many other machine learning algorithms.
scikit-learn, one of the machine learning libraries, provides many machine learning algorithms within a common framework whose internals are treated as a black box, so they can be used easily even without understanding the details.
Here we use the logistic regression provided by scikit-learn.
Data processing (building features/labels)
For simplicity we do not use the log data, and build the feature vector $x$ from the user attribute data and the coupon attribute data only.
When building feature vectors, keep in mind that we should
1. put every dimension on the same scale, and
2. turn values that should not be expressed as continuous values into flags.
User feature vector
Looking at the user_list table in Sequel Pro, the attributes that look easy to use as features are gender, age and location.
Gender and location are not continuous values, so we represent them as flags.
Continuous values such as age are sometimes better turned into flags as well, but here we treat age as a continuous value.
Sorting the age column in Sequel Pro shows that the minimum is 15 and the maximum is 80, so to put the feature on a common scale we subtract 15 from the age and divide by 65.
This kind of scaling, which maps the minimum and maximum onto a fixed range such as [0, 1], is called min-max scaling.
Another option is standard scaling, which transforms the values to have mean 0 and variance 1.
(Note: both are implemented in scikit-learn's preprocessing module, http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing .)
End of explanation
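As a side note, both scalings mentioned above are available in scikit-learn. A minimal illustrative sketch follows (the ages below are made up, and the tutorial itself rescales the age directly inside the SQL query instead):
# Illustrative sketch of min-max scaling and standard scaling with scikit-learn.
from sklearn.preprocessing import MinMaxScaler, StandardScaler
toy_ages = numpy.array([[15.], [22.], [46.], [80.]]) # hypothetical ages
print MinMaxScaler().fit_transform(toy_ages).ravel() # mapped onto [0, 1]
print StandardScaler().fit_transform(toy_ages).ravel() # mean 0, variance 1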
training_start = validation_start - timedelta(days=7) # compute the start of the training period
# Get the list of categories (capsule_text)
dbcur.execute(''' SELECT DISTINCT(capsule_text) FROM coupon_list_train ORDER BY capsule_text;''')
capsule_data = []
for row in dbcur.fetchall():
capsule_data.append(row)
# Get the list of genres
dbcur.execute(''' SELECT DISTINCT(genre_name) FROM coupon_list_train ORDER BY genre_name;''')
genre_data = []
for row in dbcur.fetchall():
genre_data.append(row)
# Get the list of large areas
dbcur.execute(''' SELECT DISTINCT(large_area_name) FROM coupon_list_train ORDER BY large_area_name;''')
larea_data = []
for row in dbcur.fetchall():
larea_data.append(row)
# Get the list of prefectures
dbcur.execute(''' SELECT DISTINCT(ken_name) FROM coupon_list_train ORDER BY ken_name;''')
pref_data = []
for row in dbcur.fetchall():
pref_data.append(row)
# Get the list of small areas
dbcur.execute(''' SELECT DISTINCT(small_area_name) FROM coupon_list_train ORDER BY small_area_name;''')
sarea_data = []
for row in dbcur.fetchall():
sarea_data.append(row)
def get_item_feature(f_date, t_date):
    # Function that builds the coupon features.
    # @f_date: start datetime of the target period
    # @t_date: end datetime of the target period
    # Since this reads the training table, it can only be used to build the training and validation data.
dbcur.execute('''
SELECT
coupon_id_hash,
''' + ', '.join([u'IF(capsule_text = "' + p[0] + u'", 1, 0)' for i, p in enumerate(capsule_data)]) + ''',
''' + ', '.join([u'IF(genre_name = "' + p[0] + u'", 1, 0)' for i, p in enumerate(genre_data)]) + ''',
COALESCE(CAST(usable_date_mon AS SIGNED), 0),
COALESCE(CAST(usable_date_tue AS SIGNED), 0),
COALESCE(CAST(usable_date_wed AS SIGNED), 0),
COALESCE(CAST(usable_date_thu AS SIGNED), 0),
COALESCE(CAST(usable_date_fri AS SIGNED), 0),
COALESCE(CAST(usable_date_sat AS SIGNED), 0),
COALESCE(CAST(usable_date_sun AS SIGNED), 0),
COALESCE(CAST(usable_date_holiday AS SIGNED), 0),
COALESCE(CAST(usable_date_before_holiday AS SIGNED), 0),
''' + ', '.join([u'IF(large_area_name = "' + p[0] + u'", 1, 0)' for i, p in enumerate(larea_data)]) + ''',
''' + ', '.join([u'IF(ken_name = "' + p[0] + u'", 1, 0)' for i, p in enumerate(pref_data)]) + ''',
''' + ', '.join([u'IF(small_area_name = "' + p[0] + u'", 1, 0)' for i, p in enumerate(sarea_data)]) + '''
FROM coupon_list_train
WHERE
NOT (dispend <= %s OR dispfrom > %s)
;
''', (f_date, t_date))
    item_feature = {} # coupon feature vectors
for row in dbcur.fetchall():
item_feature[row[0]] = row[1:]
return item_feature
item_feature_train = get_item_feature(training_start, validation_start) # coupon features for the training period
item_feature_valid = get_item_feature(validation_start, validation_end) # coupon features for the validation period
print 'n_item_train: %d, n_item_valid: %d' % (len(item_feature_train), len(item_feature_valid))
Explanation: Coupon feature vector
The coupon data has many flag columns that look relatively easy to use.
Here, for simplicity, we build the feature vectors using only the flags, and as training coupons we use those that were available for purchase in the 7 days before the validation period.
(Note: in practice, using more data tends to give better accuracy.)
End of explanation
def get_purchased_coupons(f_date, t_date):
    # Get the coupons that were actually purchased.
    # @f_date: start datetime of the target period
    # @t_date: end datetime of the target period
dbcur.execute('''
SELECT user_id_hash, view_coupon_id_hash
FROM coupon_visit_train
WHERE i_date >= %s AND i_date < %s AND purchase_flg = 1
ORDER BY user_id_hash, view_coupon_id_hash
;
''', (f_date, t_date))
    purchased_items = {} # dictionary mapping each user to the set of coupons they purchased
for row in dbcur.fetchall():
if row[0] not in purchased_items:
purchased_items[row[0]] = set([])
purchased_items[row[0]].add(row[1])
return purchased_items
user_pcoupon_train = get_purchased_coupons(training_start, validation_start) # coupons each user actually bought during the training period
user_pcoupon_valid = get_purchased_coupons(validation_start, validation_end) # coupons each user actually bought during the validation period
n_pairs_train = len(user_feature) * len(item_feature_train) # number of users x number of training coupons
n_pairs_valid = len(user_feature) * len(item_feature_valid) # number of users x number of validation coupons
print 'n_train_datasets: %d, n_validation_datasets: %d, n_puser: %d' %(n_pairs_train, n_pairs_valid, len([1 for a in user_pcoupon_train if len(a) > 0]))
Explanation: Assigning user/coupon feature vectors and ground-truth labels
There are countless ways to evaluate the relationship between a user feature vector and a coupon feature vector.
Typical simple methods include concatenating the two vectors side by side, or, in addition to the simple concatenation, also adding the products of their elements (pair-wise features).
The longer the feature vector, the more information it carries, but on the other hand overfitting becomes more likely.
"Overfitting" is the phenomenon where, while training a model on the training data, the model fits the training data too closely and ends up with poor generalization performance on the validation or test data.
Since what we really want is a model that generalizes and predicts well on unseen data, overfitting needs to be prevented.
However, the process of preventing overfitting is somewhat involved, so here we will not worry about it and simply proceed with plain concatenation of the vectors.
End of explanation
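A tiny illustration of the two combination schemes mentioned above, with made-up vectors (illustrative only; the tutorial uses the simple concatenation):
# Illustrative comparison of the two ways of combining a user vector and a coupon vector.
u_toy = numpy.array([1., 0.5]) # hypothetical user features
c_toy = numpy.array([0., 1., 1.]) # hypothetical coupon features
concat_toy = numpy.concatenate([u_toy, c_toy]) # simple concatenation: 2 + 3 = 5 dimensions
pairwise_toy = numpy.outer(u_toy, c_toy).ravel() # all products: 2 * 3 = 6 extra dimensions
print concat_toy
print numpy.concatenate([concat_toy, pairwise_toy]) # concatenation plus pair-wise terms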
# Restrict the users used for training to those who actually purchased a coupon during the training period, and output every combination of those user IDs and coupon IDs.
pairs_train = list(product([k for k in user_pcoupon_train if len(user_pcoupon_train[k]) > 0], item_feature_train.keys()))
print 'n_train_datasets: %d' %(len(pairs_train), )
features_train = [] # features used for training
labels_train = [] # labels used for training
for pair in pairs_train: # for every (user, coupon) pair
    user_id, item_id = pair
    features_train.append(user_feature[user_id] + item_feature_train[item_id]) # simple concatenation
    if user_id in user_pcoupon_train and item_id in user_pcoupon_train[user_id]:
        # purchased
        labels_train.append(1)
    else:
        # not purchased
        labels_train.append(0)
Explanation: Considering every pair would give roughly 10 million rows, which is unlikely to fit in memory.
Let's drop the users who did not purchase anything from the training data.
It also seems better to generate the validation-data pairs on the fly, so that is how we proceed.
End of explanation
model = LogisticRegression() # instantiate the logistic regression model (hyperparameter tuning omitted)
model.fit(features_train, labels_train) # learn from the inputs x and y
purchase_index = numpy.argmax(model.classes_) # get the column index of the class labelled 1 (= purchased)
item_index_to_item_id = sorted(item_feature_valid.keys()) # map coupon indices to coupon IDs
map10 = 0.
for user_id in user_feature: # MAP@10 is the average of AP@10 over users, so compute AP@10 per user
    if user_id not in user_pcoupon_valid: # if the user purchased no coupon, AP@10 is 0, so skip it here
        continue
    feature = []
    for item_id in item_index_to_item_id:
        feature.append(user_feature[user_id] + item_feature_valid[item_id]) # simply concatenate the user and coupon features
    y_proba = model.predict_proba(feature) # compute the purchase probability of each coupon
    y_pred_indices = numpy.argsort(y_proba[:, purchase_index])[-10:][::-1] # indices of the 10 coupons with the highest purchase probability
    y_pred_item_ids = [item_index_to_item_id[i] for i in y_pred_indices] # convert coupon indices to coupon IDs
    map10 += get_ap10(y_pred_item_ids, user_pcoupon_valid[user_id]) # compute AP@10 and add it to MAP@10
map10 /= len(user_feature) # MAP@10 is the average over users, so divide by the number of users
print 'MAP@10: %.5f' % (map10, )
Explanation: Building the predictive model and evaluating its accuracy
Finally, we build the predictive model with a machine learning algorithm.
That said, since the algorithm itself is provided as a black box by the library, once we have come this far it is very easy to implement.
End of explanation
dbcur.execute('''
SELECT
COUNT(*),
SUM(purchase_flg),
COUNT(DISTINCT(view_coupon_id_hash))
FROM
coupon_visit_train
GROUP BY user_id_hash
;
''')
n_view = []
n_purchase = []
n_view_u = []
for row in dbcur.fetchall():
n_view.append(int(row[0]))
n_purchase.append(int(row[1]))
n_view_u.append(int(row[2]))
n_view = numpy.asarray(n_view)
n_purchase = numpy.asarray(n_purchase)
n_view_u = numpy.asarray(n_view_u)
### To see what the user cold-start situation looks like, look at the first 20 counts only.
span = 20
fig = plt.figure(figsize=(18, 8))
ax = fig.add_subplot(2, 3, 1)
ax.hist(n_view, bins=numpy.arange(0, span), cumulative=True)
ax.set_title('page view count distribution')
ax = fig.add_subplot(2, 3, 2)
ax.hist(n_purchase, bins=numpy.arange(0, span), cumulative=True)
ax.set_title('purchase count distribution')
ax = fig.add_subplot(2, 3, 3)
ax.hist(n_view_u, bins=numpy.arange(0, span), cumulative=True)
ax.set_title('unique page view count distribution')
ax = fig.add_subplot(2, 3, 4)
ax.plot(n_view, n_purchase, 'x')
ax.set_title('X=page view count, Y=purchase count')
ax = fig.add_subplot(2, 3, 5)
ax.plot(n_view_u, n_purchase, 'x')
ax.set_title('X=unique page view count, Y=purchase count')
ax = fig.add_subplot(2, 3, 6)
ax.plot(n_view, n_view_u, 'x')
ax.set_title('X=page view count, Y=unique page view count')
plt.show()
## A 3D plot is often not any clearer, so it is better to avoid it.
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(n_view, n_view_u, n_purchase, marker='x')
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
Explanation: This is a considerable improvement over the earlier random prediction.
In this example we simply concatenated the user features and the coupon features, but there are countless possible formats for the features that form the model input.
When using a machine learning algorithm, how you design these features is extremely important.
Other ways to improve the prediction accuracy include:
- choosing the machine learning algorithm,
- tuning the algorithm's hyperparameters,
- doing the preprocessing properly, and
- choosing the objective function correctly.
If you are interested, consider trying them.
Exercise
Apply the same method to the test data, predict the coupons each user will purchase, and submit the result.
Increase the data period used for training beyond 7 days and see how the accuracy changes.
5. Getting an overview of the data and improving the predictive model
TODO: cover more concrete examples.
The ultimate goal of this competition is to predict which coupons each user will purchase in the future.
As we saw above, modeling the characteristics of users and coupons makes it easier to predict the coupons purchased in the future than choosing at random.
So what is related to which coupons a user will purchase in the future?
Capturing such relationships often leads to better results when modeling.
We look for indicators that contribute to predicting the purchase probability in the following steps:
1. Aggregate and visualize the data to check the amount of the important data and the kinds of labels.
2. Form hypotheses about the relationships.
3. Aggregate and visualize the data to verify the hypotheses.
Checking the amount of important data and the kinds of labels
Check the number of users with little log data.
TODO: add more about the cold-start problem.
TODO: add an explanation of GROUP BY and of how this aggregation works.
End of explanation
dbcur.execute('''
SELECT
t1.coupon_id_hash, COUNT(t2.view_coupon_id_hash), COALESCE(SUM(t2.purchase_flg), 0)
FROM coupon_list_test AS t1
LEFT JOIN coupon_visit_train AS t2 ON t1.coupon_id_hash = t2.view_coupon_id_hash
GROUP BY t1.coupon_id_hash
ORDER BY SUM(t2.purchase_flg)
;
''')
view_count = []
purchase_count = []
for row in dbcur.fetchall():
view_count.append(int(row[1]))
purchase_count.append(int(row[2]))
view_count = numpy.asarray(view_count)
purchase_count = numpy.asarray(purchase_count)
plt.figure()
plt.plot(purchase_count, view_count, '.')
plt.show()
Explanation: Check how much view and purchase data exists for each coupon contained in the test data used for the final accuracy evaluation.
Most coupons have no view data, and no coupon has any purchase data, i.e. we are in an item cold-start situation,
so collaborative filtering and its variants are considered unsuitable here.
End of explanation
dbcur.execute('''
SELECT
AVG(same_pref_purchase_cnt),
AVG(same_pref_view_cnt),
AVG(same_pref_purchase_cnt / same_pref_view_cnt),
AVG(diff_pref_purchase_cnt),
AVG(diff_pref_view_cnt),
AVG(diff_pref_purchase_cnt / diff_pref_view_cnt)
FROM (
SELECT
t1.user_id_hash,
SUM(t1.pref_name = t3.ken_name AND purchase_flg = 1) AS same_pref_purchase_cnt,
SUM(t1.pref_name = t3.ken_name) AS same_pref_view_cnt,
SUM(t1.pref_name != t3.ken_name AND purchase_flg = 1) AS diff_pref_purchase_cnt,
SUM(t1.pref_name != t3.ken_name) AS diff_pref_view_cnt
FROM user_list AS t1
LEFT JOIN coupon_visit_train AS t2 ON t1.user_id_hash = t2.user_id_hash
LEFT JOIN coupon_list_train AS t3 ON t2.view_coupon_id_hash = t3.coupon_id_hash
WHERE t1.pref_name != ""
GROUP BY t1.user_id_hash
) AS t1
;
''')
data = None
for row in dbcur.fetchall():
data = row
print 'same_purchase: %.2f, same_view: %.2f, same_rate: %.2f, diff_purchase: %.2f, diff_view: %.2f, diff_rate: %.2f' % (data)
Explanation: Forming hypotheses about the relationships
We form hypotheses about which kinds of users purchase which kinds of coupons.
For this dataset we can form, for example, the following hypotheses:
1. Users are more likely to purchase coupons that target the same region as their own.
2. Female users are more likely than male users to purchase coupons with a high discount rate.
3. Users are more likely to purchase, in the future, coupons of the same genre as the coupon they last purchased.
4. Users (with no past purchases) are more likely to purchase, in the future, coupons of the same genre as the coupon they last viewed.
What you should keep in mind when forming a hypothesis is that it should express concretely which kinds of users purchase which kinds of coupons, at a level of detail from which it is concretely clear how to aggregate the data and verify it.
For example, suppose we form the hypothesis "users are likely to change which coupons they purchase in the future depending on their past behavior."
At this level of abstraction we cannot tell what we should concretely do.
So, even though it may be correct, it is a useless (roughly, valueless) hypothesis in the sense that it does not lead to a next action.
From another angle, the more uncertain a hypothesis is, the more useful it tends to be once it has been verified and settled one way or the other.
When actually analyzing data, this practice of forming hypotheses and starting from the important ones is extremely important.
Skills are, after all, just skills; if you apply them in the wrong place you simply melt away your time, so please keep this in mind.
Aggregating and visualizing the data to verify the hypotheses
Let's verify the hypotheses given as concrete examples one by one.
1. Users are more likely to purchase coupons that target the same region as their own.
This hypothesis has nothing to do with the time axis — it is a purely analytical aggregation — so here we aggregate over the entire training data.
End of explanation
dbcur.execute('''
SELECT
t1.sex_id,
AVG(t1.discount_rate_view),
AVG(t1.discount_rate_purchase)
FROM (
SELECT
t1.user_id_hash,
t1.sex_id,
AVG(100 - t3.price_rate) AS discount_rate_view,
COALESCE(SUM(IF(t2.purchase_flg, 100 - t3.price_rate, 0)) / SUM(t2.purchase_flg), 0) AS discount_rate_purchase
FROM user_list AS t1
LEFT JOIN coupon_visit_train AS t2 ON t1.user_id_hash = t2.user_id_hash
LEFT JOIN coupon_list_train AS t3 ON t2.view_coupon_id_hash = t3.coupon_id_hash
GROUP BY t1.user_id_hash
) AS t1
GROUP BY t1.sex_id
;
''')
data = []
for row in dbcur.fetchall():
row = list(row)
row[1] = float(row[1])
row[2] = float(row[2])
data.append(tuple(row))
for row in data:
print 'sex_id: %s, discount_rate_view: %.2f, discount_rate_purchase: %.2f' % (row)
Explanation: First, we can see that there are more purchases of coupons from a different region than from the user's own region.
Comparing same_rate and diff_rate, the fraction of views that turn into purchases is also slightly larger for different-region coupons than for same-region coupons.
To begin with, the fact that views of different-region coupons dominate already differs from what we had in mind when forming the hypothesis; in any case, the hypothesis we wanted to verify does not appear to be correct.
2. Female users are more likely than male users to purchase coupons with a high discount rate.
This hypothesis also has nothing to do with the time axis — it is a purely analytical aggregation — so again we aggregate over the entire training data.
End of explanation
dbcur.execute('''
SELECT
SUM(purchase_flg)
FROM coupon_visit_train_validation
WHERE purchase_flg = 1
GROUP BY user_id_hash
;
''')
x = []
for row in dbcur.fetchall():
x.append(int(row[0]))
plt.figure()
plt.hist(x, bins=numpy.arange(1, 15))
plt.show()
dbcur.execute('''
SELECT
AVG(t1.same_purchase),
AVG(t1.same_view),
AVG(t1.same_purchase / t1.same_view) AS same_rate,
AVG(t1.diff_purchase),
AVG(t1.diff_view),
AVG(t1.diff_purchase / t1.diff_view) AS diff_rate
FROM (
SELECT
t1.user_id_hash,
SUM(t1.genre_name = t3.genre_name AND t2.purchase_flg = 1) AS same_purchase,
SUM(t1.genre_name = t3.genre_name) AS same_view,
SUM(t1.genre_name != t3.genre_name AND t2.purchase_flg = 1) AS diff_purchase,
SUM(t1.genre_name != t3.genre_name) AS diff_view
FROM (
SELECT
t1.user_id_hash, t1.view_coupon_id_hash, t3.genre_name
FROM coupon_visit_train_training AS t1
LEFT JOIN coupon_visit_train_training AS t2 ON t1.user_id_hash = t2.user_id_hash AND t1.i_date < t2.i_date
LEFT JOIN coupon_list_train AS t3 ON t1.view_coupon_id_hash = t3.coupon_id_hash
WHERE t1.purchase_flg = 1 AND t2.user_id_hash IS NULL
GROUP BY t1.user_id_hash
) AS t1
LEFT JOIN coupon_visit_train_validation AS t2 ON t1.user_id_hash = t2.user_id_hash
LEFT JOIN coupon_list_train AS t3 ON t2.view_coupon_id_hash = t3.coupon_id_hash
LEFT JOIN (
SELECT user_id_hash
FROM coupon_visit_train_validation
WHERE purchase_flg = 1
GROUP BY user_id_hash
) AS t4 ON t1.user_id_hash = t4.user_id_hash
WHERE t4.user_id_hash IS NOT NULL
GROUP BY t1.user_id_hash
) AS t1
;
''')
data = None
for row in dbcur.fetchall():
data = row
print 'same_purchase: %.2f, same_view: %.2f, same_rate: %.2f, diff_purchase: %.2f, diff_view: %.2f, diff_rate: %.2f' % (data)
Explanation: Not much of a difference, it seems...
3. Users are more likely to purchase, in the future, coupons of the same genre as the coupon they last purchased.
This hypothesis does involve the time axis — it is a predictive aggregation — so here we aggregate using the split into training data and validation data.
TODO: what was this aggregation for?
End of explanation |
2,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enums
This notebook is an introduction to Python Enums as introduced in Python 3.4 and subsequently backported to other version of Python.
More details can be found in the library documentation
Step1: Nomenclature
Python has a specific nomenclature for enums.
The class Color is an enumeration (or enum)
The attributes Color.red, Color.green, etc., are enumeration members (or enum members).
The enum members have names and values (the name of Color.red is red, the value of Color.blue is 3, etc.)
Printing and Representing Enums
Enum types have human readable string representations for print and repr
Step2: The type of an enumeration member is the enumeration it belongs to
Step3: Alternative way to create an Enum
There is an alternative way to create and Enum, that matches Python's NamedTuple | Python Code:
from enum import Enum
class MyEnum(Enum):
first = 1
second = 2
third = 3
Explanation: Enums
This notebook is an introduction to Python Enums as introduced in Python 3.4 and subsequently backported to other version of Python.
More details can be found in the library documentation: https://docs.python.org/3.4/library/enum.html
Enumerations are sets of symbolic names bound to unique, constant values.
Within an enumeration, the members can be compared by identity, and the enumeration itself can be iterated over.
A simple example is:
python
from enum import Enum
class Color(Enum):
red = 1
green = 2
blue = 3
Let's walk through the example above. First you import the Enum library with the line:
python
from enum import Enum
Then you subclass Enum to create your own enumerated class with the values listed within the class:
python
class Color(Enum):
red = 1
green = 2
blue = 3
Try it below, create your own Enum.
End of explanation
print(MyEnum.first)
print(repr(MyEnum.first))
Explanation: Nomenclature
Python has a specific nomenclature for enums.
The class Color is an enumeration (or enum)
The attributes Color.red, Color.green, etc., are enumeration members (or enum members).
The enum members have names and values (the name of Color.red is red, the value of Color.blue is 3, etc.)
Printing and Representing Enums
Enum types have human readable string representations for print and repr:
End of explanation
type(MyEnum.first)
Explanation: The type of an enumeration member is the enumeration it belongs to:
End of explanation
SecondEnum = Enum('SecondEnum', 'first, second, third')
print(SecondEnum.first)
Explanation: Alternative way to create an Enum
There is an alternative way to create and Enum, that matches Python's NamedTuple:
python
Colour = Enum('Colour', 'red, green')
Try it below:
End of explanation |
2,763 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I want to convert a 1-dimensional array into a 2-dimensional array by specifying the number of rows in the 2D array. Something that would work like this: | Problem:
import numpy as np
A = np.array([1,2,3,4,5,6])
nrow = 3
B = np.reshape(A, (nrow, -1)) |
2,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topological insulators II/01
Step1: Some more handy simple Fock space operators are defined below. First the total fermion number operator
Step2: And the fermion particle number parity operator
$$\hat{P}=(-1)^{\hat{N}}$$
Step3: Single site s-wave superconductor
$$
\hat{H}=\mu\left(\hat{c}^\dagger_\uparrow\hat{c}_\uparrow+\hat{c}^\dagger_\downarrow\hat{c}_\downarrow\right)
+\Delta\left(\hat{c}^\dagger_\uparrow\hat{c}^\dagger_\downarrow + \hat{c}_\downarrow\hat{c}_\uparrow \right)
$$
Step4: Task
Step5: Kitaev model
$$
\hat{H}=\mu\sum_p\hat{c}^\dagger_p\hat{c}_p+t\left(\sum_p\hat{c}^\dagger_{p+1}\hat{c}_p+ \mathrm{h.c.}\right)+
\Delta\left(\sum_p\hat{c}^\dagger_{p+1}\hat{c}^\dagger_p+ \mathrm{h.c.} \right)
$$
Task
Step6: The Bogoliubov–de Gennes "trick"
One can rewrite a generic superconductor many-body Hamiltonian
$$
\hat{H} = \sum_{\alpha,\beta} \left(
c_\alpha^\dagger h_{\alpha,\beta}
c_{\beta} + \frac{1}{2}c_\alpha^\dagger \Delta_{\alpha,\beta}
c_\beta^\dagger + \frac{1}{2}c_\beta \Delta_{\alpha,\beta}^\ast
c_\alpha \right)
$$
as
$$
\hat{H} = \frac{1}{2}
\begin{pmatrix} c^\dagger & c \end{pmatrix}
\mathcal{H}_{\mathrm{BdG}}
\begin{pmatrix} c \\ c^\dagger \end{pmatrix} + \frac{1}{2} \mathrm{Tr}\, h \, \mathbb{1}_{\mathrm{Fock}}
$$
where we have introduced the Bogoliubov–de Gennes matrix as
$$
\mathcal{H}_{\mathrm{BdG}} = \begin{pmatrix}
h & \Delta \\
-\Delta^\ast & -h^\ast \end{pmatrix}.
$$
The positive eigenvalues of the Bogoliubov–de Gennes matrix correspond to the single particle excitation spectrum of the full many-body Hamiltonian.
Task | Python Code:
def fermion_Fock_matrices(NN=3):
'''
Returns list of 2^NN X 2^NN sparse matrices,
representing fermionic annihilation operators
acting on the Fock space of NN fermions.
'''
l=list(map(lambda x: list(map(int,list(binary_repr(x,NN)))),arange(0,2**NN)))
ll=-(-1)**cumsum(l,axis=1)
AA=(array(l)*array(ll))[:,::-1]
cc=[]
for p in range(NN):
cc.append(scsp.dia_matrix((AA[:,p],array([2**p])),shape=(2**NN,2**NN), dtype='d'))
return cc
Explanation: Topological insulators II/01: Superconductivity and the Kitaev model
Many-body approach
We first construct annihilation operators $\hat{c}_i$ acting on the Fock space of fermions in the binary sequence basis.
End of explanation
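As an illustrative sanity check (assuming the same numpy star-import namespace used by the surrounding cells), the operators built above should satisfy the canonical fermionic anticommutation relations:
# Illustrative check of {c_p, c_q^dagger} = delta_{pq} and {c_p, c_q} = 0 for the operators built above.
cc_chk = fermion_Fock_matrices(3)
ok = True
for p in range(3):
    for q in range(3):
        acom = (cc_chk[p]*cc_chk[q].H + cc_chk[q].H*cc_chk[p]).toarray()
        ok &= allclose(acom, eye(2**3) if p == q else 0)
        ok &= allclose((cc_chk[p]*cc_chk[q] + cc_chk[q]*cc_chk[p]).toarray(), 0)
print(ok)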
def particle_number_Fock_operator(NN=3):
'''
Returns particle number operator in Fock space of NN fermions
'''
return scsp.dia_matrix((list(map(lambda x:bin(x).count("1"),arange(0,2**NN))),[0]),
shape=(2**NN,2**NN))
Explanation: Some more handy simple Fock space operators are defined below. First the total fermion number operator:
$$ \hat{N}=\sum_p \hat{c}^\dagger_p\hat{c}_p$$
End of explanation
def parity_Fock_operator(NN=3):
'''
Returns particle number parity operator in Fock space of NN fermions
'''
return scsp.dia_matrix((list(map(lambda x:1-2*mod(bin(x).count("1"),2),arange(0,2**NN))),[0]),
shape=(2**NN,2**NN))
Explanation: And the fermion particle number parity operator
$$\hat{P}=(-1)^{\hat{N}}$$
End of explanation
def Singe_Site_Superconductor_Fock_Ham(cc,mu,Delta,**kwargs):
'''
Returns Fock space representation of the Hamiltonian
of the single site s-wave superconductor.
'''
H= mu*(cc[0].H*cc[0]+cc[1].H*cc[1]) \
+Delta*(cc[0].H*cc[1].H+cc[1]*cc[0])
return H.todense()
Explanation: Single site s-wave superconductor
$$
\hat{H}=\mu\left(\hat{c}^\dagger_\uparrow\hat{c}_\uparrow+\hat{c}^\dagger_\downarrow\hat{c}_\downarrow\right)
+\Delta\left(\hat{c}^\dagger_\uparrow\hat{c}^\dagger_\downarrow + \hat{c}_\downarrow\hat{c}_\uparrow \right)
$$
End of explanation
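As a small illustrative consistency check (using the helpers defined above and the same pylab-style namespace), the numerical Fock spectrum can be compared with the analytic eigenvalues: the empty/doubly-occupied sector gives $\mu\pm\sqrt{\mu^2+\Delta^2}$ and the two singly-occupied states both sit at energy $\mu$.
# Compare the numerical Fock spectrum with the analytic result for a few values of mu (Delta = 1).
cc_sw = fermion_Fock_matrices(2)
for mu_test in [-2.0, 0.0, 3.0]:
    numeric = eigvalsh(Singe_Site_Superconductor_Fock_Ham(cc_sw, mu_test, 1.0))
    analytic = sort(array([mu_test - sqrt(mu_test**2 + 1.0), mu_test, mu_test,
                           mu_test + sqrt(mu_test**2 + 1.0)]))
    print(mu_test, allclose(numeric, analytic))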
muran=linspace(-5,5,100)
fig=figsize(4,5)
dat=[]
cc=fermion_Fock_matrices(2)
for mu in muran:
dat.append(eigvalsh(Singe_Site_Superconductor_Fock_Ham(cc,mu,1.)))
plot(muran,dat,lw=3);
#stuff below is only to make figure nice
xticks(fontsize=16)
yticks(fontsize=16)
grid()
xlabel(r'$\mu/\Delta$',fontsize=16);
ylabel(r'$E_n$',fontsize=16);
for mu in [-5,0,5]:
val,vec=eigh(Singe_Site_Superconductor_Fock_Ham(fermion_Fock_matrices(2),mu,1.))
print (val[0])
print (vec[:,0])
Explanation: Task: Calculate the full many-body spectrum as a function of $\mu/\Delta$, and find the many-body ground state for
$\mu/\Delta=-5,0,5$
End of explanation
def Kitaev_wire_BDG_Ham_Fock_Ham(cc,t,Delta,mu,**kwargs):
'''
Builds Kitaev wire Hamiltonian in Fock space.
'''
H= t*sum(cc[p+1].H*cc[p]+cc[p].H*cc[p+1] for p in range(len(cc)-1)) \
+Delta*sum(cc[p+1]*cc[p]+cc[p].H*cc[p+1].H for p in range(len(cc)-1)) \
+mu*sum(cc[p].H*cc[p] for p in range(len(cc)))
return H
def playFock(N=10,Delta=0.2):
cc=fermion_Fock_matrices(N)
dat=[]
for u in uran:
val=scspl.eigsh(Kitaev_wire_BDG_Ham_Fock_Ham(cc,1,Delta,u),return_eigenvectors=False,k=50,which='SA')
dat.append(sort((val-val[-1])[:-1]))
plot(uran,dat,'r-',lw=2);
plot(uran,array(dat)[:,0],'r-',lw=2,label=r'$E^{\mathrm{Fock}}_n-E^{\mathrm{Fock}}_{GS}$');
#This lets you play with the Fock spectrum
uran=linspace(-3,3,50)
interact(playFock,N=(6,7),Delta=(0,2,0.1));
xlabel(r'$\mu$',fontsize=16);
ylabel(r'$E^{\mathrm{Fock}}_n-E^{\mathrm{Fock}}_0$',fontsize=16);
grid();
Explanation: Kitaev model
$$
\hat{H}=\mu\sum_p\hat{c}^\dagger_p\hat{c}_p+t\left(\sum_p\hat{c}^\dagger_{p+1}\hat{c}_p+ \mathrm{h.c.}\right)+
\Delta\left(\sum_p\hat{c}^\dagger_{p+1}\hat{c}^\dagger_p+ \mathrm{h.c.} \right)
$$
Task: Write a routine that builds up the Kitaev model in Fock space and calculates the spectrum.
End of explanation
#Define the Pauli matrices unit matrix and the zero matrix
s0=matrix([[1,0],[0,1]])
s1=matrix([[0,1],[1,0]])
s2=matrix([[0,-1j],[1j,0]])
s3=matrix([[1,0],[0,-1]])
z2=zeros_like(s0);
def Kitaev_wire_BDG_Ham(N,mu,t,Delta):
idL=eye(N); # identity matrix of dimension L
odL=diag(ones(N-1),1);# upper off diagonal matrix with ones of size L
U=mu*s3
T=-t*s3+1.0j*Delta*s2
return kron(idL,U)+kron(odL,T)+kron(odL,T).H
def playBdG(N=10,Delta=0.2):
dat=[]
for u in uran:
dat.append(eigvalsh(Kitaev_wire_BDG_Ham(N,u,1,Delta)))
plot(uran,dat,'k',lw=6);
plot(uran,array(dat)[:,0],'k',lw=6,label=r'$E^{\mathrm{BdG}}_n$')
#This lets you play with the BdG spectrum
uran=linspace(-3,3,100)
interact(playBdG,N=(3,20),Delta=(0,2,0.1));
xlabel(r'$\mu$',fontsize=16);
ylabel(r'$E^{\mathrm{BdG}}_n$',fontsize=16);
grid();
figsize(4,5)
uran=linspace(0,4,50)
NN=6;Delta=0.9;
playBdG(NN,Delta)
playFock(NN,Delta)
ylim(-6,6)
xticks(linspace(min(uran),max(uran),5),fontsize=16)
yticks(linspace(-6,6,5),fontsize=16)
xlabel(r'$\mu$',fontsize=16)
grid()
legend(fontsize=16,loc='lower right');
Explanation: The Bogoliubov–de Gennes "trick"
One can rewrite a generic superconductor many-body Hamiltonian
$$
\hat{H} = \sum_{\alpha,\beta} \left(
c_\alpha^\dagger h_{\alpha,\beta}
c_{\beta} + \frac{1}{2}c_\alpha^\dagger \Delta_{\alpha,\beta}
c_\beta^\dagger + \frac{1}{2}c_\beta \Delta_{\alpha,\beta}^\ast
c_\alpha \right)
$$
as
$$
\hat{H} = \frac{1}{2}
\begin{pmatrix} c^\dagger & c \end{pmatrix}
\mathcal{H}_{\mathrm{BdG}}
\begin{pmatrix} c \\ c^\dagger \end{pmatrix} + \frac{1}{2} \mathrm{Tr}\, h \, \mathbb{1}_{\mathrm{Fock}}
$$
where we have introduced the Bogoliubov–de Gennes matrix as
$$
\mathcal{H}_{\mathrm{BdG}} = \begin{pmatrix}
h & \Delta \\
-\Delta^\ast & -h^\ast \end{pmatrix}.
$$
The positive eigenvalues of the Bogoliubov–de Gennes matrix correspond to the single particle excitation spectrum of the full many-body Hamiltonian.
Task: Write a routine that calculates the Bogoliubov–de Gennes matrix of the Kitaev model, calculate the spectrum and compare the spectrum to the excitation spectrum of the many-body Hamiltonian.
End of explanation |
2,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Set Up
We have again provided code to do the basic loading, review and model-building. Run the cell below to set everything up
Step1: The first few questions require examining the distribution of effects for each feature, rather than just an average effect for each feature. Run the following cell for a summary plot of the shap_values for readmission. It will take about 20 seconds to run.
Step2: Question 1
Which of the following features has a bigger range of effects on predictions (i.e. larger difference between most positive and most negative effect)
- diag_1_428 or
- payer_code_?
Step3: Uncomment the line below to see the solution and explanation.
Step4: Question 2
Do you believe the range of effects sizes (distance between smallest effect and largest effect) is a good indication of which feature will have a higher permutation importance? Why or why not?
If the range of effect sizes measures something different from permutation importance
Step5: Question 3
Both diag_1_428 and payer_code_? are binary variables, taking values of 0 or 1.
From the graph, which do you think would typically have a bigger impact on predicted readmission risk
Step6: For a solution and explanation, uncomment the line below.
Step7: Question 4
Some features (like number_inpatient) have reasonably clear separation between the blue and pink dots. Other variables like num_lab_procedures have blue and pink dots jumbled together, even though the SHAP values (or impacts on prediction) aren't all 0.
What do you think you learn from the fact that num_lab_procedures has blue and pink dots jumbled together? Once you have your answer, run the line below to verify your solution.
Step8: Question 5
Consider the following SHAP contribution dependence plot.
The x-axis shows feature_of_interest and the points are colored based on other_feature.
Is there an interaction between feature_of_interest and other_feature?
If so, does feature_of_interest have a more positive impact on predictions when other_feature is high or when other_feature is low?
Run the following code when you are ready for the answer.
Step9: Question 6
Review the summary plot for the readmission data by running the following cell
Step10: Both num_medications and num_lab_procedures share that jumbling of pink and blue dots.
Aside from num_medications having effects of greater magnitude (both more positive and more negative), it's hard to see a meaningful difference between how these two features affect readmission risk. Create the SHAP dependence contribution plots for each variable, and describe what you think is different between how these two variables affect predictions.
As a reminder, here is the code you previously saw to create this type of plot.
shap.dependence_plot(feature_of_interest, shap_values[1], val_X)
And recall that your validation data is called small_val_X.
Step11: Then run the following line to compare your observations from this graph to the solution. | Python Code:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import shap
# Environment Set-Up for feedback system.
from learntools.core import binder
binder.bind(globals())
from learntools.ml_explainability.ex5 import *
print("Setup Complete")
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
data = pd.read_csv('../input/hospital-readmissions/train.csv')
y = data.readmitted
base_features = ['number_inpatient', 'num_medications', 'number_diagnoses', 'num_lab_procedures',
'num_procedures', 'time_in_hospital', 'number_outpatient', 'number_emergency',
'gender_Female', 'payer_code_?', 'medical_specialty_?', 'diag_1_428', 'diag_1_414',
'diabetesMed_Yes', 'A1Cresult_None']
# Some versions of shap package error when mixing bools and numerics
X = data[base_features].astype(float)
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# For speed, we will calculate shap values on smaller subset of the validation data
small_val_X = val_X.iloc[:150]
my_model = RandomForestClassifier(n_estimators=30, random_state=1).fit(train_X, train_y)
data.describe()
Explanation: Set Up
We have again provided code to do the basic loading, review and model-building. Run the cell below to set everything up:
End of explanation
explainer = shap.TreeExplainer(my_model)
shap_values = explainer.shap_values(small_val_X)
shap.summary_plot(shap_values[1], small_val_X)
Explanation: The first few questions require examining the distribution of effects for each feature, rather than just an average effect for each feature. Run the following cell for a summary plot of the shap_values for readmission. It will take about 20 seconds to run.
End of explanation
# set following variable to 'diag_1_428' or 'payer_code_?'
feature_with_bigger_range_of_effects = ____
# Check your answer
q_1.check()
Explanation: Question 1
Which of the following features has a bigger range of effects on predictions (i.e. larger difference between most positive and most negative effect)
- diag_1_428 or
- payer_code_?
End of explanation
# q_1.solution()
Explanation: Uncomment the line below to see the solution and explanation.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
Explanation: Question 2
Do you believe the range of effect sizes (distance between smallest effect and largest effect) is a good indication of which feature will have a higher permutation importance? Why or why not?
If the range of effect sizes measures something different from permutation importance: which is a better answer for the question "Which of these two features does the model say is more important for us to understand when discussing readmission risks in the population?"
Run the following line after you've decided your answer.
End of explanation
shap.summary_plot(shap_values[1], small_val_X)
# Set following var to "diag_1_428" if changing it to 1 has bigger effect. Else set it to 'payer_code_?'
bigger_effect_when_changed = ____
# Check your answer
q_3.check()
Explanation: Question 3
Both diag_1_428 and payer_code_? are binary variables, taking values of 0 or 1.
From the graph, which do you think would typically have a bigger impact on predicted readmission risk:
- Changing diag_1_428 from 0 to 1
- Changing payer_code_? from 0 to 1
To save you scrolling, we have included a cell below to plot the graph again (this one runs quickly).
End of explanation
# q_3.solution()
Explanation: For a solution and explanation, uncomment the line below.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_4.solution()
Explanation: Question 4
Some features (like number_inpatient) have reasonably clear separation between the blue and pink dots. Other variables like num_lab_procedures have blue and pink dots jumbled together, even though the SHAP values (or impacts on prediction) aren't all 0.
What do you think you learn from the fact that num_lab_procedures has blue and pink dots jumbled together? Once you have your answer, run the line below to verify your solution.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_5.solution()
Explanation: Question 5
Consider the following SHAP contribution dependence plot.
The x-axis shows feature_of_interest and the points are colored based on other_feature.
Is there an interaction between feature_of_interest and other_feature?
If so, does feature_of_interest have a more positive impact on predictions when other_feature is high or when other_feature is low?
Run the following code when you are ready for the answer.
End of explanation
shap.summary_plot(shap_values[1], small_val_X)
Explanation: Question 6
Review the summary plot for the readmission data by running the following cell:
End of explanation
# Your code here
____
Explanation: Both num_medications and num_lab_procedures share that jumbling of pink and blue dots.
Aside from num_medications having effects of greater magnitude (both more positive and more negative), it's hard to see a meaningful difference between how these two features affect readmission risk. Create the SHAP dependence contribution plots for each variable, and describe what you think is different between how these two variables affect predictions.
As a reminder, here is the code you previously saw to create this type of plot.
shap.dependence_plot(feature_of_interest, shap_values[1], val_X)
And recall that your validation data is called small_val_X.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_6.solution()
Explanation: Then run the following line to compare your observations from this graph to the solution.
End of explanation |
2,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: View Average Ages By City
Step3: View Max Age By City
Step4: View Count Of Criminals By City
Step5: View Total Age By City | Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
Explanation: Title: Calculate Counts, Sums, Max, and Averages
Slug: sums_counts_max_averages
Summary: Calculate Counts, Sums, and Averages in SQL.
Date: 2017-01-16 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL, so your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Petaluma', 0);
Explanation: Create Data
End of explanation
%%sql
-- Select city and average age,
SELECT city, avg(age)
-- from the table 'criminals',
FROM criminals
-- after grouping by city
GROUP BY city
Explanation: View Average Ages By City
End of explanation
%%sql
-- Select city and maximum age,
SELECT city, max(age)
-- from the table 'criminals',
FROM criminals
-- after grouping by city
GROUP BY city
Explanation: View Max Age By City
End of explanation
%%sql
-- Select city and the count of criminals,
SELECT city, count(name)
-- from the table 'criminals',
FROM criminals
-- after grouping by city
GROUP BY city
Explanation: View Count Of Criminals By City
End of explanation
%%sql
-- Select city and total age,
SELECT city, total(age)
-- from the table 'criminals',
FROM criminals
-- after grouping by city
GROUP BY city
Explanation: View Total Age By City
End of explanation |
2,767 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0. | Problem:
import tensorflow as tf
import numpy as np
np.random.seed(10)
a = tf.constant(np.random.rand(50, 100, 1, 512))
def g(a):
return tf.squeeze(a)
result = g(a.__copy__()) |
2,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detection of meteor scatter pings in GRAVES recording
This notebook shows an algorithm for the detection of meteor scatter pings in a recording of GRAVES done on 2018-08-11, during the Perseids meteor shower.
Step1: The recording data we load is already preprocessed as FFT (waterfall) data. The data is power spectral density in dB units. The frequency resolution is 4kHz/256 = 15.625Hz and the time resolution is 256/4kHz = 64ms.
Step2: First we plot the average power spectral density. Note the GRAVES pings on the centre of the graph.
Step3: Our detection algorithm is based on the calculation of SNR. The signal power is computed over a certain number of FFT bins centred on the GRAVES frequency. We zoom in the graph to validate our selection.
Step4: To avoid the interference around bin 50, we measure signal plus noise over two disjoint intervals of bins.
Step5: We now calculate the SNR in dB units usin the signal bins and noise bins chosen above.
Step6: We plot the SNR versus time. We see strong spikes corresponding to meteor scatter pings.
Step7: Pings will be detected according as to whether the SNR is above a certain threshold. For chosing the threshold, it is useful to have a look the distribution of the SNR.
Step8: To chose the threshold we set a desired probability of false acquisition and obtain the threshold from there. In this case the null hypothesis corresponds to no meteor scatter signal being present. Thus, under the null hypothesis we assume that all the signal and noise bins are independent random variables whose distribution is a chi-squared with 2 degrees of freedom.
We denote by $n$ the number of noise bins and by $k$ the number of signal bins. Then the noise and signal power are distributed as chi-squared distributions with $2n$ and $2k$ degrees of freedom respectively.
The SNR is the quotient of two chi-squared distributions, and so it is distributed as $k/n$ times an F-distribution with parameters $2k$ and $2n$.
Step9: Since there are roughly $10^5$ samples, we choose a probability of false acqusition of $10^{-7}$.
Step10: We now get the number of detections.
Step11: Since a single ping is detected in many samples, we need to apply a clustering algorithm to separate individual pings. Also, since not all the samples in a ping are above the threshold, we apply a lower threshold to extend and cluster detection over neighbouring samples.
Step12: The clustering algorithm works by setting a certain allowed jump and clustering all detections that can be reached doing jumps of size smaller than this maximum jump.
Step13: Now we plot each of the detections to a file. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal
import scipy.stats
import matplotlib.patches
Explanation: Detection of meteor scatter pings in GRAVES recording
This notebook shows an algorithm for the detection of meteor scatter pings in a recording of GRAVES done on 2018-08-11, during the Perseids meteor shower.
End of explanation
waterfall = np.load('/mnt/perseids2018/data.npz')['waterfall']
waterfall_linear = 10**(0.1*waterfall)
waterfall.shape
Explanation: The recording data we load is already preprocessed as FFT (waterfall) data. The data is power spectral density in dB units. The frequency resolution is 4kHz/256 = 15.625Hz and the time resolution is 256/4kHz = 64ms.
End of explanation
average_psd = 10*np.log10(np.average(waterfall_linear, axis = 0))
plt.plot(average_psd)
plt.title('Average power spectral density')
plt.ylabel('Power spectral density (dB)')
plt.xlabel('FFT bin')
plt.ylim([-120,-108]);
Explanation: First we plot the average power spectral density. Note the GRAVES pings on the centre of the graph.
End of explanation
span = 10
centre = waterfall.shape[1]//2 - 3
signal_bins = np.arange(centre-span, centre+span)
plt.plot(signal_bins, average_psd[signal_bins])
plt.title('Average power spectral density')
plt.ylabel('Power spectral density (dB)')
plt.xlabel('FFT bin')
plt.ylim([-120,-108]);
Explanation: Our detection algorithm is based on the calculation of SNR. The signal power is computed over a certain number of FFT bins centred on the GRAVES frequency. We zoom in the graph to validate our selection.
End of explanation
noise_bins_left = np.arange(12,39)
noise_bins_right = np.arange(57,246)
noise_bins = np.concatenate((noise_bins_left, noise_bins_right))
plt.plot(noise_bins_left, average_psd[noise_bins_left])
plt.plot(noise_bins_right, average_psd[noise_bins_right])
plt.title('Average power spectral density')
plt.ylabel('Power spectral density (dB)')
plt.xlabel('FFT bin')
plt.ylim([-120,-108]);
Explanation: To avoid the interference around bin 50, we measure signal plus noise over two disjoint intervals of bins.
End of explanation
signal = np.sum(waterfall_linear[:,signal_bins], axis=1)
signal_plus_noise = np.sum(waterfall_linear[:,noise_bins], axis=1)
snr = 10*np.log10(signal/(signal_plus_noise-signal))
Explanation: We now calculate the SNR in dB units using the signal bins and noise bins chosen above.
End of explanation
time = np.arange(waterfall.shape[0]) * 256 / 4e3
plt.plot(time, snr)
plt.title('SNR')
plt.xlabel('Time (s)')
plt.ylabel('SNR (dB)');
Explanation: We plot the SNR versus time. We see strong spikes corresponding to meteor scatter pings.
End of explanation
plt.hist(snr, bins=1000)
plt.yscale('log')
plt.title('SNR histogram')
plt.ylabel('Number of samples')
plt.xlabel('SNR (dB)');
Explanation: Pings will be detected according to whether the SNR is above a certain threshold. For choosing the threshold, it is useful to have a look at the distribution of the SNR.
End of explanation
k = signal_bins.size
n = noise_bins.size - signal_bins.size
def threshold(pfa):
return 10*np.log10(k/n*scipy.stats.f.ppf(1-pfa, dfn=2*k, dfd=2*n))
pfa = np.logspace(-9,-1)
plt.semilogx(pfa, threshold(pfa))
plt.title('Probability of false aquisition and threshold')
plt.ylabel('Threshold (dB)')
plt.xlabel('Probability of false acquisition');
Explanation: To choose the threshold we set a desired probability of false acquisition and obtain the threshold from there. In this case the null hypothesis corresponds to no meteor scatter signal being present. Thus, under the null hypothesis we assume that all the signal and noise bins are independent random variables whose distribution is a chi-squared with 2 degrees of freedom.
We denote by $n$ the number of noise bins and by $k$ the number of signal bins. Then the noise and signal power are distributed as chi-squared distributions with $2n$ and $2k$ degrees of freedom respectively.
The SNR is the quotient of two chi-squared distributions, and so it is distributed as $k/n$ times an F-distribution with parameters $2k$ and $2n$.
End of explanation
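As an optional sanity check of this derivation, the null distribution can also be simulated directly (an illustrative sketch; the number of trials is an arbitrary choice):
# Monte Carlo check of the null distribution of the SNR. Under the null hypothesis the signal
# power is chi-squared with 2k degrees of freedom and the noise power is chi-squared with 2n
# degrees of freedom, so the SNR should follow (k/n) times an F(2k, 2n) distribution.
np.random.seed(42)
trials = 100000
snr_null = 10*np.log10(np.random.chisquare(2*k, trials)/np.random.chisquare(2*n, trials))
for pfa_check in [1e-2, 1e-3, 1e-4]:
    # empirical (1 - pfa) quantile of the simulated SNR versus the analytic threshold
    print(np.percentile(snr_null, 100*(1-pfa_check)), threshold(pfa_check))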
alpha = threshold(1e-6)
alpha
Explanation: Since there are roughly $10^5$ samples, we choose a probability of false acquisition of $10^{-7}$.
End of explanation
detections = np.where(snr > alpha)[0]
detections.size
Explanation: We now get the number of detections.
End of explanation
beta = alpha - 1
detections2 = np.where(snr > beta)[0]
detections2.size
Explanation: Since a single ping is detected in many samples, we need to apply a clustering algorithm to separate individual pings. Also, since not all the samples in a ping are above the threshold, we apply a lower threshold to extend and cluster detection over neighbouring samples.
End of explanation
cluster_jump_seconds = 2
cluster_jump = cluster_jump_seconds * 4e3 / 250
marks = list()
right_mark = -np.inf
while np.any(detections > right_mark):
left_mark = detections[detections > right_mark][0]
right_mark = left_mark
while np.any(detections2[detections2> right_mark]) and \
detections2[detections2 > right_mark][0] < right_mark + cluster_jump:
right_mark = detections2[detections2 > right_mark][0]
marks.append((left_mark, right_mark))
len(marks)
Explanation: The clustering algorithm works by setting a certain allowed jump and clustering all detections that can be reached doing jumps of size smaller than this maximum jump.
End of explanation
margin_seconds = 5
margin = margin_seconds * 4e3 / 250
for j,m in enumerate(marks):
l = m[0]
r = m[1]
start = int(np.clip(l - margin, 0, waterfall.shape[0]-1))
end = int(np.clip(r + margin, 0, waterfall.shape[0]-1))
plt.imsave('/tmp/ping_{:0{size}d}'.format(j, size=int(np.ceil(np.log10(len(marks))))),\
waterfall[start:end,::-1].T, vmin = -120, vmax = -90);
Explanation: Now we plot each of the detections to a file.
End of explanation |
2,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
It seems that the DF9NP GPSDO lost lock shortly after 2019-11-20T04
Step1: RMS phase difference
Step2: Recompute Allan deviations (takes several minutes) | Python Code:
data = load_file('gpsdo_phase_2019-11-17T21:55:29.989819.f32').sel(time = slice('2019-11-17T21:55:31', '2019-11-20T04:57:30'))
(data.coords['time'][-1] - data.coords['time'][0]).astype('float')*1e-9
residual_freq = np.polyfit((data.coords['time'] - data.coords['time'][0]).astype('float') * 1e-9, data['phase'], 1)[0]/(2*np.pi)
residual_freq
f_obs = 10e6
plt.figure(figsize = (12,6), facecolor = 'w')
plt.plot(data.coords['time'][::100], scipy.signal.detrend(data['phase'][::100]/(2*np.pi*f_obs)*1e9))
plt.title('Phase difference (linear trend removed)')
plt.ylabel('Phase difference (ns)')
plt.xlabel('UTC time')
plt.legend(['DF9NP -- Vectron MD-011 (10MHz)']);
Explanation: It seems that the DF9NP GPSDO lost lock shortly after 2019-11-20T04:57:30. We exclude the measurements after this moment.
End of explanation
np.std(scipy.signal.detrend(data['phase'][::100]/(2*np.pi*f_obs)*1e9))
def adev(series, skip, freq = 10e9, overlapping = False):
x = series.values/(2*np.pi*freq)
tau = skip / obs_rate
if overlapping:
y = x[:-2*skip] - 2*x[skip:-skip] + x[2*skip:]
else:
z = x[:x.size//skip*skip].reshape((-1,skip))[:,0]
y = z[:-2] - 2*z[1:-1] + z[2:]
return np.sqrt(0.5/tau**2*np.average(y**2))
def get_skips(n):
if n <= 0:
return np.array([], dtype = 'int')
a = int(np.log10(n))
step = max(10**(a-2), 1)
return np.concatenate((get_skips(10**a - 1) , np.arange(10**a, n+1, step)))
def compute_adev(data, overlapping = False):
skips = get_skips(data.coords['time'].size//2)
taus = skips / obs_rate
adevs = [adev(data['phase'], skip, f_obs, overlapping) for skip in skips]
return xr.Dataset({'adev' : ('tau', adevs)}, coords = {'tau' : taus})
Explanation: RMS phase difference:
End of explanation
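For reference, the adev() helper above implements the standard Allan deviation estimator for phase data $x$, with $m$ the number of samples per averaging time $\tau$:
$$
\sigma_y(\tau) = \sqrt{\frac{1}{2\tau^2}\left\langle \left(x_{i+2m} - 2x_{i+m} + x_i\right)^2 \right\rangle}
$$
The overlapping variant averages over every starting index $i$, while the non-overlapping variant keeps only every $m$-th sample.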
# adevs = compute_adev(data, overlapping = True)
# adevs.to_netcdf('adevs_df9np_vectron.nc')
adevs = xr.open_dataset('adevs_df9np_vectron.nc')
adevs_qo100 = xr.open_dataset('adevs_evening2_qo100.nc')
def plot_adev(a, label):
plt.loglog(a.coords['tau'], a['adev'], label = f'{label}')
plt.figure(figsize = (12,6), facecolor = 'w')
plot_adev(adevs, 'DF9NP -- Vectron MD-011 (10MHz)')
plt.loglog(adevs_qo100.coords['tau'], adevs_qo100['CW-BPSK'], label = 'DF9NP -- Bochum (QO100, 2.4GHz)')
plt.xlabel('$\\tau$ (s)')
plt.ylabel('$\\sigma(\\tau)$')
plt.legend()
plt.grid(which = 'both')
plt.title('Allan deviation');
Explanation: Recompute Allan deviations (takes several minutes)
End of explanation |
2,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoencoder
This notebook demonstrates the invocation of the SystemML autoencoder script, and alternative ways of passing in/out data.
This notebook is supported with SystemML 0.14.0 and above.
Step1: SystemML Read/Write data from local file system
Step3: Generate Data and write out to file.
Step4: Alternatively to passing in/out file names, use Python variables. | Python Code:
!pip show systemml
import pandas as pd
from systemml import MLContext, dml
ml = MLContext(sc)
print(ml.info())
sc.version
Explanation: Autoencoder
This notebook demonstrates the invocation of the SystemML autoencoder script, and alternative ways of passing in/out data.
This notebook is supported with SystemML 0.14.0 and above.
End of explanation
FsPath = "/tmp/data/"
inp = FsPath + "Input/"
outp = FsPath + "Output/"
Explanation: SystemML Read/Write data from local file system
End of explanation
import numpy as np
X_pd = pd.DataFrame(np.arange(1,2001, dtype=np.float)).values.reshape(100,20)
# X_pd = pd.DataFrame(range(1, 2001,1),dtype=float).values.reshape(100,20)
script = """
write(X, $Xfile)
"""
prog = dml(script).input(X=X_pd).input(**{"$Xfile":inp+"X.csv"})
ml.execute(prog)
!ls -l /tmp/data/Input
autoencoderURL = "https://raw.githubusercontent.com/apache/systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
prog = dml(autoencoderURL).input(**{"$X":inp+"X.csv"}) \
.input(**{"$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5 \
, "$W1_out":outp+"W1_out", "$b1_out":outp+"b1_out" \
, "$W2_out":outp+"W2_out", "$b2_out":outp+"b2_out" \
, "$W3_out":outp+"W3_out", "$b3_out":outp+"b3_out" \
, "$W4_out":outp+"W4_out", "$b4_out":outp+"b4_out" \
}).output(*rets)
iter, num_iters_per_epoch, beg, end, o = ml.execute(prog).get(*rets)
print (iter, num_iters_per_epoch, beg, end, o)
!ls -l /tmp/data/Output
Explanation: Generate Data and write out to file.
End of explanation
autoencoderURL = "https://raw.githubusercontent.com/apache/systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
rets2 = ("W1", "b1", "W2", "b2", "W3", "b3", "W4", "b4")
prog = dml(autoencoderURL).input(X=X_pd) \
.input(**{ "$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5}) \
.output(*rets) \
.output(*rets2)
result = ml.execute(prog)
iter, num_iters_per_epoch, beg, end, o = result.get(*rets)
W1, b1, W2, b2, W3, b3, W4, b4 = result.get(*rets2)
print (iter, num_iters_per_epoch, beg, end, o)
Explanation: Alternatively to passing in/out file names, use Python variables.
End of explanation |
2,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
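A compact, purely illustrative sketch of that wiring in TensorFlow 1.x (toy vocabulary, embedding and cell sizes; the real placeholders, dropout and training loop are built step by step in the code that follows):
import tensorflow as tf
# Illustrative wiring only: word ids -> embedding -> LSTM -> last output -> sigmoid unit.
inputs_ = tf.placeholder(tf.int32, [None, None])                 # batch of word-id sequences
embedding = tf.Variable(tf.random_uniform((10000, 300), -1, 1))  # toy vocab and embedding size
embed = tf.nn.embedding_lookup(embedding, inputs_)               # [batch, seq_len, 300]
lstm = tf.contrib.rnn.BasicLSTMCell(256)
outputs, final_state = tf.nn.dynamic_rnn(lstm, embed, dtype=tf.float32)
prediction = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)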
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise
Step5: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step8: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step10: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step11: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step12: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step13: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and the training label.
Step14: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step15: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step16: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step17: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
Explanation: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
End of explanation
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 100
learning_rate = 0.001
tf.reset_default_graph()
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
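As a quick illustrative check (not part of the original notebook), you can pull a single batch and confirm the shapes before training:
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)   # roughly (100, 200) and (100,)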
End of explanation
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
2,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[MSE-01] Import the modules and set the random seed.
Step1: [MSE-02] Prepare the MNIST dataset.
Step2: [MSE-03] Prepare the formula that computes the probability p via the softmax function.
Step3: [MSE-04] Prepare the loss function loss and the training algorithm train_step.
Step4: [MSE-05] Define the accuracy metric accuracy.
Step5: [MSE-06] Prepare a session and initialize the Variables.
Step6: [MSE-07] Repeat the parameter optimization 2000 times.
In each iteration, gradient descent is applied using a batch of 100 data points taken from the training set.
In the end, an accuracy of about 92% is obtained on the test set.
Step7: [MSE-08] Using the parameters at this point, display the predictions for the test set.
ここでは、「0」〜「9」の数字に対して、正解と不正解の例を3個ずつ表示します。 | Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
np.random.seed(20160604)
Explanation: [MSE-01] Import the modules and set the random seed.
End of explanation
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
Explanation: [MSE-02] Prepare the MNIST dataset.
End of explanation
x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.zeros([784, 10]))
w0 = tf.Variable(tf.zeros([10]))
f = tf.matmul(x, w) + w0
p = tf.nn.softmax(f)
Explanation: [MSE-03] Prepare the formula that computes the probability p via the softmax function.
End of explanation
t = tf.placeholder(tf.float32, [None, 10])
loss = -tf.reduce_sum(t * tf.log(p))
train_step = tf.train.AdamOptimizer().minimize(loss)
Explanation: [MSE-04] Prepare the loss function loss and the training algorithm train_step.
End of explanation
correct_prediction = tf.equal(tf.argmax(p, 1), tf.argmax(t, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: [MSE-05] Define the accuracy metric accuracy.
End of explanation
sess = tf.Session()
sess.run(tf.initialize_all_variables())
Explanation: [MSE-06] Prepare a session and initialize the Variables.
End of explanation
i = 0
for _ in range(2000):
i += 1
batch_xs, batch_ts = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, t: batch_ts})
if i % 100 == 0:
loss_val, acc_val = sess.run([loss, accuracy],
feed_dict={x:mnist.test.images, t: mnist.test.labels})
print ('Step: %d, Loss: %f, Accuracy: %f'
% (i, loss_val, acc_val))
Explanation: [MSE-07] Repeat the parameter optimization 2000 times.
In each iteration, gradient descent is applied using a batch of 100 data points taken from the training set.
In the end, an accuracy of about 92% is obtained on the test set.
End of explanation
images, labels = mnist.test.images, mnist.test.labels
p_val = sess.run(p, feed_dict={x:images, t: labels})
fig = plt.figure(figsize=(8,15))
for i in range(10):
c = 1
for (image, label, pred) in zip(images, labels, p_val):
prediction, actual = np.argmax(pred), np.argmax(label)
if prediction != i:
continue
if (c < 4 and i == actual) or (c >= 4 and i != actual):
subplot = fig.add_subplot(10,6,i*6+c)
subplot.set_xticks([])
subplot.set_yticks([])
subplot.set_title('%d / %d' % (prediction, actual))
subplot.imshow(image.reshape((28,28)), vmin=0, vmax=1,
cmap=plt.cm.gray_r, interpolation="nearest")
c += 1
if c > 6:
break
Explanation: [MSE-08] Using the parameters at this point, display the predictions for the test set.
Here, three correct and three incorrect examples are shown for each of the digits 0 to 9.
End of explanation |
2,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HoloViews is designed to be both highly customizable, allowing you to control how your visualizations appear, but also to enforce a strong separation between your data (with any semantically associated metadata, like type and label information) and all options related purely to visualization. This separation allows HoloViews objects to be generated easily by external programs, without giving them a dependency on any plotting or windowing libraries. It also helps make it completely clear which parts of your code deal with the actual data, and which are just about displaying it nicely, which becomes very important for complex visualizations that become more complicated than your data itself.
To achieve this separation, HoloViews stores visualization options independently from your data, and applies the options only when rendering the data to a file on disk or when displaying it in an IPython notebook cell.
This tutorial gives an overview of the different types of options available, how to find out more about them, and how to set them in both regular Python and using the IPython magic interface that is shown elsewhere in the tutorials.
Example objects
First, we'll create some HoloViews data objects ready to visualize
Step1: Rendering and saving objects from Python <a id='python-saving'></a>
To illustrate how to do plotting independently of IPython, we'll generate and save a plot directly to disk. First, let's create a renderer object that will render our files to SVG (for static figures) or GIF (for animations)
Step2: We could instead have used the default Store.renderer, but that would have been PNG format. Using this renderer, we can save any HoloViews object as SVG or GIF
Step3: That's it! The renderer builds the figure in matplotlib, renders it to SVG, and saves that to "example_I.svg" on disk. Everything up to this point would have worked the same in IPython or in regular Python, even with no display available. But since we're in IPython Notebook at the moment, we can check whether the exporting worked
Step4: You can use this workflow for generating HoloViews visualizations directly from Python, perhaps as a part of a set of scripts that you run automatically, e.g. to put your results up on a web server as soon as data is generated. But so far, this plot just uses all the default options, with no customization. How can we change how the plot will appear when we render it?
HoloViews visualization options
HoloViews provides three categories of visualization options that can be set by the user. In this section we will first describe the different kinds of options, then later sections show you how to list the supported options of each type for a given HoloViews object or class, and how to change them in Python or IPython.
style options
Step5: This information can be useful, but we have explicitly suppressed information regarding the visualization parameters -- these all report metadata about your data, not about anything to do with plotting directly. That's because the normal HoloViews components have nothing to do with plotting; they are just simple containers for your data and a small amount of metadata.
Instead, the plotting implementation and its associated parameters are kept in completely separate Python classes and objects. To find out about visualizing a HoloViews component like an Image, you can simply use the help command holoviews.help(object-or-class) that looks up the code that plots that particular type of component, and then reports the style and plot options available for it.
For our image example, holoviews.help first finds that image is of type Image, then looks in its database to find that Image visualization is handled by the RasterPlot class (which users otherwise rarely need to access directly). holoviews.help then shows information about what objects are available to customize (either the object itself, or the items inside a container), followed by a brief list of style options supported by a RasterPlot, and a very long list of plot options (which are all the parameters of a RasterPlot)
Step6: Supported style options
As you can see, HoloViews lists the currently allowed style options, but provides no further documentation because these settings are implemented by matplotlib and described at the matplotlib site. Note that matplotlib actually accepts a huge range of additional options, but they are not listed as being allowed because those options are not normally meaningful for this plot type. But if you know of a specific matplotlib option not on the list and really want to use it, you can add it manually to the list of supported options using Store.add_style_opts(holoviews-component-class, ['matplotlib-option ...']). For instance, if you want to use the filternorm parameter with this image object, you would run Store.add_style_opts(Image, ['filternorm']). This will add the new option to the corresponding plotting class RasterPlot
Step7: Changing plot options at the class level
Any parameter in HoloViews can be set on an object or on the class of the object, so any of the above plot options can be set like
Step8: Here .set_param() allows you to set multiple parameters conveniently, but it works the same as the single-parameter .colorbar example above it. Setting these values at the class level affects all previously created and to-be-created plotting objects of this type, unless specifically overridden via Store as described below.
Note that if you look at the source code for a particular plotting class, you will only see some of the parameters it supports. The rest, such as show_frame above, are defined in a superclass of the given object. The Reference Manual shows the complete list of parameters available for any given class (those labeled param in the manual), but it can be an overwhelming list since it includes all superclasses, all the metadata about each parameter, etc. The holoviews.help command with visualization=True provides a much more concise listing, and also shows the style options that are not listed in the Reference Manual.
Because setting these parameters at the class level does not provide much control over individual plots, HoloViews provides a much more flexible system using the OptionTree mechanisms described below, which can override these class defaults according to the HoloViews object type, group, and label.
The rest of the sections show how to change any of the above options, once you have found the right one using the suitable call to holoviews.help.
Controlling options from Python
Once you know the name of the option you want to change, and the value you want to change it to, there are a number of ways to customize your plot.
For the Python output to SVG example above, you can specify the options for a given type using keywords supplying a dictionary for any of the above option categories. You can see that the colormap changes when we supply that style option and render a new SVG
Step9: As before, the SVG call is simply to display it here in the notebook; the actual image is saved on disk and then loaded back in here for display.
You can see that the image now has a colorbar, because we set colorbar=True on the RasterPlot class, that it has become blue, because we set the matplotlib cmap style option in the renderer.save call, and that the y axis has been disabled, because we set the plot option yaxis to None (which is normally 'left' by default, as you can see in the default value for RasterPlot's parameter yaxis above). Hopefully you can see that once you know the option value you want to use, it can be provided easily.
You can also create a whole set of options separately, perhaps holding a large collection of preferred values, and apply it whenever you wish to save
Step10: Here you can see that the y axis has returned, because our previous setting to turn it off was just for the call to renderer.save. But we still have a colorbar, because that parameter was set at the class level, for all future plots of this type. Note that this form of option setting, while more verbose, accepts the full {type}[.{group}[.{label}]] syntax, like 'Image.Function.Sine' or 'Image.Function', while the shorter keyword approach above only supports the class, like 'Image'.
Note that for the options dictionary, the option nesting is inverted compared to the keyword approach
Step11: Here we could save the object to SVG just as before, but in this case we can skip a step and simply view it directly in the notebook
Step12: Both IPython notebook and renderer.save() use the same mechanisms for keeping track of the options, so they will give the same results. Specifically, what happens when you "bind" a set of options to an object is that there is an integer ID stored in the object (green_sine in this case), and a corresponding entry with that ID is stored in a database of options called an OptionTree (kept in holoviews.core.options.Store). The object itself is otherwise unchanged, but then if that object is later used in another container, etc. it will retain its ID and therefore its customization. Any customization stored in an OptionTree will override any class attribute defaults set like RasterGridPlot.border=5 above. This approach lets HoloViews keep track of any customizations you want to make, without ever affecting your actual data objects.
If the same object is later customized again to create a new customized object, the old customizations will be copied, and then the new customizations applied. The new customizations will thus override the old, while retaining any previous customizations not specified in the new step.
In this way, it is possible to build complex objects with arbitrary customization, step by step. As mentioned above, it is also possible to customize objects already combined into a complex container, just by specifying an option for a suitable key (e.g. 'Image.Function.Sine' above). This flexible system should allow for any level of customization that is needed.
Finally, there is one more way to apply options that is a mix of the above approaches -- temporarily assign a new ID to the object and apply a set of customizations during a specific portion of the code
Step13: Here the result is red, because it was rendered within the options context above, but were we to render the green_sine again it would still be green; the options are applied only within the scope of the with statement.
Controlling options in IPython using %%opts and %opts
The above sections describe how to set all of the options using regular Python. Similar functionality is provided in IPython, but with a more convenient syntax based on an IPython magic command
Step14: The %%opts magic works like the pure-Python option for associating options with an object, except that it works on the item in the IPython cell, and it affects the item directly rather than making a copy or applying only in scope. Specifically, it assigns a new ID number to the object returned from this cell, and makes a new OptionTree containing the options for that ID number.
If the same layout object is used later in the notebook, even within a complicated container object, it will retain the options set on it.
The options accepted are just the same as for the Python version, but specified more succinctly
Step15: There is also a special IPython syntax for listing the visualization options for a plotting object in a pop-up window that is equivalent to calling holoviews.help(object) | Python Code:
import numpy as np
import holoviews as hv
%reload_ext holoviews.ipython
x,y = np.mgrid[-50:51, -50:51] * 0.1
image = hv.Image(np.sin(x**2+y**2), group="Function", label="Sine")
coords = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
curve = hv.Curve(coords)
curves = {phase: hv.Curve([(0.1*i, np.sin(phase+0.1*i)) for i in range(100)])
for phase in [0, np.pi/2, np.pi, np.pi*3/2]}
waves = hv.HoloMap(curves)
layout = image + curve
Explanation: HoloViews is designed to be both highly customizable, allowing you to control how your visualizations appear, but also to enforce a strong separation between your data (with any semantically associated metadata, like type and label information) and all options related purely to visualization. This separation allows HoloViews objects to be generated easily by external programs, without giving them a dependency on any plotting or windowing libraries. It also helps make it completely clear which parts of your code deal with the actual data, and which are just about displaying it nicely, which becomes very important for complex visualizations that become more complicated than your data itself.
To achieve this separation, HoloViews stores visualization options independently from your data, and applies the options only when rendering the data to a file on disk or when displaying it in an IPython notebook cell.
This tutorial gives an overview of the different types of options available, how to find out more about them, and how to set them in both regular Python and using the IPython magic interface that is shown elsewhere in the tutorials.
Example objects
First, we'll create some HoloViews data objects ready to visualize:
End of explanation
renderer = hv.Store.renderers['matplotlib'].instance(fig='svg', holomap='gif')
Explanation: Rendering and saving objects from Python <a id='python-saving'></a>
To illustrate how to do plotting independently of IPython, we'll generate and save a plot directly to disk. First, let's create a renderer object that will render our files to SVG (for static figures) or GIF (for animations):
End of explanation
renderer.save(layout, 'example_I')
Explanation: We could instead have used the default Store.renderer, but that would have been PNG format. Using this renderer, we can save any HoloViews object as SVG or GIF:
End of explanation
from IPython.display import SVG
SVG(filename='example_I.svg')
Explanation: That's it! The renderer builds the figure in matplotlib, renders it to SVG, and saves that to "example_I.svg" on disk. Everything up to this point would have worked the same in IPython or in regular Python, even with no display available. But since we're in IPython Notebook at the moment, we can check whether the exporting worked:
End of explanation
hv.help(image, visualization=False)
Explanation: You can use this workflow for generating HoloViews visualizations directly from Python, perhaps as a part of a set of scripts that you run automatically, e.g. to put your results up on a web server as soon as data is generated. But so far, this plot just uses all the default options, with no customization. How can we change how the plot will appear when we render it?
HoloViews visualization options
HoloViews provides three categories of visualization options that can be set by the user. In this section we will first describe the different kinds of options, then later sections show you how to list the supported options of each type for a given HoloViews object or class, and how to change them in Python or IPython.
style options:
style options are passed directly to the underlying rendering backend that actually draws the plots, allowing you to control the details of how it behaves. The default backend is matplotlib, and the only other backend currently available is mpld3, both of which use matplotlib options. HoloViews can tell you which of these options are supported, but you will need to see the matplotlib documentation for the details of their use.
HoloViews has been designed to be easily extensible to additional backends in the future, such as Cairo, VTK, Bokeh, or D3.js, and if one of those backends were selected then the supported style options would differ.
plot options:
Each of the various HoloViews plotting classes declares various Parameters that control how HoloViews builds the visualization for that type of object, such as plot sizes and labels. HoloViews uses these options internally; they are not simply passed to the matplotlib backend. HoloViews documents these options fully in its online help and in the Reference Manual. These options may vary for different backends in some cases, but we try to keep any options that are meaningful for a variety of backends the same for all of them.
norm options:
norm options are a special type of plot option that are applied orthogonally to the above two types, to control normalization. Normalization refers to adjusting the properties of one plot relative to those of another. For instance, two images normalized together would appear with relative brightness levels, with the brightest image using the full range black to white, while the other image is scaled proportionally. Two images normalized independently would both cover the full range from black to white. Similarly, two axis ranges normalized together will expand to fit the largest range of either axis, while those normalized separately would cover different ranges.
There are currently only two norm options supported, axiswise and framewise, but they can be applied to any of the various object types in HoloViews to specify a huge range of different normalization options.
For a given category or group of HoloViews objects, if axiswise is True, normalization will be computed independently for all items in that category that have their own axes, such as different Image plots or Curve plots. If axiswise is False, all such objects are normalized together.
For a given category or group of HoloViews objects, if framewise is True, normalization of any HoloMap objects included is done independently per frame rendered -- each frame will appear as it would if it were extracted from the HoloMap and plotted separately. If framewise is False (the default), all frames in a given HoloMap are normalized together, so that you can see strength differences over the course of the animation.
As described below, these options can be controlled precisely and in any combination to make sure that HoloViews displays the data of most interest, ignoring irrelevant differences and highlighting important ones.
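For example, an illustrative combination using the %%opts magic that is introduced at the end of this tutorial (the choice of targets and flags here is arbitrary): normalize the Image on its own axes, and let each frame of the HoloMap be normalized independently.
%%opts Image norm{+axiswise} Curve norm{+framewise}
waves + image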
Finding out which options are available for an object
For the norm options, no further online documentation is provided, because all of the various visualization classes support only the two options described above. But there are a variety of ways to get the list of supported style options and detailed documentation for the plot options for a given component.
First, for any Python class or object in HoloViews, you can use holoviews.help(object-or-class, visualization=False) to find out about its parameters. For instance, these parameters are available for our Image object, shown with their current value (or default value, for a class), data type, whether it can be changed by the user (if it is constant, read-only, etc.), and bounds if any:
End of explanation
hv.help(image)
Explanation: This information can be useful, but we have explicitly suppressed information regarding the visualization parameters -- these all report metadata about your data, not about anything to do with plotting directly. That's because the normal HoloViews components have nothing to do with plotting; they are just simple containers for your data and a small amount of metadata.
Instead, the plotting implementation and its associated parameters are kept in completely separate Python classes and objects. To find out about visualizing a HoloViews component like an Image, you can simply use the help command holoviews.help(object-or-class) that looks up the code that plots that particular type of component, and then reports the style and plot options available for it.
For our image example, holoviews.help first finds that image is of type Image, then looks in its database to find that Image visualization is handled by the RasterPlot class (which users otherwise rarely need to access directly). holoviews.help then shows information about what objects are available to customize (either the object itself, or the items inside a container), followed by a brief list of style options supported by a RasterPlot, and a very long list of plot options (which are all the parameters of a RasterPlot):
End of explanation
hv.Store.add_style_opts(hv.Image, ['filternorm'])
# To check that it worked:
RasterPlot = renderer.plotting_class(hv.Image)
print(RasterPlot.style_opts)
Explanation: Supported style options
As you can see, HoloViews lists the currently allowed style options, but provides no further documentation because these settings are implemented by matplotlib and described at the matplotlib site. Note that matplotlib actually accepts a huge range of additional options, but they are not listed as being allowed because those options are not normally meaningful for this plot type. But if you know of a specific matplotlib option not on the list and really want to use it, you can add it manually to the list of supported options using Store.add_style_opts(holoviews-component-class, ['matplotlib-option ...']). For instance, if you want to use the filternorm parameter with this image object, you would run Store.add_style_opts(Image, ['filternorm']). This will add the new option to the corresponding plotting class RasterPlot:
End of explanation
RasterPlot.colorbar=True
RasterPlot.set_param(show_title=False,show_frame=True)
Explanation: Changing plot options at the class level
Any parameter in HoloViews can be set on an object or on the class of the object, so any of the above plot options can be set like:
End of explanation
renderer.save(layout, 'example_II', style=dict(Image={'cmap':'Blues'}),
plot= dict(Image={'yaxis':None}))
SVG(filename='example_II.svg')
Explanation: Here .set_param() allows you to set multiple parameters conveniently, but it works the same as the single-parameter .colorbar example above it. Setting these values at the class level affects all previously created and to-be-created plotting objects of this type, unless specifically overridden via Store as described below.
Note that if you look at the source code for a particular plotting class, you will only see some of the parameters it supports. The rest, such as show_frame above, are defined in a superclass of the given object. The Reference Manual shows the complete list of parameters available for any given class (those labeled param in the manual), but it can be an overwhelming list since it includes all superclasses, all the metadata about each parameter, etc. The holoviews.help command with visualization=True provides a much more concise listing, and also shows the style options that are not listed in the Reference Manual.
Because setting these parameters at the class level does not provide much control over individual plots, HoloViews provides a much more flexible system using the OptionTree mechanisms described below, which can override these class defaults according to the HoloViews object type, group, and label.
The rest of the sections show how to change any of the above options, once you have found the right one using the suitable call to holoviews.help.
Controlling options from Python
Once you know the name of the option you want to change, and the value you want to change it to, there are a number of ways to customize your plot.
For the Python output to SVG example above, you can specify the options for a given type using keywords supplying a dictionary for any of the above option categories. You can see that the colormap changes when we supply that style option and render a new SVG:
End of explanation
options={'Image.Function.Sine': {'plot':dict(fig_size=50), 'style':dict(cmap='jet')}}
renderer.save(layout, 'example_III',options=options)
SVG(filename='example_III.svg')
Explanation: As before, the SVG call is simply to display it here in the notebook; the actual image is saved on disk and then loaded back in here for display.
You can see that the image now has a colorbar, because we set colorbar=True on the RasterPlot class, that it has become blue, because we set the matplotlib cmap style option in the renderer.save call, and that the y axis has been disabled, because we set the plot option yaxis to None (which is normally 'left' by default, as you can see in the default value for RasterPlot's parameter yaxis above). Hopefully you can see that once you know the option value you want to use, it can be provided easily.
You can also create a whole set of options separately, perhaps holding a large collection of preferred values, and apply it whenever you wish to save:
End of explanation
green_sine = image(style={'cmap':'Greens'})
Explanation: Here you can see that the y axis has returned, because our previous setting to turn it off was just for the call to renderer.save. But we still have a colorbar, because that parameter was set at the class level, for all future plots of this type. Note that this form of option setting, while more verbose, accepts the full {type}[.{group}[.{label}]] syntax, like 'Image.Function.Sine' or 'Image.Function', while the shorter keyword approach above only supports the class, like 'Image'.
Note that for the options dictionary, the option nesting is inverted compared to the keyword approach: the outermost dictionary is by key (Image, or Image.Function.Sine), with the option categories underneath. You can see that with this mechanism, we can specify the options even for subobjects of a container, as long as we can specify them with an appropriate key.
There's also another way to customize options in Python that lets you build up customizations incrementally. To do this, you can associate a particular set of options persistently with a particular HoloViews object, even if that object is later combined with other objects into a container. Here a new copy of the object is created, with the given set of options (using either the keyword or options= format above) bound to it:
End of explanation
green_sine
Explanation: Here we could save the object to SVG just as before, but in this case we can skip a step and simply view it directly in the notebook:
End of explanation
with hv.StoreOptions.options(green_sine, options={'Image':{'style':{'cmap':'Reds'}}}):
data, info = renderer(green_sine)
print(info)
SVG(data)
Explanation: Both IPython notebook and renderer.save() use the same mechanisms for keeping track of the options, so they will give the same results. Specifically, what happens when you "bind" a set of options to an object is that there is an integer ID stored in the object (green_sine in this case), and a corresponding entry with that ID is stored in a database of options called an OptionTree (kept in holoviews.core.options.Store). The object itself is otherwise unchanged, but then if that object is later used in another container, etc. it will retain its ID and therefore its customization. Any customization stored in an OptionTree will override any class attribute defaults set like RasterGridPlot.border=5 above. This approach lets HoloViews keep track of any customizations you want to make, without ever affecting your actual data objects.
If the same object is later customized again to create a new customized object, the old customizations will be copied, and then the new customizations applied. The new customizations will thus override the old, while retaining any previous customizations not specified in the new step.
In this way, it is possible to build complex objects with arbitrary customization, step by step. As mentioned above, it is also possible to customize objects already combined into a complex container, just by specifying an option for a suitable key (e.g. 'Image.Function.Sine' above). This flexible system should allow for any level of customization that is needed.
Finally, there is one more way to apply options that is a mix of the above approaches -- temporarily assign a new ID to the object and apply a set of customizations during a specific portion of the code:
End of explanation
%%opts Curve style(linewidth=8) Image style(interpolation='bilinear') plot[yaxis=None] norm{+framewise}
layout
Explanation: Here the result is red, because it was rendered within the options context above, but were we to render the green_sine again it would still be green; the options are applied only within the scope of the with statement.
Controlling options in IPython using %%opts and %opts
The above sections describe how to set all of the options using regular Python. Similar functionality is provided in IPython, but with a more convenient syntax based on an IPython magic command:
End of explanation
from holoviews.ipython.parser import OptsSpec
renderer.save(image + waves, 'example_V',
options=OptsSpec.parse("Image (cmap='gray')"))
Explanation: The %%opts magic works like the pure-Python option for associating options with an object, except that it works on the item in the IPython cell, and it affects the item directly rather than making a copy or applying only in scope. Specifically, it assigns a new ID number to the object returned from this cell, and makes a new OptionTree containing the options for that ID number.
If the same layout object is used later in the notebook, even within a complicated container object, it will retain the options set on it.
The options accepted are just the same as for the Python version, but specified more succinctly:
%%opts target-specification style(styleoption=val ...) plot[plotoption=val ...] norm{+normoption -normoption...}
Here key lets you specify the object type (e.g. Image), and optionally its group (e.g. Image.Function) or even both group and label (e.g. Image.Function.Sine), if you want to control options very precisely. There is also an even further abbreviated syntax, because the special bracket types alone are enough to indicate which category of option is specified:
%%opts target-specification (styleoption=val ...) [plotoption=val ...] {+normoption -normoption ...}
Here parentheses indicate style options, square brackets indicate plot options, and curly brackets indicate norm options (with +axiswise and +framewise indicating True for those values, and -axiswise and -framewise indicating False). Additional target-specifications and associated options of each type for that target-specification can be supplied at the end of this line. This ultra-concise syntax is used throughout the other tutorials, because it helps minimize the code needed to specify the plotting options, and helps make it very clear that these options are handled separately from the actual data.
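A small illustrative example of this abbreviated form (the particular option values are arbitrary, not a recommendation):
%%opts Image (cmap='gray') [xaxis=None] {+axiswise} Curve (color='k') [show_grid=True]
layout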
The %opts "line" magic (with one %) works just the same as the %%opts "cell" magic, but it changes the global default options for all future cells, allowing you to choose a new default colormap, line width, etc.
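For instance (an arbitrary illustrative default applied to all subsequent cells):
%opts Image (cmap='gray') Curve (linewidth=2)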
Apart from its brevity, a big benefit of using the IPython magic syntax %%opts or %opts is that it is fully tab-completable. Each of the options that is currently available will be listed if you press <TAB> when you are ready to write it, which makes it much easier to find the right parameter. Of course, you will still need to consult the full holoviews.help documentation (described above) to see the type, allowable values, and documentation for each option, but the tab completion should at least get you started and is great for helping you remember the list of options and see which options are available.
You can even use the succinct IPython-style specification directly in your Python code if you wish, but it requires the external pyparsing library (which is already available if you are using matplotlib):
End of explanation
%%output info=True
curve
Explanation: There is also a special IPython syntax for listing the visualization options for a plotting object in a pop-up window that is equivalent to calling holoviews.help(object):
End of explanation |
2,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous
Step1: Import section specific modules
Step3: 1.9 A brief introduction to interferometry and its history
1.9.1 The double-slit experiment
The basics of interferometry date back to Thomas Young's double-slit experiment ⤴ of 1801. In this experiment, a plate pierced by two parallel slits is illuminated by a monochromatic source of light. Due to the wave-like nature of light, the waves passing through the two slits interfere, resulting in an interference pattern, or fringe, projected onto a screen behind the slits
Step4: This function draws a double-slit setup, with a light source at position $p$ (in fact the function can render multiple sources, but we'll only use it for one source for the moment). The dotted blue line shows the optical axis ($p=0$). The sine wave (schematically) shows the wavelength. (Note that the units here are arbitrary, since it is only geometry relative to wavelength that determines the results). The black lines show the path of the light waves through the slits and onto the screen at the right. The strip on the right schematically renders the resulting interference pattern, and the red curve shows a cross-section through the pattern.
Inside the function, we simply compute the pathlength difference along the two paths, convert it to phase delay, and render the corresponding interference pattern.
<div class=warn>
<b>Warning
Step5: 1.9.4 From the double-slit box to an interferometer
The original double-slit experiment was conceived as a demonstration of the wave-like nature of light. The role of the light source in the experiment was simply to illuminate the slits. Let us now turn it around and ask ourselves, given a working dual-slit setup, could we use it to obtain some information about the light source? Could we use the double-slit experiment as a measurement device, i.e. an interferometer?
1.9.4.1 Measuring source position
Obviously, we could measure source intensity -- but that's not very interesting, since we can measure that by looking at the source directly. Less obviously, we could measure the source position. Observe what happens when we move the source around, and repeat this experiment for longer and shorter baselines
Step6: Note that long baselines are very sensitive to change in source position, while short baselines are less sensitive. As we'll learn in Chapter 4, the spatial resolution (i.e. the distance at which we can distinguish sources) of an interferometer is given by $\lambda/B$, while the spatial resolution of a conventional telescope is given by $\lambda/D$, where $D$ is the dish (or mirror) aperture. This is a fortunate fact, as in practice it is much cheaper to build long baselines than large apertures!
On the other hand, due to the periodic nature of the interference pattern, the position measurement of a long baseline is ambiguous. Consider that two sources at completely different positions produce the same interference pattern
Step7: On the other hand, using a shorter baseline resolves the ambiguity
Step8: Modern interferometers exploit this by using an array of elements, which provides a whole range of possible baselines.
1.9.4.2 Measuring source size
Perhaps less obviously, we can use an interferometer to measure source size. Until now we have been simulating only point-like sources. First, consider what happens when we add a second source to the experiment (fortunately, we wrote the function above to accommodate such a scenario). The interference pattern from two (independent) sources is the sum of the individual interference patterns. This seems obvious, but will be shown more formally later on. Here we add a second source, with a slider to control its position and intensity. Try to move the second source around, and observe how the superimposed interference pattern can become attenuated or even cancel out.
Step9: So we can already use our double-slit box to infer something about the structure of the light source. Note that with two sources of equal intensity, it is possible to have the interference pattern almost cancel out on any one baseline -- but never on all baselines at once
Step10: Now, let us simulate an extended source, by giving the simulator an array of closely spaced point-like sources. Try playing with the extent slider. What's happening here is that the many interference patterns generated by each little part of the extended source tend to "wash out" each other, resulting in a net loss of amplitude in the pattern. Note also how each particular baseline length is sensitive to a particular range of source sizes.
Step11: We can therefore measure source size by measuring the reduction in the amplitude of the interference pattern
Step12: In fact historically, this was the first application of interferometry in astronomy. In a famous experiment in 1920, a Michelson interferometer installed at Mount Wilson Observatory was used to measure the diameter of the red giant star Betelgeuse.
<div class=advice>
The historical origins of the term <em><b>visibility</b></em>, which you will become intimately familiar with in the course of these lectures, actually lie in the experiment described above. Originally, "visibility" was defined as just that, i.e. a measure of the contrast between the light and dark stripes of the interference pattern.
</div>
<div class=advice>
Modern interferometers deal in terms of <em><b>complex visibilities</b></em>, i.e. complex quantities. The amplitude of a complex visibility, or <em>visibility amplitude</em>, corresponds to the intensity of the interference pattern, while the <em>visibility phase</em> corresponds to its relative phase (in our simulator, this is the phase of the fringe at the centre of the screen). This one complex number is all the information we have about the light source. Note that while our double-slit experiment shows an entire pattern, the variation in that pattern across the screen is entirely due to the geometry of the "box" (generically, this is the instrument used to make the measurement) -- the informational content, as far as the light source is concerned, is just the amplitude and the phase!
</div>
<div class=advice>
In the single-source simulations above, you can clearly see that amplitude encodes source shape (and intensity), while phase encodes source position. <b>Visibility phase measures position, amplitude measures shape and intensity.</b> This is a recurring theme in radio interferometry, one that we'll revisit again and again in subsequent lectures.
</div>
Note that a size measurement is a lot simpler than a position measurement. The phase of the fringe pattern gives us a very precise measurement of the position of the source relative to the optical axis of the instrument. To get an absolute position, however, we would need to know where the optical axis is pointing in the first place -- for practical reasons, the precision of this is a lot less. The amplitude of the fringe pattern, on the other hand, is not very sensitive to errors in the instrument pointing. It is for this reason that the first astronomical applications of interferometry dealt with size measurements.
1.9.4.3 Measuring instrument geometry
Until now, we've only been concerned with measuring source properties. Obviously, the interference pattern is also quite sensitive to instrument geometry. We can easily see this in our toy simulator, by playing with the position of the slits and the screen
Step13: This simple fact has led to many other applications for interferometers, from geodetic VLBI (where continental drift is measured by measuring extremely accurate antenna positions via radio interferometry of known radio sources), to the recent gravitational wave detection by LIGO (where the light source is a laser, and the interference pattern is used to measure miniscule distortions in space-time -- and thus the geometry of the interferometer -- caused by gravitational waves).
1.9.5 Practical interferometers
If you were given the job of constructing an interferometer for astronomical measurements, you would quickly find that the double-slit experiment does not translate into a very practical design. The baseline needs to be quite large; a box with slits and a screen is physically unwieldy. A more viable design can be obtained by playing with the optical path.
The basic design still used in optical interferometry to this day is the Michelson stellar interferometer mentioned above. This is schematically laid out as follows
Step14: However, as soon as we take a measurement on another baseline, the difference becomes apparent
Step16: With a larger number of baselines, we can gather enough information to reconstruct an image of the sky. This is because each baseline essentially measures one Fourier component of the sky brightness distribution (Chapter 4 will explain this in more detail); and once we know the Fourier components, we can compute a Fourier transform in order to recover the sky image. The advent of sufficiently powerful computers in the late 1960s made this technique practical, and turned radio interferometers from exotic contraptions into generic imaging instruments. With a few notable exceptions, modern radio interferometry is aperture synthesis.
This concludes our introduction to radio interferometry; the rest of this course deals with aperture synthesis in detail. The remainder of this notebook consists of a few more interactive widgets that you can use to play with the toy dual-slit simulator.
Appendix
Step17: We have modified the setup as follows. First, the source is now infinitely distant, so we define the source position in terms of the angle of arrival of the incoming wavefront (with 0 meaning on-axis, i.e. along the vertical axis). We now define the baseline in terms of wavelengths. The phase difference of the wavefront arriving at the two arms of the interferometer is completely defined in terms of the angle of arrival. The two "rays" entering the outer arms of the interferometer indicate the angle of arrival.
The rest of the optical path consists of a series of mirrors to bring the two signals together. Note that the frequency of the fringe pattern is now completely determined by the internal geometry of the instrument (i.e. the distances between the inner set of mirrors and the screen); however the relative phase of the pattern is determined by source angle. Use the sliders below to get a feel for this.
Note that we've also modified the function to print the "visibility", as originally defined by Michelson.
Step18: And here's the same experiment for two sources
Step19: A.1 The Betelgeuse size measurement
For fun, let us use our toy to re-create the Betelgeuse size measurement of 1920 by A.A. Michelson and F.G. Pease. Their experiment was set up as follows. The interferometer they constructed had movable outside mirrors, giving it a baseline that could be adjusted from a maximum of 6m downwards. Red light has a wavelength of ~650n; this gave them a maximum baseline of 10 million wavelengths.
For the experiment, they started with a baseline of 1m (1.5 million wavelengths), and verified that they could see fringes from Betelguese with the naked eye. They then adjusted the baseline up in small increments, until at 3m the fringes disappeared. From this, they inferred the diameter of Betelgeuse to be about 0.05".
You can repeat the experiment using the sliders below. You will probably find your toy Betelegeuse to be somewhat larger than 0.05". This is because or simulator is too simplistic -- in particular, it assumes a monochromatic source of light, which makes the fringes a lot sharper. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.8 Astronomical radio sources
Next: 1.10 The Limits of Single Dish Astronomy
Import standard modules:
End of explanation
from IPython.display import display
from ipywidgets import interact
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
def double_slit (p0=[0],a0=[1],baseline=1,d1=5,d2=5,wavelength=.1,maxint=None):
Renders a toy dual-slit experiment.
'p0' is a list or array of source positions (drawn along the vertical axis)
'a0' is an array of source intensities
'baseline' is the distance between the slits
'd1' and 'd2' are distances between source and plate and plate and screen
'wavelength' is wavelength
'maxint' is the maximum intensity scale use to render the fringe pattern. If None, the pattern
is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
of double_slit() into the same intensity scale, i.e. for comparison.
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([]) and plt.yticks([])
plt.axhline(0, ls=':')
baseline /= 2.
## draw representation of slits
plt.arrow(0, 1,0, baseline-1, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0,-1,0, 1-baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, -baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
## draw representation of lightpath from slits to centre of screen
plt.arrow(0, baseline,d2,-baseline, length_includes_head=True)
plt.arrow(0,-baseline,d2, baseline, length_includes_head=True)
## draw representation of sinewave from the central position
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/2
plt.plot(xw,yw,'b')
    ## 'xs' is a vector of x coordinates on the screen
## and we accumulate the interference pattern for each source into 'pattern'
xs = np.arange(-1, 1, .01)
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for p,a in np.broadcast(p0,a0):
plt.plot(-d1, p, marker='o', ms=10, mfc='red', mew=0)
total_intensity += a
if p == p0[0] or p == p0[-1]:
plt.arrow(-d1, p, d1, baseline-p, length_includes_head=True)
plt.arrow(-d1, p, d1,-baseline-p, length_includes_head=True)
        # compute the two pathlengths
path1 = np.sqrt(d1**2 + (p-baseline)**2) + np.sqrt(d2**2 + (xs-baseline)**2)
path2 = np.sqrt(d1**2 + (p+baseline)**2) + np.sqrt(d2**2 + (xs+baseline)**2)
diff = path1 - path2
        # accumulate interference pattern from this source
pattern = pattern + a*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
# show pattern for one source at 0
double_slit(p0=[0])
Explanation: 1.9 A brief introduction to interferometry and its history
1.9.1 The double-slit experiment
The basics of interferometry date back to Thomas Young's double-slit experiment ⤴ of 1801. In this experiment, a plate pierced by two parallel slits is illuminated by a monochromatic source of light. Due to the wave-like nature of light, the waves passing through the two slits interfere, resulting in an interference pattern, or fringe, projected onto a screen behind the slits:
<img src="figures/514px-Doubleslit.svg.png" width="50%"/>
Figure 1.9.1: Schematic diagram of Young's double-slit experiment. Credit: Unknown.
The position on the screen $P$ determines the phase difference between the two arriving wavefronts. Waves arriving in phase interfere constructively and produce bright strips in the interference pattern. Waves arriving out of phase interfere destructively and result in dark strips in the pattern.
In this section we'll construct a toy model of a dual-slit experiment. Note that this model is not really physically accurate, it is literally just a "toy" to help us get some intuition for what's going on. A proper description of interfering electromagnetic waves will follow later.
Firstly, a monochromatic electromagnetic wave of wavelength $\lambda$ can be described by at each point in time and space as a complex quantity i.e. having an amplitude and a phase, $A\mathrm{e}^{\imath\phi}$. For simplicity, let us assume a constant amplitude $A$ but allow the phase to vary as a function of time and position.
Now if the same wave travels along two paths of different lengths and recombines at point $P$, the resulting electric field is a sum:
$E=E_1+E_2 = A\mathrm{e}^{\imath\phi}+A\mathrm{e}^{\imath(\phi-\phi_0)},$
where the phase delay $\phi_0$ corresponds to the pathlength difference $\tau_0$:
$\phi_0 = 2\pi\tau_0/\lambda.$
What is actually "measured" on the screen, the brightness, is, physically, a time-averaged electric field intensity $EE^*$, where the $^*$ represents complex conjugation (this is exactly what our eyes, or a photographic plate, or a detector in the camera perceive as "brightness"). We can work this out as
$
EE^* = (E_1+E_2)(E_1+E_2)^* = E_1 E_1^* + E_2 E_2^* + E_1 E_2^* + E_2 E_1^* = A^2 + A^2
+ A^2 \mathrm{e}^{\imath\phi_0}
+ A^2 \mathrm{e}^{-\imath\phi_0} =
2A^2 + 2A^2 \cos{\phi_0}.
$
Note how phase itself has dropped out, and the only thing that's left is the phase delay $\phi_0$. The first part of the sum is constant, while the second part, the interfering term, varies with phase difference $\phi_0$, which in turn depends on position on the screen $P$. It is easy to see that the resulting intensity $EE^*$ is a purely real quantity that varies from 0 to $4A^2$. This is exactly what produces the alternating bright and dark stripes on the screen.
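As a quick numerical check of this expression (a standalone sketch, not part of the original notebook code):
import numpy as np
A, wavelength = 1.0, 0.1
tau0 = np.linspace(0, 0.2, 5)            # a few pathlength differences
phi0 = 2*np.pi*tau0/wavelength           # corresponding phase delays
print(2*A**2 + 2*A**2*np.cos(phi0))      # intensities oscillate between 0 and 4*A^2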
1.9.2 A toy double-slit simulator
Let us write a short Python function to (very simplistically) simulate a double-slit experiment. Note, understanding the code presented is not a requirement to understand the experiment. Those not interested in the code implementation should feel free to look only at the results.
End of explanation
interact(lambda baseline,wavelength:double_slit(p0=[0],baseline=baseline,wavelength=wavelength),
baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
Explanation: This function draws a double-slit setup, with a light source at position $p$ (in fact the function can render multiple sources, but we'll only use it for one source for the moment). The dotted blue line shows the optical axis ($p=0$). The sine wave (schematically) shows the wavelength. (Note that the units here are arbitrary, since it is only geometry relative to wavelength that determines the results). The black lines show the path of the light waves through the slits and onto the screen at the right. The strip on the right schematically renders the resulting interference pattern, and the red curve shows a cross-section through the pattern.
Inside the function, we simply compute the pathlength difference along the two paths, convert it to phase delay, and render the corresponding interference pattern.
<div class=warn>
<b>Warning:</b> Once again, let us stress that this is just a "toy" rendering of an interferometer. It serves to demonstrate the basic principles, but it is not physically accurate. In particular, it does not properly model diffraction or propagation. Also, since astronomical sources are effectively infinitely distant (compared to the size of the interferometer), the incoming light rays should be parallel (or equivalently, the incoming wavefront should be planar, as in the first illustration in this chapter).
</div>
1.9.3 Playing with the baseline
First of all, note how the properties of the interference pattern vary with baseline $B$ (the distance between the slits) and wavelength $\lambda$. Use the sliders below to adjust both. Note how increasing the baseline increases the frequency of the fringe, as does reducing the wavelength.
End of explanation
interact(lambda position,baseline,wavelength:double_slit(p0=[position],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
Explanation: 1.9.4 From the double-slit box to an interferometer
The original double-slit experiment was conceived as a demonstration of the wave-like nature of light. The role of the light source in the experiment was simply to illuminate the slits. Let us now turn it around and ask ourselves, given a working dual-slit setup, could we use it to obtain some information about the light source? Could we use the double-slit experiment as a measurement device, i.e. an interferometer?
1.9.4.1 Measuring source position
Obviously, we could measure source intensity -- but that's not very interesting, since we can measure that by looking at the source directly. Less obviously, we could measure the source position. Observe what happens when we move the source around, and repeat this experiment for longer and shorter baselines:
End of explanation
double_slit([0],baseline=1.5,wavelength=0.1)
double_slit([0.69],baseline=1.5,wavelength=0.1)
Explanation: Note that long baselines are very sensitive to change in source position, while short baselines are less sensitive. As we'll learn in Chapter 4, the spatial resolution (i.e. the distance at which we can distinguish sources) of an interferometer is given by $\lambda/B$, while the spatial resolution of a conventional telescope is given by $\lambda/D$, where $D$ is the dish (or mirror) aperture. This is a fortunate fact, as in practice it is much cheaper to build long baselines than large apertures!
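To put some rough numbers on this (purely illustrative values, not taken from the text):
import numpy as np
rad2arcsec = 180/np.pi*3600
wavelength = 0.21                        # an assumed observing wavelength of 21 cm
print(wavelength/1000. * rad2arcsec)     # ~43 arcsec resolution for a 1 km baseline (lambda/B)
print(wavelength/25. * rad2arcsec)       # ~1700 arcsec for a 25 m dish (lambda/D)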
On the other hand, due to the periodic nature of the interference pattern, the position measurement of a long baseline is ambiguous. Consider that two sources at completely different positions produce the same interference pattern:
End of explanation
double_slit([0],baseline=0.5,wavelength=0.1)
double_slit([0.69],baseline=0.5,wavelength=0.1)
Explanation: On the other hand, using a shorter baseline resolves the ambiguity:
End of explanation
interact(lambda position,intensity,baseline,wavelength:
double_slit(p0=[0,position],a0=[1,intensity],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),intensity=(.2,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
Explanation: Modern interferometers exploit this by using an array of elements, which provides a whole range of possible baselines.
1.9.4.2 Measuring source size
Perhaps less obviously, we can use an interferometer to measure source size. Until now we have been simulating only point-like sources. First, consider what happens when we add a second source to the experiment (fortunately, we wrote the function above to accommodate such a scenario). The interference pattern from two (independent) sources is the sum of the individual interference patterns. This seems obvious, but will be shown more formally later on. Here we add a second source, with a slider to control its position and intensity. Try to move the second source around, and observe how the superimposed interference pattern can become attenuated or even cancel out.
End of explanation
double_slit(p0=[0,0.25],baseline=1,wavelength=0.1)
double_slit(p0=[0,0.25],baseline=1.5,wavelength=0.1)
Explanation: So we can already use our double-slit box to infer something about the structure of the light source. Note that with two sources of equal intensity, it is possible to have the interference pattern almost cancel out on any one baseline -- but never on all baselines at once:
End of explanation
interact(lambda extent,baseline,wavelength:
double_slit(p0=np.arange(-extent,extent+.01,.01),baseline=baseline,wavelength=wavelength),
extent=(0,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
Explanation: Now, let us simulate an extended source, by giving the simulator an array of closely spaced point-like sources. Try playing with the extent slider. What's happening here is that the many interference patterns generated by each little part of the extended source tend to "wash out" each other, resulting in a net loss of amplitude in the pattern. Note also how each particular baseline length is sensitive to a particular range of source sizes.
End of explanation
double_slit(p0=[0],baseline=1,wavelength=0.1)
double_slit(p0=np.arange(-0.2,.21,.01),baseline=1,wavelength=0.1)
Explanation: We can therefore measure source size by measuring the reduction in the amplitude of the interference pattern:
End of explanation
interact(lambda d1,d2,position,extent: double_slit(p0=np.arange(position-extent,position+extent+.01,.01),d1=d1,d2=d2),
d1=(1,5,.1),d2=(1,5,.1),
position=(-1,1,.01),extent=(0,1,.01)) and None
Explanation: In fact historically, this was the first application of interferometry in astronomy. In a famous experiment in 1920, a Michelson interferometer installed at Mount Wilson Observatory was used to measure the diameter of the red giant star Betelgeuse.
<div class=advice>
The historical origins of the term <em><b>visibility</b></em>, which you will become intimately familiar with in the course of these lectures, actually lie in the experiment described above. Originally, "visibility" was defined as just that, i.e. a measure of the contrast between the light and dark stripes of the interference pattern.
</div>
<div class=advice>
Modern interferometers deal in terms of <em><b>complex visibilities</b></em>, i.e. complex quantities. The amplitude of a complex visibility, or <em>visibility amplitude</em>, corresponds to the intensity of the interference pattern, while the <em>visibility phase</em> corresponds to its relative phase (in our simulator, this is the phase of the fringe at the centre of the screen). This one complex number is all the information we have about the light source. Note that while our double-slit experiment shows an entire pattern, the variation in that pattern across the screen is entirely due to the geometry of the "box" (generically, this is the instrument used to make the measurement) -- the informational content, as far as the light source is concerned, is just the amplitude and the phase!
</div>
<div class=advice>
In the single-source simulations above, you can clearly see that amplitude encodes source shape (and intensity), while phase encodes source position. <b>Visibility phase measures position, amplitude measures shape and intensity.</b> This is a recurring theme in radio interferometry, one that we'll revisit again and again in subsequent lectures.
</div>
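A complex visibility is conveniently represented as a single Python complex number; a minimal sketch with made-up values:
import numpy as np
V = 0.8*np.exp(1j*np.deg2rad(30.))            # toy visibility: amplitude 0.8, phase 30 degrees
print(np.abs(V), np.rad2deg(np.angle(V)))     # amplitude (shape/intensity) and phase (position)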
Note that a size measurement is a lot simpler than a position measurement. The phase of the fringe pattern gives us a very precise measurement of the position of the source relative to the optical axis of the instrument. To get an absolute position, however, we would need to know where the optical axis is pointing in the first place -- for practical reasons, the precision of this is a lot less. The amplitude of the fringe pattern, on the other hand, is not very sensitive to errors in the instrument pointing. It is for this reason that the first astronomical applications of interferometry dealt with size measurements.
1.9.4.3 Measuring instrument geometry
Until now, we've only been concerned with measuring source properties. Obviously, the interference pattern is also quite sensitive to instrument geometry. We can easily see this in our toy simulator, by playing with the position of the slits and the screen:
End of explanation
double_slit(p0=[0], a0=[0.4], maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, maxint=2)
Explanation: This simple fact has led to many other applications for interferometers, from geodetic VLBI (where continental drift is measured by measuring extremely accurate antenna positions via radio interferometry of known radio sources), to the recent gravitational wave detection by LIGO (where the light source is a laser, and the interference pattern is used to measure miniscule distortions in space-time -- and thus the geometry of the interferometer -- caused by gravitational waves).
1.9.5 Practical interferometers
If you were given the job of constructing an interferometer for astronomical measurements, you would quickly find that the double-slit experiment does not translate into a very practical design. The baseline needs to be quite large; a box with slits and a screen is physically unwieldy. A more viable design can be obtained by playing with the optical path.
The basic design still used in optical interferometry to this day is the Michelson stellar interferometer mentioned above. This is schematically laid out as follows:
<IMG SRC="figures/471px-Michelson_stellar_interferometer.svg.png" width="50%"/>
Figure 1.9.2: Schematic of a Michelson interferometer. Credit: Unknown.
The outer set of mirrors plays the role of slits, and provides a baseline of length $d$, while the rest of the optical path serves to bring the two wavefronts together onto a common screen. The first such interferometer, used to carry out the Betelgeuse size measurement, looked like this:
<IMG SRC="figures/Hooker_interferometer.jpg" width="50%"/>
Figure 1.9.3: 100-inch Hooker Telescope at Mount Wilson Observatory in southern California, USA. Credit: Unknown.
In modern optical interferometers using the Michelson layout, the role of the "outer" mirrors is played by optical telescopes in their own right. For example, the Very Large Telescope operated by ESO can operate as an interferometer, combining four 8.2m and four 1.8m individual telescopes:
<IMG SRC="figures/Hard_Day's_Night_Ahead.jpg" width="100%"/>
Figure 1.9.4: The Very Large Telescope operated by ESO. Credit: European Southern Observatory.
In the radio regime, the physics allow for more straightforward designs. The first radio interferometric experiment was the sea-cliff interferometer developed in Australia during 1945-48. This used reflection off the surface of the sea to provide a "virtual" baseline, with a single antenna measuring the superimposed signal:
<IMG SRC="figures/sea_int_medium.jpg" width="50%"/>
Figure 1.9.5: Schematic of the sea-cliff single antenna interferometer developed in Australia post-World War 2. Credit: Unknown.
In a modern radio interferometer, the "slits" are replaced by radio dishes (or collections of antennas called aperture arrays) which sample and digitize the incoming wavefront. The part of the signal path between the "slits" and the "screen" is then completely replaced by electronics. The digitized signals are combined in a correlator, which computes the corresponding complex visibilities. We will study the details of this process in further lectures.
In contrast to the delicate optical path of an optical interferometer, digitized signals have the advantage of being endlessly and losslessly replicatable. This has allowed us to construct entire intererometric arrays. An example is the the Jansky Very Large Array (JVLA, New Mexico, US) consisting of 27 dishes:
<IMG SRC="figures/USA.NM.VeryLargeArray.02.jpg" width="50%"/>
Figure 1.9.6: Telescope elements of the Jansky Very Large Array (JVLA) in New Mexico, USA. Credit: Unknown.
The MeerKAT telescope coming online in the Karoo, South Africa, will consist of 64 dishes. This is an aerial photo showing the dish foundations being prepared:
<IMG SRC="figures/2014_core_02.jpg" width="50%"/>
Figure 1.9.7: Layout of the core of the MeerKAT array in the Northern Cape, South Africa. Credit: Unknown.
In an interferometer array, each pair of antennas forms a different baseline. With $N$ antennas, the correlator can then simultaneously measure the visibilities corresponding to $N(N-1)/2$ baselines, with each pairwise antenna combination yielding a unique baseline.
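For example, a quick check with the array sizes quoted in this section:
for n_antennas in (27, 64):                          # JVLA and MeerKAT dish counts
    print(n_antennas, n_antennas*(n_antennas-1)//2)  # 27 -> 351 baselines, 64 -> 2016 baselines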
1.9.5.1 Additive vs. multiplicative interferometers
The double-slit experiment, the Michelson interferometer, and the sea-cliff interferometer are all examples of additive interferometers, where the fringe pattern is formed up by adding the two interfering signals $E_1$ and $E_2$:
$$
EE^* = (E_1+E_2)(E_1+E_2)^* = E_1 E_1^* + E_2 E_2^* + E_1 E_2^* + E_2 E_1^*
$$
As we already discussed above, the first two terms in this sum are constant (corresponding to the total intensity of the two signals), while the cross-term $E_1 E_2^*$ and its complex conjugate is the interfering term that is responsible for fringe formation.
Modern radio interferometers are multiplicative. Rather than adding the signals, the antennas measure $E_1$ and $E_2$ and feed these measurements into a cross-correlator, which directly computes the $E_1 E_2^*$ term.
1.9.6 Aperture synthesis vs. targeted experiments
Interferometry was born as a way of conducting specific, targeted, and rather exotic experiments. The 1920 Betelgeuse size measurement is a typical example. In contrast to a classical optical telescope, which could directly obtain an image of the sky containing information on hundreds to thousands of objects, an interferometer was a very delicate apparatus for indirectly measuring a single physical quantity (the size of the star in this case). The spatial resolution of that single measurement far exceeded anything available to a conventional telescope, but in the end it was always a specific, one-off measurement. The first interferometers were not capable of directly imaging the sky at that improved resolution.
In radio interferometry, all this changed in the late 1960s with the development of the aperture synthesis technique by Sir Martin Ryle's group in Cambridge. The crux of this technique lies in combining the information from multiple baselines.
To understand this point, consider the following. As you saw from playing with the toy double-slit simulator above, for each baseline length, the interference pattern conveys a particular piece of information about the sky. For example, the following three "skies" yield exactly the same interference pattern on a particular baseline, so a single measurement would be unable to distinguish between them:
End of explanation
double_slit(p0=[0], a0=[0.4], baseline=0.5, maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], baseline=0.5, maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, baseline=0.5, maxint=2)
Explanation: However, as soon as we take a measurement on another baseline, the difference becomes apparent:
End of explanation
def michelson (p0=[0],a0=[1],baseline=50,maxbaseline=100,extent=0,d1=9,d2=1,d3=.2,wavelength=.1,fov=5,maxint=None):
Renders a toy Michelson interferometer with an infinitely distant (astronomical) source
'p0' is a list or array of source positions (as angles, in degrees).
'a0' is an array of source intensities
'extent' are source extents, in degrees
'baseline' is the baseline, in lambdas
'maxbaseline' is the max baseline to which the plot is scaled
'd1' is the plotted distance between the "sky" and the interferometer arms
'd2' is the plotted distance between arms and screen, in plot units
'd3' is the plotted distance between inner mirrors, in plot units
'fov' is the notionally rendered field of view radius (in degrees)
'wavelength' is wavelength, used for scale
'maxint' is the maximum intensity scale use to render the fringe pattern. If None, the pattern
is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
of michelson() into the same intensity scale, i.e. for comparison.
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([])
# label Y axis with degrees
yt,ytlab = plt.yticks()
plt.yticks(yt,["-%g"%(float(y)*fov) for y in yt])
plt.ylabel("Angle of Arrival (degrees)")
plt.axhline(0, ls=':')
## draw representation of arms and light path
maxbaseline = max(maxbaseline,baseline)
bl2 = baseline/float(maxbaseline) # coordinate of half a baseline, in plot units
plt.plot([0,0],[-bl2,bl2], 'o', ms=10)
plt.plot([0,d2/2.,d2/2.,d2],[-bl2,-bl2,-d3/2.,0],'-k')
plt.plot([0,d2/2.,d2/2.,d2],[ bl2, bl2, d3/2.,0],'-k')
plt.text(0,0,'$b=%d\lambda$'%baseline, ha='right', va='bottom', size='xx-large')
## draw representation of sinewave from the central position
if isinstance(p0,(int,float)):
p0 = [p0]
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/(2.*fov)
plt.plot(xw,yw,'b')
    ## 'xs' is a vector of x coordinates on the screen
xs = np.arange(-1, 1, .01)
## xsdiff is corresponding pathlength difference
xsdiff = (np.sqrt(d2**2 + (xs-d3)**2) - np.sqrt(d2**2 + (xs+d3)**2))
## and we accumulate the interference pattern for each source into 'pattern'
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for pos,ampl in np.broadcast(p0,a0):
total_intensity += ampl
pos1 = pos/float(fov)
if extent: # simulate extent by plotting 100 sources of 1/100th intensity
positions = np.arange(-1,1.01,.01)*extent/fov + pos1
else:
positions = [pos1]
# draw arrows indicating lightpath
plt.arrow(-d1, bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
plt.arrow(-d1,-bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
for p in positions:
# compute the pathlength difference between slits and position on screen
plt.plot(-d1, p, marker='o', ms=10*ampl, mfc='red', mew=0)
# add pathlength difference at slits
diff = xsdiff + (baseline*wavelength)*np.sin(p*fov*np.pi/180)
# accumulate interference pattern from this source
pattern = pattern + (float(ampl)/len(positions))*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
print "visibility (Imax-Imin)/(Imax+Imin): ",(pattern.max()-pattern.min())/(total_intensity*2)
# show pattern for one source at 0
michelson(p0=[0])
Explanation: With a larger number of baselines, we can gather enough information to reconstruct an image of the sky. This is because each baseline essentially measures one Fourier component of the sky brightness distribution (Chapter 4 will explain this in more detail); and once we know the Fourier components, we can compute a Fourier transform in order to recover the sky image. The advent of sufficiently powerful computers in the late 1960s made this technique practical, and turned radio interferometers from exotic contraptions into generic imaging instruments. With a few notable exceptions, modern radio interferometry is aperture synthesis.
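The idea can be sketched in one dimension with numpy (a toy illustration only -- a real array samples the Fourier plane incompletely and needs weighting and deconvolution):
import numpy as np
sky = np.zeros(64); sky[20] = 1.0; sky[35] = 0.5   # a toy 1-D sky with two sources
vis = np.fft.fft(sky)                              # each component ~ what one baseline would measure
print(np.allclose(np.fft.ifft(vis).real, sky))     # transforming back recovers the sky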
This concludes our introduction to radio interferometry; the rest of this course deals with aperture synthesis in detail. The remainder of this notebook consists of a few more interactive widgets that you can use to play with the toy dual-slit simulator.
Appendix: Recreating the Michelson interferometer
For completeness, let us modify the function above to make a more realistic interferometer. We'll implement two changes:
we'll put the light source infinitely far away, as an astronomical source should be
we'll change the light path to mimic the layout of a Michelson interferometer.
End of explanation
# single source
interact(lambda position, intensity, baseline:
michelson(p0=[position], a0=[intensity], baseline=baseline, maxint=2),
position=(-5,5,.01),intensity=(.2,1,.01),baseline=(10,100,.01)) and None
Explanation: We have modified the setup as follows. First, the source is now infinitely distant, so we define the source position in terms of the angle of arrival of the incoming wavefront (with 0 meaning on-axis, i.e. along the vertical axis). We now define the baseline in terms of wavelengths. The phase difference of the wavefront arriving at the two arms of the interferometer is completely defined in terms of the angle of arrival. The two "rays" entering the outer arms of the interferometer indicate the angle of arrival.
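In other words, the extra pathlength into the two outer arms is $b\sin\theta$ for a baseline of $b$ wavelengths -- essentially the term added inside the function above. A quick sketch of the corresponding phase difference:
import numpy as np
b_lambda, theta_deg = 50, 0.5                            # baseline in wavelengths, angle of arrival
print(2*np.pi*b_lambda*np.sin(np.radians(theta_deg)))    # geometric phase difference in radians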
The rest of the optical path consists of a series of mirrors to bring the two signals together. Note that the frequency of the fringe pattern is now completely determined by the internal geometry of the instrument (i.e. the distances between the inner set of mirrors and the screen); however the relative phase of the pattern is determined by source angle. Use the sliders below to get a feel for this.
Note that we've also modified the function to print the "visibility", as originally defined by Michelson.
End of explanation
interact(lambda position1,position2,intensity1,intensity2,baseline:
michelson(p0=[position1,position2], a0=[intensity1,intensity2], baseline=baseline, maxint=2),
position1=(-5,5,.01), position2=(-5,5,.01), intensity1=(.2,1,.01), intensity2=(.2,1,.01),
baseline=(10,100,.01)) and None
Explanation: And here's the same experiment for two sources:
End of explanation
arcsec = 1/3600.
interact(lambda extent_arcsec, baseline:
michelson(p0=[0], a0=[1], extent=extent_arcsec*arcsec, maxint=1,
baseline=baseline,fov=1*arcsec),
extent_arcsec=(0,0.1,0.001),
baseline=(1e+4,1e+7,1e+4)
) and None
Explanation: A.1 The Betelgeuse size measurement
For fun, let us use our toy to re-create the Betelgeuse size measurement of 1920 by A.A. Michelson and F.G. Pease. Their experiment was set up as follows. The interferometer they constructed had movable outside mirrors, giving it a baseline that could be adjusted from a maximum of 6m downwards. Red light has a wavelength of ~650 nm; this gave them a maximum baseline of 10 million wavelengths.
For the experiment, they started with a baseline of 1m (1.5 million wavelengths), and verified that they could see fringes from Betelgeuse with the naked eye. They then adjusted the baseline up in small increments, until at 3m the fringes disappeared. From this, they inferred the diameter of Betelgeuse to be about 0.05".
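As a rough cross-check of these numbers (a back-of-the-envelope sketch using the $\lambda/B$ rule of thumb from earlier):
import numpy as np
wavelength, baseline = 650e-9, 3.0               # ~650 nm red light, fringes gone at ~3 m
print(wavelength/baseline * 180/np.pi*3600)      # ~0.045 arcsec, close to the quoted 0.05"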
You can repeat the experiment using the sliders below. You will probably find your toy Betelgeuse to be somewhat larger than 0.05". This is because our simulator is too simplistic -- in particular, it assumes a monochromatic source of light, which makes the fringes a lot sharper.
End of explanation |
2,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Normalizing text
Step1: Normalizing columns
Step2: Answers in questions
Step3: Only 0.6% of the answers appear in the questions itself. Out of this 0.6%, a sample of the questions shows that they are all multiple choice questions, which concludes that it is very unlikely that the answer will be in the question itself.
Recycled questions
Step4: Low value vs high value questions | Python Code:
import string
def norm_words(words):
words = words.lower().translate(None, string.punctuation)
return words
jeopardy["clean_question"] = jeopardy["Question"].apply(norm_words)
jeopardy["clean_answer"] = jeopardy["Answer"].apply(norm_words)
jeopardy.head()
Explanation: Normalizing text
End of explanation
def norm_value(value):
try:
value = int(value.translate(None, string.punctuation))
except:
value = 0
return value
jeopardy["clean_value"] = jeopardy["Value"].apply(norm_value)
jeopardy["Air Date"] = pd.to_datetime(jeopardy["Air Date"])
print(jeopardy.dtypes)
jeopardy.head()
Explanation: Normalizing columns
End of explanation
def ans_in_q(row):
match_count = 0
split_answer = row["clean_answer"].split(" ")
split_question = row["clean_question"].split(" ")
try:
split_answer.remove("the")
except:
pass
if len(split_answer) == 0:
return 0
else:
for word in split_answer:
if word in split_question:
match_count += 1
        return float(match_count) / len(split_answer)  # float division, so the ratio is not truncated under Python 2
jeopardy["answer_in_question"] = jeopardy.apply(ans_in_q, axis=1)
print(jeopardy["answer_in_question"].mean())
jeopardy[jeopardy["answer_in_question"] > 0].head()
jeopardy[(jeopardy["answer_in_question"] > 0) & (jeopardy["clean_question"].apply(string.split).apply(len) > 6)].head()
Explanation: Answers in questions
End of explanation
jeopardy = jeopardy.sort_values(by="Air Date")
question_overlap = []
terms_used = set()
for index, row in jeopardy.iterrows():
    match_count = 0
    split_question = row["clean_question"].split(" ")
    # keep only words of at least 6 characters; a list comprehension avoids
    # the bug of removing items from a list while iterating over it
    split_question = [word for word in split_question if len(word) >= 6]
for word in split_question:
if word in terms_used:
match_count += 1
terms_used.add(word)
if len(split_question) > 0:
match_count /= float(len(split_question))
question_overlap.append(match_count)
jeopardy["question_overlap"] = question_overlap
print(jeopardy["question_overlap"].mean())
jeopardy.tail()
Explanation: Only 0.6% of the answers appear in the questions themselves. Out of this 0.6%, a sample of the questions shows that they are all multiple choice questions, which suggests that it is very unlikely that the answer will be found in the question itself.
Recycled questions
End of explanation
def value(row):
if row["clean_value"] > 800:
value = 1
else:
value = 0
return value
jeopardy["high_value"] = jeopardy.apply(value, axis=1)
jeopardy.head()
Explanation: Low value vs high value questions
End of explanation |
2,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step32: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step38: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step40: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step42: Checkpoint
Step45: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step48: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step50: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
count = Counter(text)
vocab = sorted(count, key=count.get, reverse=True)
vocab_to_int = {word: ii for ii,word in enumerate(vocab,0)}
int_to_vocab = {val:key for key,val in vocab_to_int.items()}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
import re
dd = re.split(' |\n',scenes[0]) # split both based on space and \n and could be others
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
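A quick usage sketch of the function above (the exact ids assigned to equally-frequent words may vary):
demo_vocab_to_int, demo_int_to_vocab = create_lookup_tables(['the', 'cat', 'sat', 'on', 'the', 'mat'])
print(demo_vocab_to_int['the'])    # 0, since 'the' is the most frequent word
print(demo_int_to_vocab[0])        # 'the'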
End of explanation
from string import punctuation
# strips all punctuations.. this is cool!
print(punctuation)
text1 = 'I am though! but alas _ not enough.'
words1 = ''.join([c for c in text1 if c not in punctuation])
#words1 = [word for word in text1.split(" ") if word not in punctuation]
words1
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
dict1={'.':'||Period||', ',':'||Comma||', '"':'||Quotation-mark||', ';':'||Semicolon||',
'!':"||Exclamation-mark||", '?':"||Question-mark||", '(':"||Left-Parentheses||",
')':"||Right-Parentheses||", '--':"||Dash||", '\n':"Return"}
return dict1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
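A minimal sketch of how such a dictionary might be applied before splitting on spaces (the sample line is made up; the project's actual preprocessing is done by helper.preprocess_and_save_data below):
sample = 'Moe_Szyslak: Hey, what can I do for you?'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split())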
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
len(vocab_to_int)
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input = tf.placeholder(tf.int32,shape=(None,None),name='input')
targets = tf.placeholder(tf.int32,shape=(None,None),name='targets')
learning_rate = tf.placeholder(tf.float32,name='learning_rate')
# TODO: Implement Function
return (input, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
keep_prob = tf.placeholder(tf.float32,name='keep_prob')
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
    # stack two LSTM layers; create separate cell instances so the layers
    # do not end up sharing weights (which reusing one instance in MultiRNNCell would cause)
    cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(2)]
    cell = tf.contrib.rnn.MultiRNNCell(cells)
initialize_state = cell.zero_state(batch_size=batch_size, dtype=tf.float32)
initialize_state = tf.identity(initialize_state, name='initial_state')
#initialize_state = tf.contrib.rnn.MultiRNNCell.zero_state(batch_size=batch_size,dtype=tf.float32)
# TODO: Implement Function
return cell, initialize_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
word_embedding = tf.Variable(initial_value=tf.random_uniform((vocab_size, embed_dim),-1,1),name='word_embedding')
embedded_input = tf.nn.embedding_lookup(word_embedding, input_data)
return embedded_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
# TODO: Implement Function
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
    embed_dim = 200  # overrides the embed_dim argument; the hyperparameter cell further down leaves embed_dim unset
embedded_input = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embedded_input)
#output_weight = tf.Variable(tf.truncated_normal((vocab_size,rnn_size)),name='output_weights')
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
# initial state to the RNN is optional
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
effective_len = len(int_text) - 2*seq_length - 2
num_batches = int( effective_len /(batch_size*seq_length))
ind = 0
input1 = np.zeros((num_batches,2,batch_size,seq_length),dtype=np.int32)
for j in range(batch_size):
for i in range(num_batches):
input1[i][0][j] = int_text[ind:ind+seq_length]
input1[i][1][j] = int_text[ind+1:ind+seq_length+1]
ind += seq_length
return input1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
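For intuition about get_batches (an illustrative check with made-up numbers, not project data): the returned array has shape (num_batches, 2, batch_size, seq_length), where index 0 along the second axis holds the input sequences and index 1 holds the targets shifted by one word.
toy_ids = list(range(40))
toy_batches = get_batches(toy_ids, batch_size=2, seq_length=5)
print(toy_batches.shape)     # (2, 2, 2, 5) with this implementation
print(toy_batches[0][0][0])  # first input sequence, e.g. [0 1 2 3 4]
print(toy_batches[0][1][0])  # its target: the same words shifted by one, e.g. [1 2 3 4 5]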
# Number of Epochs
num_epochs = 10
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 128
# Embedding Dimension Size
embed_dim = None  # not used directly: build_nn() hard-codes the embedding size to 200
# Sequence Length
seq_length = 11
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
keep_prob = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
seq_length
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_tensor = loaded_graph.get_tensor_by_name('input:0')
initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
prob_tensor = loaded_graph.get_tensor_by_name('probs:0')
# TODO: Implement Function
return input_tensor, initial_state_tensor, final_state_tensor, prob_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
ind = np.argmax(probabilities)
return int_to_vocab[ind]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
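Because pick_word() uses argmax, generation is deterministic and can get stuck repeating high-probability words. A sketch of a sampling variant (illustrative only; the function name is mine and it is not the graded solution):
def pick_word_sampled(probabilities, int_to_vocab):
    # sample the next word id in proportion to its predicted probability
    probabilities = np.asarray(probabilities, dtype=np.float64)
    probabilities /= probabilities.sum()  # guard against rounding drift
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]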
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
2,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative filtering on the MovieLens Dataset
Learning Objectives
Know how to build a BigQuery ML Matrix Factorization Model
Know how to use the model to make recommendations for a user
Know how to use the model to recommend an item to a group of users
This notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani.
Step1: Exploring the data
Two tables should now be available in <a href="https://console.cloud.google.com/bigquery">BigQuery</a>.
Step2: A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.
Step3: On examining the first few movies using the following query, we can see that the genres column is a formatted string
Step4: We can parse the genres into an array and rewrite the table as follows
Step5: Matrix factorization
Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors is a low-dimensional representation of a user_id and the item factors similarly represents an item_id.
We can create the recommender model using (<b>Optional</b>, takes 30 minutes. Note: we have a model we already trained if you want to skip this step):
Step6: Note that we create a model as usual, except that the model_type is matrix_factorization and that we have to identify which columns play what roles in the collaborative filtering setup.
What did you get? Our model took an hour to train, and the training loss starts out extremely bad and gets driven down to near-zero over the next four iterations
Step7: Now, we get faster convergence (three iterations instead of five), and a lot less overfitting. Here are our results
Step8: When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation. So, let’s pick this as the final matrix factorization model and move on.
Making recommendations
With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.
Step9: Filtering out already rated movies
Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.
TODO 2: Make a prediction for user 903 that does not include already seen movies.
Step10: For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.
Customer targeting
In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId = 96481 (American Mullet) which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest.
TODO 3: Find the top five users who will likely enjoy American Mullet (2001)
Step11: Batch predictions for all users and movies
What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook. | Python Code:
import os
PROJECT = "your-project-here" # REPLACE WITH YOUR PROJECT ID
# Do not change these
os.environ["PROJECT"] = PROJECT
%%bash
rm -r bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
--description 'Movie Recommendations' \
$PROJECT:movielens
bq --location=US load --source_format=CSV \
--autodetect movielens.ratings ml-20m/ratings.csv
bq --location=US load --source_format=CSV \
--autodetect movielens.movies_raw ml-20m/movies.csv
Explanation: Collaborative filtering on the MovieLens Dataset
Learning Objectives
Know how to build a BigQuery ML Matrix Factorization Model
Know how to use the model to make recommendations for a user
Know how to use the model to recommend an item to a group of users
This notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani.
MovieLens dataset
To illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation.
Download the data and load it as a BigQuery table using:
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM movielens.ratings
LIMIT 10
Explanation: Exploring the data
Two tables should now be available in <a href="https://console.cloud.google.com/bigquery">BigQuery</a>.
Collaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, <b>movielens.ratings</b>, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don’t have to rate all products. Then, based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. Then, we can recommend the products with the highest predicted ratings to users, or target products at users with the highest predicted ratings.
End of explanation
%%bigquery --project $PROJECT
SELECT
COUNT(DISTINCT userId) numUsers,
COUNT(DISTINCT movieId) numMovies,
COUNT(*) totalRatings
FROM movielens.ratings
Explanation: A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies_raw
WHERE movieId < 5
Explanation: On examining the first few movies using the following query, we can see that the genres column is a formatted string:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT * REPLACE(SPLIT(genres, "|") AS genres)
FROM movielens.movies_raw
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies
WHERE movieId < 5
Explanation: We can parse the genres into an array and rewrite the table as follows:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender
options(model_type='matrix_factorization',
user_col='userId', item_col='movieId', rating_col='rating')
AS
SELECT
userId, movieId, rating
FROM movielens.ratings
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`)
Explanation: Matrix factorization
Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors is a low-dimensional representation of a user_id and the item factors similarly represents an item_id.
We can create the recommender model using (<b>Optional</b>, takes 30 minutes. Note: we have a model we already trained if you want to skip this step):
End of explanation
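To make the factorization concrete (a standard formulation added here for reference, not a quote from the BigQuery documentation): each user $u$ gets a factor vector $p_u$ and each movie $i$ a factor vector $q_i$, and the model predicts a rating as their dot product,
$$\hat{r}_{ui} = p_u \cdot q_i,$$
with the factors fit to minimize the squared error against the observed ratings.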
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_l2
options(model_type='matrix_factorization',
user_col='userId', item_col='movieId',
rating_col='rating', l2_reg=0.2)
AS
SELECT
userId, movieId, rating
FROM movielens.ratings
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_l2`)
Explanation: Note that we create a model as usual, except that the model_type is matrix_factorization and that we have to identify which columns play what roles in the collaborative filtering setup.
What did you get? Our model took an hour to train, and the training loss starts out extremely bad and gets driven down to near-zero over the next four iterations:
<table>
<tr>
<th>Iteration</th>
<th>Training Data Loss</th>
<th>Evaluation Data Loss</th>
<th>Duration (seconds)</th>
</tr>
<tr>
<td>4</td>
<td>0.5734</td>
<td>172.4057</td>
<td>180.99</td>
</tr>
<tr>
<td>3</td>
<td>0.5826</td>
<td>187.2103</td>
<td>1,040.06</td>
</tr>
<tr>
<td>2</td>
<td>0.6531</td>
<td>4,758.2944</td>
<td>219.46</td>
</tr>
<tr>
<td>1</td>
<td>1.9776</td>
<td>6,297.2573</td>
<td>1,093.76</td>
</tr>
<tr>
<td>0</td>
<td>63,287,833,220.5795</td>
<td>168,995,333.0464</td>
<td>1,091.21</td>
</tr>
</table>
However, the evaluation data loss is quite high, and much higher than the training data loss. This indicates that overfitting is happening, and so we need to add some regularization. Let’s do that next. Note the added l2_reg=0.2 (<b>Optional</b>, takes 30 minutes):
End of explanation
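Roughly speaking (a sketch of the objective, up to BigQuery ML implementation details), the l2_reg option adds a penalty on the size of the factors,
$$\min_{p, q} \sum_{(u,i)} \left(r_{ui} - p_u \cdot q_i\right)^2 + \lambda \left(\sum_u \lVert p_u \rVert^2 + \sum_i \lVert q_i \rVert^2\right),$$
which is what tames the runaway evaluation loss seen in the unregularized run.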
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_16
options( #TODO: Insert parameters to make a 16 factor matrix factorization model
) AS
SELECT
userId, movieId, rating
FROM movielens.ratings
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_16`)
Explanation: Now, we get faster convergence (three iterations instead of five), and a lot less overfitting. Here are our results:
<table>
<tr>
<th>Iteration</th>
<th>Training Data Loss</th>
<th>Evaluation Data Loss</th>
<th>Duration (seconds)</th>
</tr>
<tr>
<td>2</td>
<td>0.6509</td>
<td>1.4596</td>
<td>198.17</td>
</tr>
<tr>
<td>1</td>
<td>1.9829</td>
<td>33,814.3017</td>
<td>1,066.06</td>
</tr>
<tr>
<td>0</td>
<td>481,434,346,060.7928</td>
<td>2,156,993,687.7928</td>
<td>1,024.59</td>
</tr>
</table>
By default, BigQuery sets the number of factors to be the log2 of the number of rows. In our case, since we have 20 million rows in the table, the number of factors would have been chosen to be 24. As with the number of clusters in K-Means clustering, this is a reasonable default but it is often worth experimenting with a number about 50% higher (36) and a number that is about a third lower (16):
TODO 1: Create a Matrix Factorization model with 16 factors
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation. So, let’s pick this as the final matrix factorization model and move on.
Making recommendations
With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
WITH seen AS (
SELECT ARRAY_AGG(movieId) AS movies
FROM movielens.ratings
WHERE userId = 903
)
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g, seen
WHERE # TODO: Complete this WHERE to remove seen movies.
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: Filtering out already rated movies
Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.
TODO 2: Make a prediction for user 903 that does not include already seen movies.
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
SELECT
96481 AS movieId,
(SELECT title FROM movielens.movies WHERE movieId=96481) title,
userId
FROM
# TODO: Select all users
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.
Customer targeting
In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId = 96481 (American Mullet) which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest.
TODO 3: Find the top five users who will likely enjoy American Mullet (2001)
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`)
LIMIT 10
Explanation: Batch predictions for all users and movies
What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook.
End of explanation |
2,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonparametric tests
Test | One-sample | Two-sample | Two-sample (paired samples)
------------- | ------------- | ------------- | -------------
Sign | $\times$ |  | $\times$
Rank | $\times$ | $\times$ | $\times$
Permutation | $\times$ | $\times$ | $\times$
Real estate in Seattle
We have data on the sale prices of real estate in Seattle for 50 transactions in 2001 and 50 in 2002. Did prices change on average?
Step1: Loading the data
Step2: Two-sample tests for independent samples
$H_0\colon$ the median real estate prices in 2001 and 2002 are equal
$H_1\colon$ the median real estate prices in 2001 and 2002 are not equal
Step3: Mann-Whitney rank test
$H_0\colon P(X > Y) = \frac1{2}$
$H_1\colon P(X > Y) \neq \frac1{2}$
Step4: Permutation test
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
import numpy as np
import pandas as pd
import itertools
from scipy import stats
from statsmodels.stats.descriptivestats import sign_test
from statsmodels.stats.weightstats import zconfint
from statsmodels.stats.weightstats import *
%pylab inline
Explanation: Nonparametric tests
Test | One-sample | Two-sample | Two-sample (paired samples)
------------- | ------------- | ------------- | -------------
Sign | $\times$ |  | $\times$
Rank | $\times$ | $\times$ | $\times$
Permutation | $\times$ | $\times$ | $\times$
Real estate in Seattle
We have data on the sale prices of real estate in Seattle for 50 transactions in 2001 and 50 in 2002. Did prices change on average?
End of explanation
seattle_data = pd.read_csv('seattle.txt', sep = '\t', header = 0)
seattle_data.shape
seattle_data.head()
price2001 = seattle_data[seattle_data['Year'] == 2001].Price
price2002 = seattle_data[seattle_data['Year'] == 2002].Price
pylab.figure(figsize=(12,4))
pylab.subplot(1,2,1)
pylab.grid()
pylab.hist(price2001, color = 'r')
pylab.xlabel('2001')
pylab.subplot(1,2,2)
pylab.grid()
pylab.hist(price2002, color = 'b')
pylab.xlabel('2002')
pylab.show()
Explanation: Loading the data
End of explanation
print '95%% confidence interval for the mean: [%f, %f]' % zconfint(price2001)
print '95%% confidence interval for the mean: [%f, %f]' % zconfint(price2002)
Explanation: Two-sample tests for independent samples
$H_0\colon$ the median real estate prices in 2001 and 2002 are equal
$H_1\colon$ the median real estate prices in 2001 and 2002 are not equal
End of explanation
stats.mannwhitneyu(price2001, price2002)
Explanation: Mann-Whitney rank test
$H_0\colon P(X > Y) = \frac1{2}$
$H_1\colon P(X > Y) \neq \frac1{2}$
End of explanation
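One caveat (my addition, not in the original notebook): older SciPy releases default to a one-sided Mann-Whitney p-value, so for the two-sided alternative stated above it is safer to request it explicitly:
stats.mannwhitneyu(price2001, price2002, alternative='two-sided')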
def permutation_t_stat_ind(sample1, sample2):
return np.mean(sample1) - np.mean(sample2)
def get_random_combinations(n1, n2, max_combinations):
index = range(n1 + n2)
indices = set([tuple(index)])
for i in range(max_combinations - 1):
np.random.shuffle(index)
indices.add(tuple(index))
return [(index[:n1], index[n1:]) for index in indices]
def permutation_zero_dist_ind(sample1, sample2, max_combinations = None):
joined_sample = np.hstack((sample1, sample2))
n1 = len(sample1)
n = len(joined_sample)
if max_combinations:
indices = get_random_combinations(n1, len(sample2), max_combinations)
else:
indices = [(list(index), filter(lambda i: i not in index, range(n))) \
for index in itertools.combinations(range(n), n1)]
distr = [joined_sample[list(i[0])].mean() - joined_sample[list(i[1])].mean() \
for i in indices]
return distr
pylab.hist(permutation_zero_dist_ind(price2001, price2002, max_combinations = 1000))
pylab.show()
def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
t_stat = permutation_t_stat_ind(sample, mean)
zero_distr = permutation_zero_dist_ind(sample, mean, max_permutations)
if alternative == 'two-sided':
return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'less':
return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'greater':
return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)
print "p-value: %f" % permutation_test(price2001, price2002, max_permutations = 10000)
print "p-value: %f" % permutation_test(price2001, price2002, max_permutations = 50000)
Explanation: Permutation test
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
End of explanation |
2,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Inline visualization of TensorFlow graph from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb and http://sdsawtelle.github.io/blog/output/getting-started-with-tensorflow-in-jupyter.html
Step5: Preprocessing the data
Step7: Function for providing batches
Step8: Defining the TensorFlow model with the core API
Step9: Training loop
Step10: Defining the TensorFlow model with the tf.layers API.
Step11: Training loop
Step12: Defining the TensorFlow model with the tf.estimator API.
from IPython.display import clear_output, Image, display, HTML
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
Strip large constant values from graph_def.
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def show_graph(graph_def, max_const_size=32):
Visualize TensorFlow graph.
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code =
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe =
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
.format(code.replace('"', '"'))
display(HTML(iframe))
Explanation: Inline visualization of TensorFlow graph from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
and
http://sdsawtelle.github.io/blog/output/getting-started-with-tensorflow-in-jupyter.html
End of explanation
dataset = mnist.load_data()
train_data = dataset[0][0] / 255
train_data = train_data[..., np.newaxis].astype('float32')
train_labels = np_utils.to_categorical(dataset[0][1]).astype('float32')
test_data = dataset[1][0] / 255
test_data = test_data[..., np.newaxis].astype('float32')
test_labels = np_utils.to_categorical(dataset[1][1]).astype('float32')
train_data.shape
train_labels[0]
plt.imshow(train_data[0, ..., 0])
Explanation: Preprocessing the data
End of explanation
def get_batch(data, labels, num_samples):
Get a random batch of corresponding data and labels of size `num_samples`
idx = np.random.choice(np.arange(0, data.shape[0]), num_samples)
return data[[idx]], labels[[idx]]
Explanation: Function for providing batches
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='VALID')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='VALID')
graph_plain = tf.Graph()
with graph_plain.as_default():
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
with tf.name_scope('conv2d_1'):
W_conv1 = weight_variable([3, 3, 1, 32])
b_conv1 = bias_variable([32])
act_conv1 = tf.nn.relu(conv2d(x, W_conv1) + b_conv1)
with tf.name_scope('max_pooling2d_1'):
pool1 = max_pool_2x2(act_conv1)
with tf.name_scope('conv2d_2'):
W_conv2 = weight_variable([3, 3, 32, 32])
b_conv2 = bias_variable([32])
        act_conv2 = tf.nn.relu(conv2d(pool1, W_conv2) + b_conv2)  # add the bias before the ReLU, matching conv2d_1
with tf.name_scope('dropout_1'):
keep_prob1 = tf.placeholder(tf.float32)
drop1 = tf.nn.dropout(act_conv2, keep_prob=keep_prob1)
with tf.name_scope('flatten_1'):
flatten_1 = tf.reshape(drop1, [-1, 11 * 11 * 32])
with tf.name_scope('dense_1'):
W_dense1 = weight_variable([11 * 11 * 32, 64])
b_dense_1 = bias_variable([64])
act_dense1 = tf.nn.relu((flatten_1 @ W_dense1) + b_dense_1)
with tf.name_scope('dropout_2'):
keep_prob2 = tf.placeholder(tf.float32)
drop2 = tf.nn.dropout(act_dense1, keep_prob=keep_prob2)
with tf.name_scope('dense_2'):
W_dense2 = weight_variable([64, 10])
b_dense2 = bias_variable([10])
# Dont use softmax activation function, because tf provides cross entropy only in conjunction with it.
net_dense2 = (drop2 @ W_dense2) + b_dense2
with tf.name_scope('loss'):
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=net_dense2, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
correct_prediction = tf.equal(tf.argmax(net_dense2, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Create init op and saver in the graph, so they can find the variables.
init_op_plain = tf.global_variables_initializer()
saver = tf.train.Saver()
show_graph(graph_plain)
Explanation: Defining the TensorFlow model with the core API
End of explanation
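A quick check on the flatten size used above (arithmetic implied by the layer settings, not stated in the original): with VALID padding a 3x3 convolution shrinks each spatial side by 2, and the 2x2 stride-2 pool halves it (rounding down), so
$$28 \xrightarrow{3\times 3\ \text{conv}} 26 \xrightarrow{2\times 2\ \text{pool}} 13 \xrightarrow{3\times 3\ \text{conv}} 11,$$
which is why the tensor is reshaped to $11 \cdot 11 \cdot 32 = 3872$ features before the first dense layer.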
sess = tf.Session(graph=graph_plain)
sess.run(init_op_plain)
for i in range(1000):
batch = get_batch(train_data, train_labels, 50)
if i % 100 == 0:
train_accuracy = sess.run(
fetches=accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob1: 0.75, keep_prob2: 0.5}
)
print('step %d, training accuracy %g' % (i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob1: 0.75, keep_prob2: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={x: test_data, y_: test_labels, keep_prob1: 1.0, keep_prob2: 1.0},
session=sess))
# Save the model including weights.
saver.save(sess, 'tf_mnist_model_plain/tf_mnist_model.ckpt')
sess.close()
Explanation: Training loop
End of explanation
graph_layers = tf.Graph()
with graph_layers.as_default():
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
training = tf.placeholder_with_default(False, shape=(), name='training') # Switch for dropout layers.
t = tf.layers.conv2d(x, filters=32, kernel_size=(3 ,3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_1')
t = tf.layers.max_pooling2d(t, pool_size=(2, 2), strides=(2, 2),
name='max_pooling2d_1')
t = tf.layers.conv2d(t, filters=32, kernel_size=(3, 3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_2')
t = tf.layers.dropout(t, rate=0.25, training=training, name='dropout_1')
t = tf.contrib.layers.flatten(t)
# Dense does not really flatten, but behaves like tensordot
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html
# https://github.com/tensorflow/tensorflow/issues/8175
t = tf.layers.dense(t, units=64, activation=tf.nn.relu, name='dense_1')
t = tf.layers.dropout(t, rate=0.5, training=training, name='dropout_2')
t = tf.layers.dense(t, units=10, name='dense_2')
with tf.name_scope('loss'):
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=t, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
correct_prediction = tf.equal(tf.argmax(t, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Create init op and saver in the graph, so they can find the variables.
init_op_layers = tf.global_variables_initializer()
saver = tf.train.Saver()
show_graph(graph_layers)
Explanation: Defining the TensorFlow model with the tf.layers API.
End of explanation
sess = tf.Session(graph=graph_layers)
sess.run(init_op_layers)
for i in range(2000):
batch = get_batch(train_data, train_labels, 50)
if i % 100 == 0:
train_accuracy = sess.run(
fetches=accuracy, feed_dict={x: batch[0], y_: batch[1], training: True}
)
print('step %d, training accuracy %g' % (i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], training: True})
print('test accuracy %g' % accuracy.eval(feed_dict={x: test_data, y_: test_labels},
session=sess))
# Save the model including weights.
saver.save(sess, 'tf_mnist_model_layers/tf_mnist_model.ckpt')
sess.close()
Explanation: Training loop
End of explanation
def model_fn(features, labels, mode):
training = (mode == tf.estimator.ModeKeys.TRAIN)
t = tf.layers.conv2d(features['x'], filters=32, kernel_size=(3 ,3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_1')
t = tf.layers.max_pooling2d(t, pool_size=(3, 3), strides=(1 ,1),
name='max_pooling2d_1')
t = tf.layers.conv2d(t, filters=32, kernel_size=(3, 3), activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
name='conv2d_2')
t = tf.layers.dropout(t, rate=0.25, training=training, name='dropout_1')
t = tf.contrib.layers.flatten(t)
# Dense does not really flatten, but behaves like tensordot
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html
# https://github.com/tensorflow/tensorflow/issues/8175
t = tf.layers.dense(t, units=64, activation=tf.nn.relu, name='dense_1')
t = tf.layers.dropout(t, rate=0.5, training=training, name='dropout_2')
t = tf.layers.dense(t, units=10, name='dense_2')
predictions = tf.argmax(t, axis=1)
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={"numbers": predictions}
)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(predictions=predictions,
labels=tf.argmax(labels, axis=1))
}
loss = tf.losses.softmax_cross_entropy(labels, t)
train_op = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step=tf.train.get_global_step())
# Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops
)
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='tf_mnist_model_estimator/')
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': train_data.astype('float32')},
y=train_labels.astype('float32'),
batch_size=50,
num_epochs=1,
shuffle=True
)
estimator.train(input_fn=train_input_fn)
# The model is automatically saved when using the estimator API.
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': test_data},
y=test_labels,
num_epochs=1,
shuffle=False)
estimator.evaluate(input_fn=test_input_fn)
plt.imshow(train_data[0, ..., 0])
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data[0:1]},
shuffle=False)
predictions = estimator.predict(input_fn=predict_input_fn)
for pred in predictions:
print(pred)
# Restore model to look at the graph.
import tensorflow as tf
sess=tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('tf_mnist_model_estimator/model.ckpt-1200.meta')
saver.restore(sess,tf.train.latest_checkpoint('tf_mnist_model_estimator/'))
show_graph(sess.graph)
Explanation: Defining the TensorFlow model with the tf.estimator API.
End of explanation |
2,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#PRODUCT_ID" data-toc-modified-id="PRODUCT_ID-1"><span class="toc-item-num">1 </span>PRODUCT_ID</a></div><div class="lev1 toc-item"><a href="#SOURCE_PRODUCT_ID" data-toc-modified-id="SOURCE_PRODUCT_ID-2"><span class="toc-item-num">2 </span>SOURCE_PRODUCT_ID</a></div><div class="lev1 toc-item"><a href="#HiRISE_URL" data-toc-modified-id="HiRISE_URL-3"><span class="toc-item-num">3 </span>HiRISE_URL</a></div><div class="lev1 toc-item"><a href="#others" data-toc-modified-id="others-4"><span class="toc-item-num">4 </span>others</a></div>
Step1: PRODUCT_ID
Step2: SOURCE_PRODUCT_ID
Step3: http://hirise-pds.lpl.arizona.edu/PDS/EDR/PSP/ORB_003000_003099/PSP_003092_0985/PSP_003092_0985_RED4_0.IMG
Step4: HiRISE_URL
Step5: others | Python Code:
# setup
from pyrise import products as prod
obsid = prod.OBSERVATION_ID('PSP_003072_0985')
# test orbit number
assert obsid.orbit == '003072'
# test setting orbit property
obsid.orbit = 4080
assert obsid.orbit == '004080'
# test repr
assert obsid.__repr__() == 'PSP_004080_0985'
# test targetcode
assert obsid.targetcode == '0985'
# test setting targetcode property
obsid.targetcode = '0980'
assert obsid.targetcode == '0980'
assert obsid.__repr__() == 'PSP_004080_0980'
# test phase
assert obsid.phase == 'PSP'
# test upper orbit folder
assert obsid.get_upper_orbit_folder() == 'ORB_004000_004099'
# test storage path stem
assert obsid.storage_path_stem == 'PSP/ORB_004000_004099/PSP_004080_0980'
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#PRODUCT_ID" data-toc-modified-id="PRODUCT_ID-1"><span class="toc-item-num">1 </span>PRODUCT_ID</a></div><div class="lev1 toc-item"><a href="#SOURCE_PRODUCT_ID" data-toc-modified-id="SOURCE_PRODUCT_ID-2"><span class="toc-item-num">2 </span>SOURCE_PRODUCT_ID</a></div><div class="lev1 toc-item"><a href="#HiRISE_URL" data-toc-modified-id="HiRISE_URL-3"><span class="toc-item-num">3 </span>HiRISE_URL</a></div><div class="lev1 toc-item"><a href="#others" data-toc-modified-id="others-4"><span class="toc-item-num">4 </span>others</a></div>
End of explanation
pid = prod.PRODUCT_ID('PSP_003072_0985')
pid
pid.kind = 'RED'
pid
pid.s
pid.storage_stem
pid.label_fname
pid.label_path
pid.jp2_fname
pid.jp2_path
for item in dir(pid):
if not item.startswith('__'):
print(item,':')
print(getattr(pid, item))
print()
Explanation: PRODUCT_ID
End of explanation
spid = prod.SOURCE_PRODUCT_ID('PSP_003092_0985_RED4_0')
spid
spid.channel = 1
spid
spid.ccd
for i in dir(spid):
if not i.startswith('__'):
print(i,':')
print(getattr(spid, i))
print()
Explanation: SOURCE_PRODUCT_ID
End of explanation
spid.pid.storage_stem
spid.pid.edr_storage_stem
spid.fpath
Explanation: http://hirise-pds.lpl.arizona.edu/PDS/EDR/PSP/ORB_003000_003099/PSP_003092_0985/PSP_003092_0985_RED4_0.IMG
End of explanation
hiurl = prod.HiRISE_URL(spid.fpath)
hiurl.url
hiurl.path
Explanation: HiRISE_URL
End of explanation
pid.label_path
pid.obsid
pid
prod.RED_PRODUCT_ID(pid.obsid.s, 4, 1).furl
prod.RED_PRODUCT_ID(pid.obsid.s, 4,1)
from pyrise import downloads
obsid = 'PSP_003092_0985'
downloads.download_RED_product(obsid, 4, 0)
red_pid = prod.RED_PRODUCT_ID(pid.obsid.s, 4,1)
red_pid.fname
pid
name = obsid + '_RED'
channels = [4, 5]
ccds = [0, 1]
for channel in channels:
for ccd in ccds:
print(f'{name}{channel}_0.cub')
sid = prod.RED_PRODUCT_ID(obsid, 4,0)
sid.pid.label_url
Explanation: others
End of explanation |
2,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Merge a bar's reviews into a single document
Step1: Now we must generate a dictionary which maps vocabulary into a number | Python Code:
from itertools import chain
from collections import OrderedDict
reviews_merged = OrderedDict()
# Flatten the reviews, so each review is just a single list of words.
n_reviews = -1
for bus_id in set(review.business_id.values[:n_reviews]):
# This horrible line first collapses each review of a corresponding business into a list
# of lists, and then collapses the list of sentences to a long list of words
reviews_merged[bus_id] = list(chain.from_iterable(
chain.from_iterable( review.cleaned_tokenized[review.business_id==bus_id] )))
Explanation: Merge a bar's reviews into a single document
End of explanation
import time
from itertools import chain
print 'Generating vector dictionary....'
# Review level LDA
# review_flatten = list(chain.from_iterable(review.cleaned_tokenized.iloc[:]))
# id2word_wiki = corpora.Dictionary(review_flatten)
start = time.time()
# Business level LDA (all reviews for a business merged)
id2word_wiki = corpora.Dictionary(reviews_merged.values())
print 'Dictionary generated in %1.2f seconds'%(time.time()-start)
# Convert corpus to bag of words for use with gensim...
# See https://radimrehurek.com/gensim/tut1.html#from-strings-to-vectors
#corpus = map(lambda doc: id2word_wiki.doc2bow(doc), review_flatten)
corpus = map(lambda doc: id2word_wiki.doc2bow(doc), reviews_merged.values())
corpora.MmCorpus.serialize('../output/bar_corpus.mm', corpus)
# Can load the corpus with
# from gensim import corpora
# corpus = corpora.MmCorpus('../output/bar_corpus.mm')
import gensim
print 'Fitting LDA Model'
start = time.time()
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=10,
id2word=id2word_wiki, passes=5,)
print 'LDA fit in %1.2f seconds'%(time.time()-start)
for topic in ldamodel.print_topics(num_topics=10, num_words=8):
print topic
from sklearn.decomposition import LatentDirichletAllocation, nmf
lda = LatentDirichletAllocation(n_topics=10, evaluate_every=1000, n_jobs=12, verbose=True)
lda.fit(corpus[:2000])
Explanation: Now we must generate a dictionary which maps vocabulary into a number
End of explanation |
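For intuition about what doc2bow produces (a toy illustration with made-up tokens, not Yelp data):
from gensim import corpora
toy_docs = [['beer', 'great', 'beer'], ['great', 'service']]
toy_dict = corpora.Dictionary(toy_docs)
print(toy_dict.doc2bow(toy_docs[0]))  # e.g. [(0, 2), (1, 1)] -- (token_id, count) pairs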
2,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 8 Key
CHE 116: Numerical Methods and Statistics
Step1: 2.2
The 99% confidence interval is $\mu > -10.1$
Step2: 2.3
The 95% confidence interval is $12.5 \pm 4.3$
Step3: 2.4
The 95% confidence interval is $12.6 \pm 1.8$
Step4: 2.5
The 95% upper bound is $\mu < 5.2$ | Python Code:
import scipy.stats as ss
data_21 = [65.58, -28.15, 21.17, -0.57, 6.04, -10.21, 36.46, 10.67, 77.98, 15.97]
se = np.std(data_21, ddof=1) / np.sqrt(len(data_21))
T = ss.t.ppf(0.9, df=len(data_21) - 1)
print(np.mean(data_21), T * se)
Explanation: Homework 8 Key
CHE 116: Numerical Methods and Statistics
2/21/2019
1. Short Answer (12 Points)
[2 points] If you sum together 20 numbers sampled from a binomial distribution and 10 from a Poisson distribution, how is your sum distributed?
[2 points] If you sample 25 numbers from different beta distributions, how will each of the numbers be distributed?
[4 points] Assume a HW grade is determined as the sample mean of 3 HW problems. How is the HW grade distributed if we do not know the population standard deviation? Why?
[4 points] For part 3, how could not knowing the population standard deviation change how it's distributed? How does knowledge of that number change the behavior of a random variable?
1.1
Normal
1.2
We are not summing, so the CLT does not apply. Beta distributed
1.3
t-distribution, since we do not know population standard deviation and N < 25
1.4
We have to estimate the standard error using sample standard deviation, which itself is a random variable. If we have the exact number, then we no longer have two sources of randomness.
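To see the size of that effect (an illustration added to the key, not part of the graded answer): with N = 3 the t critical value is far larger than the normal one, which is exactly the extra uncertainty from estimating the standard deviation.
print(ss.t.ppf(0.975, df=2))  # ~4.30
print(ss.norm.ppf(0.975))     # ~1.96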
2. Confidence Intervals (30 Points)
Report the given confidence interval for error in the mean using the data given for each problem and describe in words what the confidence interval is for each example. 6 points each
2.1
80% Double.
data_21 = [65.58, -28.15, 21.17, -0.57, 6.04, -10.21, 36.46, 10.67, 77.98, 15.97]
2.2
99% Upper (lower bound, a value such that the mean lies above that value 99% of the time)
data_22 = [-8.78, -6.06, -6.03, -6.9, -13.57, -18.76, 1.5, -8.21, -3.21, -11.85, -2.72, -10.38, -11.03, -10.85, -7.6, -7.76, -5.99, -10.02, -6.32, -8.35, -19.28, -11.53, -6.04, -0.81, -12.01, -3.22, -9.25, -4.13, -7.22, -11.0, -14.42, 1.07]
2.3
95% Double
data_23 = [14.62, 10.34, 7.68, 15.81, 14.48]
2.4
Redo part 3 with a known standard deviation of 2
2.5
95% Lower (upper bound)
data_25 = [2.47, 2.03, 1.82, 6.98, 2.41, 2.32, 7.11, 5.89, 5.77, 3.34, 2.75, 6.51]
2.1
The 80% confidence interval is $19 \pm 14$
End of explanation
data_22 = [-8.78, -6.06, -6.03, -6.9, -13.57, -18.76, 1.5, -8.21, -3.21, -11.85, -2.72, -10.38, -11.03, -10.85, -7.6, -7.76, -5.99, -10.02, -6.32, -8.35, -19.28, -11.53, -6.04, -0.81, -12.01, -3.22, -9.25, -4.13, -7.22, -11.0, -14.42, 1.07]
se = np.std(data_22, ddof=1) / np.sqrt(len(data_22))
Z = ss.norm.ppf(1 - 0.99)
print(Z * se + np.mean(data_22))
Explanation: 2.2
The 99% confidence interval is $\mu > -10.1$
End of explanation
data_23 = [14.62, 10.34, 7.68, 15.81, 14.48]
se = np.std(data_23, ddof=1) / np.sqrt(len(data_23))
T = ss.t.ppf(0.975, df=len(data_23) - 1)
print(np.mean(data_23), T * se)
Explanation: 2.3
The 95% confidence interval is $12.5 \pm 4.3$
End of explanation
data_23 = [14.62, 10.34, 7.68, 15.81, 14.48]
se = 2 / np.sqrt(len(data_23))
Z = ss.norm.ppf(0.975)
print(np.mean(data_23), Z * se)  # use the normal critical value Z, since the population standard deviation is known
Explanation: 2.4
The 95% confidence interval is $12.6 \pm 1.8$
End of explanation
data_25 = [2.47, 2.03, 1.82, 6.98, 2.41, 2.32, 7.11, 5.89, 5.77, 3.34, 2.75, 6.51]
se = np.std(data_25, ddof=1) / np.sqrt(len(data_25))
T = ss.t.ppf(0.95, df=len(data_25) - 1)
print(np.mean(data_25) + T * se)
Explanation: 2.5
The 95% upper bound is $\mu < 5.2$
End of explanation |
2,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intermediate Pandas
ToC
Navigating multilevel index
Accessing rows and columns
Naming indices
Accessing rows and columns using cross section
Missing data
dropna
fillna
Data aggregation
groupby
mean min max
describe
transpose
Combining DataFrames
concat
merge
inner merge
merge on multiple columns
outer merge
Sorting
left merge
right merge
join
Navigating multilevel index
Step1: Accessing rows and columns
You can use loc and iloc as a chain to access the elements. Go from outer index to inner index
Step2: Naming indices
Indices can have names (appear similar to column names)
Step3: Accessing rows and columns using cross section
The xs method allows to get a cross section. The advantage is it can penetrate a multilevel index in a single step. Now that we have named the indices, we can use cross section effectively
Step4: Missing data
You can either drop rows/cols with missing values using dropna() or fill those cells with values using the fillna() methods.
dropna
Use dropna(axis, thresh, ...) where axis is 0 for rows, 1 for cols, and thresh is the minimum number of non-NaN values a row/column must contain to be kept
Step5: fillna
Step6: Data aggregation
Pandas allows sql like control on the dataframes. You can treat each DF as a table and perform sql aggregation.
groupby
Format is: df.groupby('col_name').aggregation()
Step7: mean min max
Step8: You can run other aggregation functions like mean, min, max, std, count etc. Lets look at describe which does all of it.
describe
Step9: transpose
Long overdue: you can flip a DF (swap its rows and columns) by calling the transpose() method.
Step10: Combining DataFrames
You can concatenate, merge and join data frames.
Lets take a look at 3 DataFrames
Step11: concat
pd.concat([list_of_df], axis=0) will extend a dataframe either along rows or columns. All DF in the list should be of same dimension.
Step12: merge
merge lets you do a sql merge with inner, outer, right and left joins.
pd.merge(left, right, how='outer', on='key') where, left and right are your two DataFrames (tables) and on refers to the foreign key
Step13: inner merge
Inner join keeps only the intersection.
Step14: When both tables have column names (other than the merge keys in on) in common, pandas appends _x and _y to those names to differentiate them
merge on multiple columns
Sometimes, your foreign key is composite. Then you can merge on multiple keys by passing a list to the on argument.
Now lets add a key2 column to both the tables.
Step15: inner merge will only keep the intersection, thus only 2 rows.
outer merge
Use how='outer' to keep the union of both the tables. pandas fills NaN when a cell has no values.
Step16: Sorting
Use DataFrame.sort_values(by=columns, inplace=False, ascending=True) to sort the table.
Step17: right merge
how='right' will keep all the rows of the right table and drop the rows of the left table that don't have matching keys.
Step18: left merge
how='left' will similarly keep all rows of the left table and only those rows of the right table that have a matching foreign key.
Step19: join
Joins are like merges but work on the index instead of columns. DataFrame.join() defaults to a left join (how='left'); it also accepts 'right', 'inner', and 'outer'. See example below
Step20: Thus all rows of df_a and those in df_b. If df_b did not have that index, then NaN for values. | Python Code:
import pandas as pd
import numpy as np
# Index Levels
outside = ['G1','G1','G1','G2','G2','G2']
inside = [1,2,3,1,2,3]
hier_index = list(zip(outside,inside)) #create a list of tuples
hier_index
#create a multiindex
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index
# Create a dataframe (6,2) with multi level index
df = pd.DataFrame(np.random.randn(6,2),index=hier_index,columns=['A','B'])
df
Explanation: Intermediate Pandas
ToC
Navigating multilevel index
Accessing rows and columns
Naming indices
Accessing rows and columns using cross section
Missing data
dropna
fillna
Data aggregation
groupby
mean min max
describe
transpose
Combining DataFrames
concat
merge
inner merge
merge on multiple columns
outer merge
Sorting
left merge
right merge
join
Navigating multilevel index
End of explanation
#access columns as usual
df['A']
#access rows
df.loc['G1']
#access a single row from the inner index
df.loc['G1'].loc[1]
#access a single cell
df.loc['G2'].loc[3]['B']
Explanation: Accessing rows and columns
You can use loc and iloc as a chain to access the elements. Go from outer index to inner index
End of explanation
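The cells above use loc; for completeness (an illustrative addition using the same df), the positional equivalent with iloc:
# positional access into the multi-level frame
df.iloc[2]       # third row overall, same as df.loc['G1'].loc[3]
df.iloc[0:3]     # the whole 'G1' block
df.iloc[2]['B']  # a single cell by position plus column label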
df.index.names
df.index.names = ['Group', 'Serial']
df
Explanation: Naming indices
Indices can have names (appear similar to column names)
End of explanation
# Get all rows with Serial 1
df.xs(1, level='Serial')
# Get rows with serial 2 in group 1
df.xs(['G1',2])
Explanation: Accessing rows and columns using cross section
The xs method allows to get a cross section. The advantage is it can penetrate a multilevel index in a single step. Now that we have named the indices, we can use cross section effectively
End of explanation
d = {'a':[1,2,np.nan], 'b':[np.nan, 5, np.nan], 'c':[6,7,8]}
dfna = pd.DataFrame(d)
dfna
# dropping rows with one or more na values
dfna.dropna()
# dropping cols with one or more na values
dfna.dropna(axis=1)
# Dropping rows only if 2 or more cols have na values
dfna.dropna(axis=0, thresh=2)
Explanation: Missing data
You can either drop rows/cols with missing values using dropna() or fill those cells with values using the fillna() methods.
dropna
Use dropna(axis, thresh, ...) where axis is 0 for rows, 1 for cols, and thresh is the minimum number of non-NaN values a row/column must contain to be kept
End of explanation
dfna.fillna(value=999)
# filling with mean value of entire dataframe
dfna.fillna(value = dfna.mean())
# fill with mean value row by row
dfna['a'].fillna(value = dfna['a'].mean())
Explanation: fillna
End of explanation
comp_data = {'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],
'Person':['Sam','Charlie','Amy','Vanessa','Carl','Sarah'],
'Sales':[200,120,340,124,243,350]}
comp_df = pd.DataFrame(comp_data)
comp_df
Explanation: Data aggregation
Pandas allows sql like control on the dataframes. You can treat each DF as a table and perform sql aggregation.
groupby
Format is: df.groupby('col_name').aggregation()
End of explanation
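groupby also accepts several functions at once through .agg() (a small illustration added here, using the comp_df defined above):
comp_df.groupby('Company')['Sales'].agg(['mean', 'min', 'max', 'count'])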
# mean sales by company - automatically only applies mean on numerical columns
comp_df.groupby('Company').mean()
# standard deviation in sales by company
comp_df.groupby('Company').std()
Explanation: mean min max
End of explanation
comp_df.groupby('Company').describe()
Explanation: You can run other aggregation functions like mean, min, max, std, count etc. Lets look at describe which does all of it.
describe
End of explanation
comp_df.groupby('Company').describe().transpose()
comp_df.groupby('Company').describe().index
Explanation: transpose
Long overdue: you can flip a DF (swap its rows and columns) by calling the transpose() method.
End of explanation
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'], 'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],'D': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6, 7])
df1
df2
Explanation: Combining DataFrames
You can concatenate, merge and join data frames.
Lets take a look at 3 DataFrames
End of explanation
# extend along rows
pd.concat([df1, df2]) #flows well because the index is sequential and the columns match
#extend along columns
pd.concat([df1, df2], axis=1) #fills NaN where the indexes don't match
Explanation: concat
pd.concat([list_of_df], axis=0) will extend a dataframe either along rows or columns. All DF in the list should be of same dimension.
End of explanation
left = pd.DataFrame({'key1': ['K0', 'K1', 'K2', 'K3'],'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K2', 'K3'],'B': ['C0', 'C1', 'C2', 'C3'],
'C': ['D0', 'D1', 'D2', 'D3']})
left
right
Explanation: merge
merge lets you do a sql merge with inner, outer, right and left joins.
pd.merge(left, right, how='outer', on='key') where, left and right are your two DataFrames (tables) and on refers to the foreign key
End of explanation
#merge along key1
pd.merge(left, right, how='inner', on='key1')
Explanation: inner merge
Inner join keeps only the intersection.
End of explanation
left['key2'] = ['K0', 'K1', 'K0', 'K1']
left
right['key2'] = ['K0', 'K0', 'K0', 'K0']
right
pd.merge(left, right, how='inner', on=['key1', 'key2'])
Explanation: When both tables have column names (other than the merge keys in on) in common, pandas appends _x and _y to those names to differentiate them
merge on multiple columns
Sometimes, your foreign key is composite. Then you can merge on multiple keys by passing a list to the on argument.
Now lets add a key2 column to both the tables.
End of explanation
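If the automatic _x/_y suffixes are not descriptive enough, pandas lets you pick your own via the suffixes argument (an aside, not in the original):
pd.merge(left, right, how='inner', on=['key1', 'key2'], suffixes=('_left', '_right'))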
om = pd.merge(left, right, how='outer', on=['key1', 'key2'])
om
Explanation: inner merge will only keep the intersection, thus only 2 rows.
outer merge
Use how='outer' to keep the union of both the tables. pandas fills NaN when a cell has no values.
End of explanation
om.sort_values(by=['key1', 'key2']) #now you got the merge sorted by columns.
Explanation: Sorting
Use DataFrame.sort_values(by=columns, inplace=False, ascending=True) to sort the table.
End of explanation
pd.merge(left, right, how='right', on=['key1', 'key2']).sort_values(by='key1')
Explanation: right merge
how='right' will keep all the rows of the right table and drop the rows of the left table that don't have matching keys.
End of explanation
pd.merge(left, right, how='left', on=['key1', 'key2']).sort_values(by='key1')
Explanation: left merge
how='left' will similarly keep all rows of the left table and only those rows of the right table that have matching keys.
End of explanation
df_a = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
df_b = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
df_a
df_b
#join b to a, default mode = keep all rows of a and matching rows of b (left join)
df_a.join(df_b)
Explanation: join
Joins are like merges but match on the index instead of columns. DataFrame.join() defaults to a left join (how='left'); you can also pass how='right', 'inner', or 'outer'. See example below:
End of explanation
#join b to a
df_b.join(df_a)
#outer join - union of outputs
df_b.join(df_a, how='outer')
Explanation: Thus the result keeps all rows of df_a plus the matching rows of df_b; where df_b does not have that index, the values are NaN.
End of explanation |
2,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models Exercise 1
Imports
Step1: Fitting a quadratic curve
For this problem we are going to work with the following model
Step2: First, generate a dataset using this model using these parameters and the following characteristics
Step3: Now fit the model to the dataset to recover estimates for the model's parameters | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Fitting Models Exercise 1
Imports
End of explanation
a_true = 0.5
b_true = 2.0
c_true = -4.0
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
# YOUR CODE HERE
N = 30
xdata = np.linspace(-5, 5, N)
np.random.seed(0)
dy = 2.0
ydata = c_true + b_true * xdata + a_true * xdata**2 + np.random.normal(0.0, dy, size=N)
plt.errorbar(xdata, ydata, dy,fmt='og', ecolor='darkgray')
plt.xlabel('x')
plt.ylabel('y')
plt.grid();
assert True # leave this cell for grading the raw data generation and plot
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
# YOUR CODE HERE
def chi2(theta, x, y, dy):
# theta = [c, b, a]
return np.sum(((y - theta[0] - theta[1] * x - theta[2] * x**2) / dy) ** 2)
theta_guess = [0.0,1.0,2.0]
result = opt.minimize(chi2, theta_guess, args=(xdata,ydata,dy))
theta_best = result.x
print(theta_best)
xfit = np.linspace(-5, 5)
yfit = theta_best[2]*xfit**2 + theta_best[1]*xfit + theta_best[0]
plt.figure(figsize=(7,5))
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy, fmt='og', ecolor='darkgray')
plt.xlabel('x')
plt.ylabel('y')
plt.grid();
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation |
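The cells above only print the best-fit values. As one possible way to also report the 1-sigma uncertainties asked for (a sketch, assuming the same xdata, ydata and dy as above), scipy.optimize.curve_fit returns a covariance matrix whose diagonal gives the parameter variances:
def quad_model(x, c, b, a):
    # same model as chi2 above: y = c + b*x + a*x**2
    return c + b * x + a * x**2

popt, pcov = opt.curve_fit(quad_model, xdata, ydata, sigma=dy * np.ones(N), absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
for name, value, err in zip(['c', 'b', 'a'], popt, perr):
    print('{} = {:.3f} +/- {:.3f}'.format(name, value, err))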
2,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Leren
Step1: 1) Reading in data
Step2: 2) Gradient function
Step3: 3) Parameter updating
Step4: 4) Cost function
Step5: 5) Optimization learning rate and iterations
Step6: Polynomial Regression
1) Extension to polynomial regression
2) Cost function
3) Optimization learning rate and iterations
Discussion
Step7: 1) Reading the data
Step8: 2) Gradient calculating and parameter updating
Step9: 3) Cost function
Step10: 4) Pairwise comparison of classess
Step11: 5) Optimization learning rate and iterations | Python Code:
from __future__ import division
import numpy as np
import pandas as pd
import csv
import matplotlib.pylab as plt
class linReg:
df = None
input_vars = None
output_vars = None
thetas = None
alpha = 0.0
# formats the self.df properly
def __init__(self, fileName, alpha):
self.df = pd.read_csv(fileName, header=None)
length_col = len(self.df[self.df.columns[-1]])
# normalize the values
x = self.df[self.df.columns[0:-1]].as_matrix()
y = self.df[self.df.columns[-1]].as_matrix().reshape(length_col, 1)
self.output_vars = y / y.max(0)
        # add a fake x_0 column of ones to make the matrix multiplications work
        theta_0 = np.ones((length_col, 1))
        self.input_vars = np.hstack((theta_0, x))
        # one theta per column of input_vars (bias + features)
        self.thetas = np.ones((self.input_vars.shape[1], 1))
self.alpha = alpha
@property
def grad_vec(self):
return np.dot(self.input_vars, self.thetas)
@property
def update(self):
x = self.output_vars - self.grad_vec
y = np.dot(self.input_vars.T, x)
self.thetas = self.thetas + self.alpha * y
return self.thetas
@property
def cost(self):
summation = (self.grad_vec - self.output_vars)
return 0.5 * np.dot(summation.T, summation)
def train(self, iterations):
for i in range(iterations):
self.update
print(self.cost)
Explanation: Leren: Programming assignment 2
This assignment can be done in teams of 2
Student 1: <span style="color:red">Roan de Jong</span> (<span style="color:red">10791930</span>)<br>
Student 2: <span style="color:red">Ghislaine van den Boogerd</span> (<span style="color:red">student_id</span>)<br>
This notebook provides a template for your programming assignment 2. You may want to use parts of your code from the previous assignment(s) as a starting point for this assignment.
The code you hand-in should follow the structure from this document. Write down your functions in the cells they belong to. Note that the structure corresponds with the structure from the actual programming assignment. Make sure you read this for the full explanation of what is expected of you.
Submission:
Make sure your code can be run from top to bottom without errors.
Include your data files in the zip file.
Comment your code
One way to be sure your code runs without errors is to quit IPython completely, restart it, and run all cells again (you can do this by going to the menu bar above: Cell > Run all). This way you make sure that no old definitions of functions or values of variables are left over (that your program might still be using).
If you have any questions, ask your teaching assistant. We are here for you.
Multivariate Linear Regression
End of explanation
if __name__ == '__main__':
trainer = linReg('housesRegr.csv', 0.0000000000001)
Explanation: 1) Reading in data
End of explanation
if __name__ == '__main__':
trainer = linReg('housesRegr.csv', 0.0000000000001)
print(trainer.grad_vec)
Explanation: 2) Gradient function
End of explanation
if __name__ == '__main__':
trainer = linReg('housesRegr.csv', 0.0000000000001)
print(trainer.update)
Explanation: 3) Parameter updating
End of explanation
if __name__ == '__main__':
trainer = linReg('housesRegr.csv', 0.0000000000001)
print(trainer.cost)
Explanation: 4) Cost function
End of explanation
if __name__ == '__main__':
# the optimized learning rate
trainer = linReg('housesRegr.csv', 0.0000000000001)
trainer.train(1000000)
Explanation: 5) Optimization learning rate and iterations
End of explanation
from __future__ import division
import numpy as np
import pandas as pd
import csv
import math
class logReg:
df = None
input_vars = None
classifying_vars = None
thetas = None
alpha = 0.0
def __init__(self, fileName, alpha):
self.df = pd.read_csv(fileName, header=None)
length_col = len(self.df[self.df.columns[-1]])
self.classifying_vars = self.df[self.df.columns[-1]].as_matrix()\
.reshape(length_col, 1)
x = self.df[self.df.columns[0:-1]].as_matrix()
# this is the column for x_0
temp_arr = np.ones((1, len(x.T[0])))
for column in x.T:
if column.max(0) > 0:
column = column / column.max(0)
temp_arr = np.vstack((temp_arr, column))
self.input_vars = temp_arr.T
self.thetas = np.full((len(self.input_vars[0]), 1), 0.5)
self.alpha = alpha
@property
def gradient(self):
theta_x = np.dot(self.input_vars, self.thetas)
# An ugly way to make a np.array
h_x = np.array([0.0])
for example in theta_x:
h_x = np.vstack((h_x, 1 / (1 + math.e**(-example))))
        # Slice off the useless first placeholder entry (0.0)
return h_x[1:]
# Update the theta's as described in the lecture notes
def update(self, classifier):
        output_vars = self.classifying_vars.copy()  # copy so the true labels are not overwritten in place
np.place(output_vars, output_vars != classifier, [0])
np.place(output_vars, output_vars == classifier, [1])
x = self.gradient - output_vars
y = np.dot(self.input_vars.T, x)
self.thetas = self.thetas - self.alpha * y
return self.thetas
# calculate the cost
def cost(self, classifier):
h_x = self.gradient
cost = 0.0
for training_example in zip(h_x, self.classifying_vars):
if training_example[1] == classifier:
cost = cost + math.log(training_example[0])
else:
cost = cost + math.log(1 - training_example[0])
cost = -(1/len(self.classifying_vars)) * cost
return cost
# train the model on a certain number
def train(self, classifier, iterations):
for i in range(0, iterations):
self.update(classifier)
print(self.cost(classifier))
Explanation: Polynomial Regression
1) Extension to polynomial regression
2) Cost function
3) Optimization learning rate and iterations
Discussion:
[You discussion comes here]
Logistic Regression
1) Reading in data
End of explanation
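The polynomial-regression section above is left empty in this template. Purely as an illustrative sketch (build_poly_features is a made-up helper, not part of the assignment), one way to build the extra features that a class like linReg could consume:
def build_poly_features(x, degree):
    # x is an (m, n) array of raw inputs; returns [x, x**2, ..., x**degree] side by side
    return np.hstack([x ** d for d in range(1, degree + 1)])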
if __name__ == '__main__':
trainer = logReg('digits123.csv', 0.0001)
Explanation: 1) Reading the data
End of explanation
if __name__ == '__main__':
trainer = logReg('digits123.csv', 0.0001)
print(trainer.update(1))
Explanation: 2) Gradient calculating and parameter updating
End of explanation
if __name__ == '__main__':
trainer = logReg('digits123.csv', 0.0001)
trainer.train(3, 100)
Explanation: 3) Cost function
End of explanation
if __name__ == '__main__':
trainer = logReg('digits123.csv', 0.0001)
trainer.train(1, 100)
trainer.train(2, 100)
trainer.train(3, 100)
# the costs are quite similar right now, but it does seem to be the best for '3'
Explanation: 4) Pairwise comparison of classes
End of explanation
if __name__ == '__main__':
trainer = logReg('digits123.csv', 0.000000000001)
trainer.train(1, 1000)
Explanation: 5) Optimization learning rate and iterations
End of explanation |
2,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Curved edges
It doesn't appear that toyplot has the functionality to do radial curvature of edges. I need to dive into the actual SVG code that it writes to check...
https
Step1: Primer
M
Step3: GOAL FOR CIRLCE LAYOUT
Step4: GOAL FOR PIE CHARTS
Step5: Plan
Edges Class can be used to return ecoordinates and eshapes. Here I will set MA as opposed to MQ to indicate the use of SVG arc elements.
toyplot.mark seems to be where the MA info is expanded to make a path d="..." element. For curved edges this must incorporate the 'curvature' argument somehow, but maybe not, since it does not seem to work currently.
Like 'curvature', my Edge class should be able to build A elements from a single argument, origin, or alternatively, an x, y coordinate of the origin. From this it only needs to calculate the rx and ry (radius) and the sweep.
Curved edges
Currently the only option is curved edges, which uses bezier curves. This won't work, we need to use arcs.
Step8: Create an ArcEdges class similar to CurvedEdges
It takes an origin argument from which the radius can always be calculated, and it will also determine the sweep-flag automatically.
Step9: Parse SVG with lxml | Python Code:
import numpy as np
import toyplot
#import toytree
import toyplot.svg
from IPython.display import SVG
Explanation: Curved edges
It doesn't appear that toyplot has the functionality to do radial curvature of edges. I need to dive into the actual SVG code that it writes to check...
https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorial/Paths
Can it be done using toyplot ellipses?
End of explanation
%%HTML
<svg viewBox="0 0 100 20" xmlns="http://www.w3.org/2000/svg" overflow="auto" stroke="red">
<text x="15" y="23"> This text is wider than the SVG, so there should be a scrollbar shown.</text>
</svg>
%%SVG
<svg width='300' height='300' viewBox="0 0 10 10">
<rect width="10" height="10">
<animate attributeName="rx" values="0;10;0" dur="1s" repeatCount="indefinite" />
</rect>
</svg>
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 100 100 L 150 100 L 150 200" stroke-opacity="0.3"/>
<path d="M 100 100 L 50 100 L 50 200" stroke-opacity="0.3"/>
<path d="M 100 100 C 150 100, 150 100, 150 150" stroke='blue' stroke-opacity='0.3'/>
<path d="M 100 100 C 50 100, 50 100, 50 200" stroke='blue' stroke-opacity='0.3'/>
</g>
</svg>
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 100 100 L 100 50 L 200 50 "/>
<path d="M 100 100 L 100 150 L 200 150"/>
<path d="M 100 50 A 50 50, 0, 0, 0, 100 200" fill='grey' fill-opacity="0.2"/>
</g>
</svg>
Explanation: Primer
M: move to. Moves cursor to this position.
L: line to. Draws line from cursor to this position.
C: Bezier curves (x1 y1, x2 y2, x y):
Q: Quadratic curve (x1 y1, x y):
A: Arc (rx ry x-axis-rotation large-arc-flag sweep-flag x y)
SEQVIEW ALIGN
To get overflow behaviour, it needs to be set on a div or maybe a g element, not on the svg itself.
End of explanation
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 150 150 L 200 150" stroke="black"/>
<path d="M 200 150 A 50 50, 0, 0, 0, 150 100" stroke="blue"/>
<path d="M 150 100 A 50 50, 0, 0, 0, 100 150" stroke="green"/>
<path d="M 100 150 A 50 50, 0, 0, 0, 150 200" stroke="orange"/>
<path d="M 150 200 L 150 250" stroke="black"/>
<path d="M 150 250 A 100 100, 0, 0, 1, 50 150" stroke="red" />
<path d="M 50 150 A 100 100, 0, 0, 1, 150 50" stroke="indigo" />
<path d="M 150 50 A 100 100, 0, 0, 1, 150 250" stroke="indigo" />
</g>
</svg>
125 + 100, 175 + 100
100 + 100, 150 + 50
100 - 150, 100 - 50
125 - 175, 100 - 100
50 - 225, 100 - 100
175 / 2.
x = """
<svg width="320" height="320" xmlns="http://www.w3.org/2000/svg">
<path d=" M100 100 A 50 50 0 0 1 150 50" fill="yellow" fill-opacity='0.25' stroke='black'/>
<path d=" M125 100 A 25 25 0 0 1 175 100" fill='blue' fill-opacity='0.5' stroke='black' />
<path d=" M50 100 A 87 87 0 0 1 250 100" fill='blue' fill-opacity='0.5' stroke='black' />
<path d=" M 200 200 A 50 50 0 0 1 100 100 " fill="none" stroke="black" />
<path d=" M 100 100 A 50 50 0 0 0 200 200 " fill="none" stroke="black" />
<path d=" M 100 200 A 50 50 0 0 0 200 250 " fill="none" stroke="orange" />
<path d=" M 200 200 A 50 50 0 0 0 100 200 " fill="none" stroke="green" />
</svg>"""
from IPython.display import SVG
SVG(x)
%%SVG
<svg width="320" height="320" xmlns="http://www.w3.org/2000/svg">
<path d="M 10 315
L 110 215
A 30 50 0 0 1 162.55 162.45
L 172.55 152.45
A 30 50 -45 0 1 215.1 109.9
L 315 10" stroke="black" fill="green" stroke-width="2" fill-opacity="0.5"/>
</svg>
Explanation: GOAL FOR CIRCLE LAYOUT
End of explanation
%%SVG
<svg width="300" height="300" >
<g class="general" stroke="green" stroke-width="3" fill="none">
<path d="M 150 150 L 200 150" stroke="black"/>
<path d="M 200 150 A 50 50, 0, 0, 0, 150 100" stroke="blue" stroke-width='10'/>
<path d="M 150 100 A 50 50, 0, 0, 0, 100 150" stroke="green" stroke-width='10'/>
<path d="M 100 150 A 50 50, 0, 0, 0, 150 200" stroke="orange" stroke-width='10'/>
<path d="M 150 200 A 50 50, 0, 0, 0, 200 150" stroke="violet" stroke-width='10'/>
</g>
</svg>
#SVG(x)
Explanation: GOAL FOR PIE CHARTS
End of explanation
verts = np.array([(0, 0), (1, 0), (-1, 0)])
edges = np.array([(0, 1), (1, 2)])
# set up the canvas
c = toyplot.Canvas(width=400, height=300);
a = c.cartesian()
# add straight edges
a.graph(
np.array([(0, 1)]),
vcoordinates=[(0, 0), (1, 0)],
vlshow=False,
);
# add curved edges
a.graph(
np.array([(0, 1)]),
vcoordinates=[(1, 0), (-1, 0)],
layout=toyplot.layout.IgnoreVertices(
edges=ArcsEdges((0, 0))),
vlshow=False,
);
Explanation: Plan
Edges Class can be used to return ecoordinates and eshapes. Here I will set MA as opposed to MQ to indicate the use of SVG arc elements.
toyplot.mark seems to be where the MA info is expanded to make a path d="..." element. For curved edges this must incorporate the 'curvature' argument somehow, but maybe not, since it does not seem to work currently.
Like 'curvature', my Edge class should be able to build A elements from a single argument, origin, or alternatively, an x, y coordinate of the origin. From this it only needs to calculate the rx and ry (radius) and the sweep.
Curved edges
Currently the only option is curved edges, which uses bezier curves. This won't work, we need to use arcs.
End of explanation
??toyplot.mark
import numpy
class ArcsEdges(toyplot.layout.EdgeLayout):
Creates curved edges as arcs on a circle.
Parameters
----------
origin: tuple
The origin is the x,y coordinates of the circle center.
def __init__(self, origin):
self._origin = origin
def edges(self, vcoordinates, edges):
# check for loops
loops = edges.T[0] == edges.T[1]
if numpy.any(loops):
toyplot.log.warning(
"Graph contains %s loop edges that will not be visible.",
numpy.count_nonzero(loops))
# M will map start coords, A will map arc shape
eshapes = numpy.tile("MA", len(edges))
ecoordinates = numpy.empty((len(edges) * 3, 2))
# store start and end points
sources = vcoordinates[edges.T[0]]
targets = vcoordinates[edges.T[1]]
# calculate midpoints of arcs (TODO)
offsets = numpy.dot(targets - sources, [[0, 1], [-1, 0]]) * self._origin[0]
midpoints = ((sources + targets) * 0.5) + offsets
ecoordinates[0::3] = sources
ecoordinates[1::3] = midpoints
ecoordinates[2::3] = targets
return eshapes, ecoordinates
    def get_path(self, x0, y0, x1, y1, rx, ry):
The sweep-flag determines if the arc should begin moving
at positive angles or negative angles.
# orientation depends on x-axis (no rotation)
sweep = 0
if y1 - y0:
sweep = 1
# the svg path string expanded
        path = "M {x0} {y0} A {rx} {ry}, 0, 0, {sweep}, {x1} {y1}"
        path = path.format(x0=x0, y0=y0, rx=rx, ry=ry, sweep=sweep, x1=x1, y1=y1)
return path
Explanation: Create an ArcEdges class similar to CurvedEdges
It takes an origin argument from which the radius can always be calculated, and it will also determine the sweep-flag automatically.
End of explanation
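As a companion to the TODO above, a small illustrative helper (mine, not toyplot's) showing how the radius and sweep-flag could be derived from the origin alone:
def arc_params(origin, source, target):
    # for a circular arc centered on `origin`, rx == ry == distance from origin to the vertices
    origin = numpy.asarray(origin, dtype=float)
    v1 = numpy.asarray(source, dtype=float) - origin
    v2 = numpy.asarray(target, dtype=float) - origin
    radius = numpy.linalg.norm(v1)
    # the sign of the cross product decides the sweep direction
    sweep = 1 if (v1[0] * v2[1] - v1[1] * v2[0]) < 0 else 0
    return radius, radius, sweep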
toyplot.svg.render(c, "test.svg")
svg = toyplot.svg.render(c)
html = toyplot.html.render(c, "test.html")
import xml.etree.ElementTree as ET
svg
svg.tag, svg.attrib
for child in svg:
print(child.tag, child.attrib)
for item in svg.iter('g'):
print(item.attrib)
for country in svg.findall('g'):
print(country)
Explanation: Parse the rendered SVG with xml.etree.ElementTree (lxml would work the same way)
End of explanation |
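If the bare findall('g') above comes back empty, it is usually because ElementTree stores SVG tags with their namespace; a quick sketch of namespace-aware searching (an aside, not in the original cells):
SVG_NS = {'svg': 'http://www.w3.org/2000/svg'}
for path in svg.iter('{http://www.w3.org/2000/svg}path'):
    print(path.get('d'))
print(len(svg.findall('.//svg:g', SVG_NS)))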
2,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Neural Network for Image Classification
Step1: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform a better!
Problem Statement
Step2: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
Step3: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width
Step5: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models
Step6: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
Step7: Expected Output
Step8: Expected Output
Step10: Expected Output
Step11: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
Step12: Expected Output
Step13: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
Step14: Expected Output
Step15: A few type of images the model tends to do poorly on include | Python Code:
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
You will use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
After this assignment you will be able to:
- Build and apply a deep neural network to supervised learning.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- PIL and scipy are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
End of explanation
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
Explanation: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform better!
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat and non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
End of explanation
# Example of a picture
index = 1
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
Explanation: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
End of explanation
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
Explanation: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width:450px;height:300px;">
<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
End of explanation
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(X, W1, b1, activation='relu')
A2, cache2 = linear_activation_forward(A1, W2, b2, activation='sigmoid')
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(A2,Y)
### END CODE HERE ###
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation='sigmoid')
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation='relu')
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Print the cost every 100 training example
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network
You will then compare the performance of these models, and also try out different values for $L$.
Let's look at the two architectures.
3.1 - 2-layer neural network
<img src="images/2layerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption>
<u>Detailed Architecture of figure 2</u>:
- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
3.2 - L-layer deep neural network
It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
<img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption>
<u>Detailed Architecture of figure 3</u>:
- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.
3.3 - General methodology
As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
a. Forward propagation
b. Compute cost function
c. Backward propagation
d. Update parameters (using parameters, and grads from backprop)
4. Use trained parameters to predict labels
Let's now implement those two models!
4 - Two-layer neural network
Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
Explanation: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
predictions_train = predict(train_x, train_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.6930497356599888 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.6464320953428849 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.048554785628770206 </td>
</tr>
</table>
Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
End of explanation
predictions_test = predict(test_x, test_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 1.0 </td>
</tr>
</table>
End of explanation
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 5-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization.
### START CODE HERE ###
parameters = initialize_parameters_deep(layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X, parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL, Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL, Y, caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Print the cost every 100 training example
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" % (i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 0.72 </td>
</tr>
</table>
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
5 - L-layer Neural Network
Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters_deep(layer_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
Explanation: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
pred_train = predict(train_x, train_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.771749 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.672053 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.092878 </td>
</tr>
</table>
End of explanation
pred_test = predict(test_x, test_y, parameters)
Explanation: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
End of explanation
print_mislabeled_images(classes, test_x, test_y, pred_test)
Explanation: Expected Output:
<table>
<tr>
<td> **Test Accuracy**</td>
<td> 0.8 </td>
</tr>
</table>
Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is good performance for this task. Nice job!
Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course).
6) Results Analysis
First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
End of explanation
## START CODE HERE ##
my_image = "h1.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: A few types of images the model tends to do poorly on include:
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)
7) Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation |
2,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Activation functions
Step function
$y=\begin{cases}1 & ( x \gt 0 ) \\ 0 & ( x \leqq 0 )\end{cases}$
Step1: Sigmoid function
$y = \frac{1}{1 + exp(-x)}$
Step2: ReLU function
$y=\begin{cases}x & ( x \gt 0 ) \\ 0 & ( x \leqq 0 ) \end{cases}$
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-5.0, 5.0, 0.1)
y = np.array(x > 0, dtype=int)
plt.plot(x, y)
plt.show()
Explanation: Activation functions
Step function
$y=\begin{cases}1 & ( x \gt 0 ) \\ 0 & ( x \leqq 0 )\end{cases}$
End of explanation
x = np.arange(-5.0, 5.0, 0.1)
y = 1 / (1 + np.exp(-x))
plt.plot(x, y)
plt.show()
Explanation: Sigmoid function
$y = \frac{1}{1 + exp(-x)}$
End of explanation
x = np.arange(-5.0, 5.0, 0.1)
y = np.maximum(0, x)
plt.plot(x, y)
plt.show()
Explanation: ReLU function
$y=\begin{cases}x & ( x \gt 0 ) \\ 0 & ( x \leqq 0 ) \end{cases}$
End of explanation |
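As a small extra cell (not in the original notebook), the three activations side by side on one plot:
x = np.arange(-5.0, 5.0, 0.1)
plt.plot(x, np.array(x > 0, dtype=int), label='step')
plt.plot(x, 1 / (1 + np.exp(-x)), label='sigmoid')
plt.plot(x, np.maximum(0, x), label='ReLU')
plt.legend()
plt.show()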
2,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous Target Decoding with SPoC
Source Power Comodulation (SPoC)
Step1: Plot the contributions to the detected components (i.e., the forward model) | Python Code:
# Author: Alexandre Barachant <[email protected]>
# Jean-Remi King <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import Epochs
from mne.decoding import SPoC
from mne.datasets.fieldtrip_cmc import data_path
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
# Define parameters
fname = data_path() + '/SubjectCMC.ds'
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 250.) # crop for memory purposes
# Filter muscular activity to only keep high frequencies
emg = raw.copy().pick_channels(['EMGlft']).load_data()
emg.filter(20., None, fir_design='firwin')
# Filter MEG data to focus on beta band
raw.pick_types(meg=True, ref_meg=True, eeg=False, eog=False).load_data()
raw.filter(15., 30., fir_design='firwin')
# Build epochs as sliding windows over the continuous raw file
events = mne.make_fixed_length_events(raw, id=1, duration=.250)
# Epoch length is 1.5 second
meg_epochs = Epochs(raw, events, tmin=0., tmax=1.500, baseline=None,
detrend=1, decim=8)
emg_epochs = Epochs(emg, events, tmin=0., tmax=1.500, baseline=None)
# Prepare classification
X = meg_epochs.get_data()
y = emg_epochs.get_data().var(axis=2)[:, 0] # target is EMG power
# Classification pipeline with SPoC spatial filtering and Ridge Regression
spoc = SPoC(n_components=2, log=True, reg='oas', rank='full')
clf = make_pipeline(spoc, Ridge())
# Define a two fold cross-validation
cv = KFold(n_splits=2, shuffle=False)
# Run cross-validation
y_preds = cross_val_predict(clf, X, y, cv=cv)
# Plot the True EMG power and the EMG power predicted from MEG data
fig, ax = plt.subplots(1, 1, figsize=[10, 4])
times = raw.times[meg_epochs.events[:, 0] - raw.first_samp]
ax.plot(times, y_preds, color='b', label='Predicted EMG')
ax.plot(times, y, color='r', label='True EMG')
ax.set_xlabel('Time (s)')
ax.set_ylabel('EMG Power')
ax.set_title('SPoC MEG Predictions')
plt.legend()
mne.viz.tight_layout()
plt.show()
Explanation: Continuous Target Decoding with SPoC
Source Power Comodulation (SPoC) :footcite:DahneEtAl2014 identifies the composition of orthogonal spatial filters that maximally correlate with a continuous target.
SPoC can be seen as an extension of the CSP for continuous variables.
Here, SPoC is applied to decode the (continuous) fluctuation of an
electromyogram from MEG beta activity using data from
Cortico-Muscular Coherence example of FieldTrip
<http://www.fieldtriptoolbox.org/tutorial/coherence>_
End of explanation
spoc.fit(X, y)
spoc.plot_patterns(meg_epochs.info)
Explanation: Plot the contributions to the detected components (i.e., the forward model)
End of explanation |
2,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: After confirming the java environment, install tabula-py by using pip.
Step2: Before trying tabula-py, check your environment via tabula-py environment_info() function, which shows Python version, Java version, and your OS environment.
Step3: Read a PDF with read_pdf() function
Let's read a PDF from GitHub. tabula-py can load a PDF or file like object on both local or internet by using read_pdf() function.
Step4: Options for read_pdf()
Note that read_pdf() function reads only page 1 by default. For more details, use ?read_pdf and ?tabula.wrapper.build_options.
Step5: Let's set pages option. Here is the extraction result of page 3
Step6: You can set pages="all" for extration all pages. If you hit OOM error with Java, you should set appropriate -Xmx option for java_options.
Step7: Read partial area of PDF
If you want to set a certain part of page, you can use area option.
Note that as of tabula-py 2.0.0, multiple_tables option became True so if you want to use multiple area options like [[0, 0, 100, 50], [0, 50, 100, 100]], you need to set multiple_tables=False.
Step8: Read giving column information
Step9: Extract to JSON, TSV, or CSV
tabula-py has capability to convert not only DataFrame but also JSON, TSV, or CSV. You can set output format with output_format option.
Step10: Convert PDF tables into CSV, TSV, or JSON files
You can convert files directly rather creating Python objects with convert_into() function.
Step11: Use lattice mode for more accurate extraction for spreadsheet style tables
If your tables have lines separating cells, you can use lattice option. By default, tabula-py sets guess=True, which is the same behavior for default of tabula app. If your tables don't have separation lines, you can try stream option.
As it mentioned, try tabula app before struglling with tabula-py option. Or, PDFplumber can be an alternative since it has different extraction strategy.
Step12: Use tabula app template
tabula-py can handle tabula app template, which has area options set by GUI app to reuse. | Python Code:
!java -version
Explanation: <a href="https://colab.research.google.com/github/chezou/tabula-py/blob/master/examples/tabula_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
tabula-py example notebook
tabula-py is a tool for converting PDF tables into pandas DataFrames. tabula-py is a wrapper of tabula-java, which requires Java on your machine. tabula-py also enables you to convert tables in a PDF into CSV/TSV files.
tabula-py's PDF extraction accuracy is the same as tabula-java or the tabula app (tabula's GUI tool), so if you want to know how well tabula-py will perform, I highly recommend trying the tabula app first.
tabula-py is good for:
- automation with Python script
- advanced analytics after converting pandas DataFrame
- casual analytics with Jupyter notebook or Google Colaboratory
Check Java environment and install tabula-py
tabula-py requires a java environment, so let's check the java environment on your machine.
End of explanation
# To be more precise, it's better to use `{sys.executable} -m pip install tabula-py`
!pip install -q tabula-py
Explanation: After confirming the java environment, install tabula-py by using pip.
End of explanation
import tabula
tabula.environment_info()
Explanation: Before trying tabula-py, check your environment via tabula-py environment_info() function, which shows Python version, Java version, and your OS environment.
End of explanation
import tabula
pdf_path = "https://github.com/chezou/tabula-py/raw/master/tests/resources/data.pdf"
dfs = tabula.read_pdf(pdf_path, stream=True)
# read_pdf returns list of DataFrames
print(len(dfs))
dfs[0]
Explanation: Read a PDF with read_pdf() function
Let's read a PDF from GitHub. tabula-py can load a PDF from a local path, a URL, or a file-like object using the read_pdf() function.
End of explanation
help(tabula.read_pdf)
help(tabula.io.build_options)
Explanation: Options for read_pdf()
Note that the read_pdf() function reads only page 1 by default. For more details, use ?read_pdf and ?tabula.io.build_options.
End of explanation
# set pages option
dfs = tabula.read_pdf(pdf_path, pages=3, stream=True)
dfs[0]
# pass pages as string
tabula.read_pdf(pdf_path, pages="1-2,3", stream=True)
Explanation: Let's set pages option. Here is the extraction result of page 3:
End of explanation
# extract all pages
tabula.read_pdf(pdf_path, pages="all", stream=True)
Explanation: You can set pages="all" to extract all pages. If you hit an OOM error with Java, you should set an appropriate -Xmx option via java_options.
End of explanation
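One way to pass the -Xmx option mentioned above (a sketch; 4g is an arbitrary heap size):
tabula.read_pdf(pdf_path, pages="all", stream=True, java_options=["-Xmx4g"])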
# set area option
dfs = tabula.read_pdf(pdf_path, area=[126,149,212,462], pages=2)
dfs[0]
Explanation: Read partial area of PDF
If you want to extract only a certain part of a page, you can use the area option.
Note that as of tabula-py 2.0.0 the multiple_tables option defaults to True, so if you want to pass multiple area options like [[0, 0, 100, 50], [0, 50, 100, 100]], you need to set multiple_tables=False.
End of explanation
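A quick sketch of the multi-area case described above, using the two example boxes from the note (percent coordinates, hence relative_area=True):
dfs = tabula.read_pdf(
    pdf_path,
    pages=1,
    area=[[0, 0, 100, 50], [0, 50, 100, 100]],
    relative_area=True,
    multiple_tables=False,
)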
pdf_path2 = "https://github.com/chezou/tabula-py/raw/master/tests/resources/campaign_donors.pdf"
dfs = tabula.read_pdf(pdf_path2, columns=[47, 147, 256, 310, 375, 431, 504], guess=False, pages=1)
df = dfs[0].drop(["Unnamed: 0"], axis=1)
df
Explanation: Read giving column information
End of explanation
# read pdf as JSON
tabula.read_pdf(pdf_path, output_format="json")
Explanation: Extract to JSON, TSV, or CSV
tabula-py can output not only DataFrames but also JSON, TSV, or CSV. You can set the output format with the output_format option.
End of explanation
# You can convert from pdf into JSON, CSV, TSV
tabula.convert_into(pdf_path, "test.json", output_format="json")
!cat test.json
tabula.convert_into(pdf_path, "test.tsv", output_format="tsv")
!cat test.tsv
tabula.convert_into(pdf_path, "test.csv", output_format="csv", stream=True)
!cat test.csv
Explanation: Convert PDF tables into CSV, TSV, or JSON files
You can convert files directly, rather than creating Python objects, with the convert_into() function.
End of explanation
pdf_path3 = "https://github.com/tabulapdf/tabula-java/raw/master/src/test/resources/technology/tabula/spanning_cells.pdf"
dfs = tabula.read_pdf(
pdf_path3,
pages="1",
lattice=True,
pandas_options={"header": [0, 1]},
area=[0, 0, 50, 100],
relative_area=True,
multiple_tables=False,
)
dfs[0]
Explanation: Use lattice mode for more accurate extraction for spreadsheet style tables
If your tables have lines separating cells, you can use the lattice option. By default, tabula-py sets guess=True, which is the same default behavior as the tabula app. If your tables don't have separation lines, you can try the stream option.
As mentioned, try the tabula app before struggling with tabula-py options. Or, PDFplumber can be an alternative since it has a different extraction strategy.
End of explanation
template_path = "https://github.com/chezou/tabula-py/raw/master/tests/resources/data.tabula-template.json"
tabula.read_pdf_with_template(pdf_path, template_path)
Explanation: Use tabula app template
tabula-py can handle tabula app template, which has area options set by GUI app to reuse.
End of explanation |
2,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this example, we will create a typical CANDU bundle with rings of fuel pins. At present, OpenMC does not have a specialized lattice for this type of fuel arrangement, so we must resort to manual creation of the array of fuel pins.
Step1: Let's begin by creating the materials that will be used in our model.
Step2: With our materials created, we'll now define key dimensions in our model. These dimensions are taken from the example in section 11.1.3 of the Serpent manual.
Step3: To begin creating the bundle, we'll first create annular regions completely filled with heavy water and add in the fuel pins later. The radii that we've specified above correspond to the center of each ring. We actually need to create cylindrical surfaces at radii that are half-way between the centers.
Step4: Let's see what our geometry looks like so far. In order to plot the geometry, we create a universe that contains the annular water cells and then use the Universe.plot() method. While we're at it, we'll set some keyword arguments that can be reused for later plots.
Step5: Now we need to create a universe that contains a fuel pin. Note that we don't actually need to put water outside of the cladding in this universe because it will be truncated by a higher universe.
Step6: The code below works through each ring to create a cell containing the fuel pin universe. As each fuel pin is created, we modify the region of the water cell to include everything outside the fuel pin.
Step7: Looking pretty good! Finally, we create cells for the pressure tube and calendria and then put our bundle in the middle of the pressure tube.
Step8: Let's look at the final product. We'll export our geometry and materials and then use plot_inline() to get a nice-looking plot.
Step9: Interpreting Results
One of the difficulties of a geometry like this is identifying tally results when there was no lattice involved. To address this, we specifically gave an ID to each fuel pin of the form 100*ring + azimuthal position. Consequently, we can use a distribcell tally and then look at our DataFrame which will show these cell IDs.
Step10: The return code of 0 indicates that OpenMC ran successfully. Now let's load the statepoint into a openmc.StatePoint object and use the Tally.get_pandas_dataframe(...) method to see our results. | Python Code:
%matplotlib inline
from math import pi, sin, cos
import numpy as np
import openmc
Explanation: In this example, we will create a typical CANDU bundle with rings of fuel pins. At present, OpenMC does not have a specialized lattice for this type of fuel arrangement, so we must resort to manual creation of the array of fuel pins.
End of explanation
fuel = openmc.Material(name='fuel')
fuel.add_element('U', 1.0)
fuel.add_element('O', 2.0)
fuel.set_density('g/cm3', 10.0)
clad = openmc.Material(name='zircaloy')
clad.add_element('Zr', 1.0)
clad.set_density('g/cm3', 6.0)
heavy_water = openmc.Material(name='heavy water')
heavy_water.add_nuclide('H2', 2.0)
heavy_water.add_nuclide('O16', 1.0)
heavy_water.add_s_alpha_beta('c_D_in_D2O')
heavy_water.set_density('g/cm3', 1.1)
Explanation: Let's begin by creating the materials that will be used in our model.
End of explanation
# Outer radius of fuel and clad
r_fuel = 0.6122
r_clad = 0.6540
# Pressure tube and calendria radii
pressure_tube_ir = 5.16890
pressure_tube_or = 5.60320
calendria_ir = 6.44780
calendria_or = 6.58750
# Radius to center of each ring of fuel pins
ring_radii = np.array([0.0, 1.4885, 2.8755, 4.3305])
Explanation: With our materials created, we'll now define key dimensions in our model. These dimensions are taken from the example in section 11.1.3 of the Serpent manual.
End of explanation
# These are the surfaces that will divide each of the rings
radial_surf = [openmc.ZCylinder(r=r) for r in
(ring_radii[:-1] + ring_radii[1:])/2]
water_cells = []
for i in range(ring_radii.size):
# Create annular region
if i == 0:
water_region = -radial_surf[i]
elif i == ring_radii.size - 1:
water_region = +radial_surf[i-1]
else:
water_region = +radial_surf[i-1] & -radial_surf[i]
water_cells.append(openmc.Cell(fill=heavy_water, region=water_region))
Explanation: To begin creating the bundle, we'll first create annular regions completely filled with heavy water and add in the fuel pins later. The radii that we've specified above correspond to the center of each ring. We actually need to create cylindrical surfaces at radii that are half-way between the centers.
End of explanation
plot_args = {'width': (2*calendria_or, 2*calendria_or)}
bundle_universe = openmc.Universe(cells=water_cells)
bundle_universe.plot(**plot_args)
Explanation: Let's see what our geometry looks like so far. In order to plot the geometry, we create a universe that contains the annular water cells and then use the Universe.plot() method. While we're at it, we'll set some keyword arguments that can be reused for later plots.
End of explanation
surf_fuel = openmc.ZCylinder(r=r_fuel)
fuel_cell = openmc.Cell(fill=fuel, region=-surf_fuel)
clad_cell = openmc.Cell(fill=clad, region=+surf_fuel)
pin_universe = openmc.Universe(cells=(fuel_cell, clad_cell))
pin_universe.plot(**plot_args)
Explanation: Now we need to create a universe that contains a fuel pin. Note that we don't actually need to put water outside of the cladding in this universe because it will be truncated by a higher universe.
End of explanation
num_pins = [1, 6, 12, 18]
angles = [0, 0, 15, 0]
for i, (r, n, a) in enumerate(zip(ring_radii, num_pins, angles)):
for j in range(n):
# Determine location of center of pin
theta = (a + j/n*360.) * pi/180.
x = r*cos(theta)
y = r*sin(theta)
pin_boundary = openmc.ZCylinder(x0=x, y0=y, r=r_clad)
water_cells[i].region &= +pin_boundary
# Create each fuel pin -- note that we explicitly assign an ID so
# that we can identify the pin later when looking at tallies
pin = openmc.Cell(fill=pin_universe, region=-pin_boundary)
pin.translation = (x, y, 0)
pin.id = (i + 1)*100 + j
bundle_universe.add_cell(pin)
bundle_universe.plot(**plot_args)
Explanation: The code below works through each ring to create a cell containing the fuel pin universe. As each fuel pin is created, we modify the region of the water cell to include everything outside the fuel pin.
End of explanation
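As a small sanity check on the loop above (an added illustration, not part of the original notebook), we can list the IDs that the 100*ring + position convention should have produced:
# IDs we expect the loop to have assigned, ring by ring.
expected_ids = [(i + 1)*100 + j for i, n in enumerate(num_pins) for j in range(n)]
print(len(expected_ids))   # 37 pins in total
print(expected_ids[:8])    # [100, 200, 201, 202, 203, 204, 205, 300]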
pt_inner = openmc.ZCylinder(r=pressure_tube_ir)
pt_outer = openmc.ZCylinder(r=pressure_tube_or)
calendria_inner = openmc.ZCylinder(r=calendria_ir)
calendria_outer = openmc.ZCylinder(r=calendria_or, boundary_type='vacuum')
bundle = openmc.Cell(fill=bundle_universe, region=-pt_inner)
pressure_tube = openmc.Cell(fill=clad, region=+pt_inner & -pt_outer)
v1 = openmc.Cell(region=+pt_outer & -calendria_inner)
calendria = openmc.Cell(fill=clad, region=+calendria_inner & -calendria_outer)
root_universe = openmc.Universe(cells=[bundle, pressure_tube, v1, calendria])
Explanation: Looking pretty good! Finally, we create cells for the pressure tube and calendria and then put our bundle in the middle of the pressure tube.
End of explanation
geom = openmc.Geometry(root_universe)
geom.export_to_xml()
mats = openmc.Materials(geom.get_all_materials().values())
mats.export_to_xml()
p = openmc.Plot.from_geometry(geom)
p.color_by = 'material'
p.colors = {
fuel: 'black',
clad: 'silver',
heavy_water: 'blue'
}
p.to_ipython_image()
Explanation: Let's look at the final product. We'll export our geometry and materials and then render a plot colored by material (via to_ipython_image()) to get a nice-looking view of the bundle.
End of explanation
settings = openmc.Settings()
settings.particles = 1000
settings.batches = 20
settings.inactive = 10
settings.source = openmc.Source(space=openmc.stats.Point())
settings.export_to_xml()
fuel_tally = openmc.Tally()
fuel_tally.filters = [openmc.DistribcellFilter(fuel_cell)]
fuel_tally.scores = ['flux']
tallies = openmc.Tallies([fuel_tally])
tallies.export_to_xml()
openmc.run(output=False)
Explanation: Interpreting Results
One of the difficulties of a geometry like this is identifying tally results when there was no lattice involved. To address this, we specifically gave an ID to each fuel pin of the form 100*ring + azimuthal position. Consequently, we can use a distribcell tally and then look at our DataFrame which will show these cell IDs.
End of explanation
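Since each pin ID encodes its ring and azimuthal position, a small helper can decode them (an illustrative sketch added here; the IDs are assumed to be read off the tally DataFrame produced in the next cell):
def decode_pin_id(pin_id):
    # Inverse of the 100*ring + azimuthal position convention used above.
    return pin_id // 100, pin_id % 100

print(decode_pin_id(312))  # -> (3, 12): ring 3, azimuthal position 12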
sp = openmc.StatePoint('statepoint.{}.h5'.format(settings.batches))
t = sp.get_tally()
t.get_pandas_dataframe()
Explanation: The return code of 0 indicates that OpenMC ran successfully. Now let's load the statepoint into an openmc.StatePoint object and use the Tally.get_pandas_dataframe(...) method to see our results.
End of explanation |
2,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Triplet Loss on Totally Looks Like dataset
This notebook is inspired from this Keras tutorial by Hazem Essam and Santiago L. Valdarrama.
The goal is to showcase the use of siamese networks and triplet loss to do representation learning using a CNN. It will also showcase data generators and data augmentation techniques.
Dataset
The dataset considered is the Totally Looks Like dataset, consisting of pairs of web curated similar looking images
Step3: We will use mostly TensorFlow functions to open and process images
Step4: To generate the list of negative images, let's randomize the list of available images (anchors and positives) and concatenate them together.
Step5: We can visualize a triplet and display its shape
Step6: Exercise
Build the embedding network, starting from a resnet and adding a few layers. The output should have a dimension $d= 128$ or $d=256$. Edit the following code, and you may use the next cell to test your code.
Bonus
Step8: The following can be run to get the same architecture as ours
Step9: Exercise
Our goal is now to build the positive and negative distances from 3 inputs images
Step10: Solution
Step12: The final triplet model
Once we are able to produce the distances, we may wrap it into a new Keras Model which includes the computation of the loss. The following implementation uses a subclassing of the Model class, redefining a few functions used internally during model.fit
Step13: Find most similar images in test dataset
The negative_images list was built by concatenating all possible images, both anchors and positive. We can reuse these to form a bank of possible images to query from.
We will first compute all embeddings of these images. To do so, we build a tf.Dataset and apply the few functions
Step14: We can build a most_similar function which takes an image path as input and return the topn most similar images through the embedding representation. It would be possible to use another metric, such as the cosine similarity here. | Python Code:
import os
import os.path as op
from urllib.request import urlretrieve
from pathlib import Path
URL = "https://github.com/m2dsupsdlclass/lectures-labs/releases/download/totallylookslike/dataset_totally.zip"
FILENAME = "dataset_totally.zip"
if not op.exists(FILENAME):
print('Downloading %s to %s...' % (URL, FILENAME))
urlretrieve(URL, FILENAME)
import zipfile
if not op.exists("anchors"):
print('Extracting image files...')
with zipfile.ZipFile(FILENAME, 'r') as zip_ref:
zip_ref.extractall('.')
home_dir = Path(Path.home())
anchor_images_path = Path("./anchors")
positive_images_path = Path("./positives")
Explanation: Triplet Loss on Totally Looks Like dataset
This notebook is inspired from this Keras tutorial by Hazem Essam and Santiago L. Valdarrama.
The goal is to showcase the use of siamese networks and triplet loss to do representation learning using a CNN. It will also showcase data generators and data augmentation techniques.
Dataset
The dataset considered is the Totally Looks Like dataset, consisting of pairs of web curated similar looking images:
(Two example image pairs, labeled "Image pair 1" and "Image pair 2", are displayed side by side in the original notebook; the images are not reproduced here.)
The goal is to extract a generic human perceptual representation through a CNN. The next cell downloads the dataset and unzips it (run it as soon as possible, as it will download a few hundred megabytes).
End of explanation
def open_image(filename, target_shape = (256, 256)):
"""Load the specified file as a JPEG image, preprocess it and
resize it to the target shape."""
image_string = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image_string, channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, target_shape)
return image
import tensorflow as tf
# Careful to sort images folders so that the anchor and positive images correspond.
anchor_images = sorted([str(anchor_images_path / f) for f in os.listdir(anchor_images_path)])
positive_images = sorted([str(positive_images_path / f) for f in os.listdir(positive_images_path)])
anchor_count = len(anchor_images)
positive_count = len(positive_images)
print(f"number of anchors: {anchor_count}, positive: {positive_count}")
anchor_dataset_files = tf.data.Dataset.from_tensor_slices(anchor_images)
anchor_dataset = anchor_dataset_files.map(open_image)
positive_dataset_files = tf.data.Dataset.from_tensor_slices(positive_images)
positive_dataset = positive_dataset_files.map(open_image)
import matplotlib.pyplot as plt
def visualize(img_list):
"""Visualize a list of images."""
def show(ax, image):
ax.imshow(image)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig = plt.figure(figsize=(6, 18))
num_imgs = len(img_list)
axs = fig.subplots(1, num_imgs)
for i in range(num_imgs):
show(axs[i], img_list[i])
# display the first element of our dataset
anc = next(iter(anchor_dataset))
pos = next(iter(positive_dataset))
visualize([anc, pos])
from tensorflow.keras import layers
# data augmentations
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal"),
# layers.RandomRotation(0.15), # you may add random rotations
layers.RandomCrop(224, 224)
])
Explanation: We will use mostly TensorFlow functions to open and process images:
End of explanation
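As a quick shape check (an added sketch, not part of the original notebook), the augmentation pipeline maps a 256x256 decoded image to a 224x224 crop:
sample = open_image(anchor_images[0])                 # (256, 256, 3)
augmented = data_augmentation(tf.expand_dims(sample, 0))
print(sample.shape, augmented.shape)                  # (256, 256, 3) -> (1, 224, 224, 3)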
import numpy as np
rng = np.random.RandomState(seed=42)
rng.shuffle(anchor_images)
rng.shuffle(positive_images)
negative_images = anchor_images + positive_images
np.random.RandomState(seed=32).shuffle(negative_images)
negative_dataset_files = tf.data.Dataset.from_tensor_slices(negative_images)
negative_dataset_files = negative_dataset_files.shuffle(buffer_size=4096)
# Build final triplet dataset
dataset = tf.data.Dataset.zip((anchor_dataset_files, positive_dataset_files, negative_dataset_files))
dataset = dataset.shuffle(buffer_size=1024)
# preprocess function
def preprocess_triplets(anchor, positive, negative):
return (
data_augmentation(open_image(anchor)),
data_augmentation(open_image(positive)),
data_augmentation(open_image(negative)),
)
# The map function is lazy, it is not evaluated on the spot,
# but each time a batch is sampled.
dataset = dataset.map(preprocess_triplets)
# Let's now split our dataset in train and validation.
train_dataset = dataset.take(round(anchor_count * 0.8))
val_dataset = dataset.skip(round(anchor_count * 0.8))
# define the batch size
train_dataset = train_dataset.batch(32, drop_remainder=False)
train_dataset = train_dataset.prefetch(8)
val_dataset = val_dataset.batch(32, drop_remainder=False)
val_dataset = val_dataset.prefetch(8)
Explanation: To generate the list of negative images, let's randomize the list of available images (anchors and positives) and concatenate them together.
End of explanation
anc_batch, pos_batch, neg_batch = next(train_dataset.take(1).as_numpy_iterator())
print(anc_batch.shape, pos_batch.shape, neg_batch.shape)
idx = np.random.randint(0, 32)
visualize([anc_batch[idx], pos_batch[idx], neg_batch[idx]])
Explanation: We can visualize a triplet and display its shape:
End of explanation
from tensorflow.keras import Model, layers
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
output = input_img # change that line and edit this code!
embedding = Model(input_img, output, name="Embedding")
output = embedding(np.random.randn(1,224,224,3))
output.shape
Explanation: Exercise
Build the embedding network, starting from a resnet and adding a few layers. The output should have a dimension $d= 128$ or $d=256$. Edit the following code, and you may use the next cell to test your code.
Bonus: Try to freeze the weights of the ResNet.
End of explanation
from tensorflow.keras import Model, layers
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
base_cnn = resnet.ResNet50(weights="imagenet", input_shape=(224,224,3), include_top=False)
resnet_output = base_cnn(input_img)
flatten = layers.Flatten()(resnet_output)
dense1 = layers.Dense(512, activation="relu")(flatten)
# The batch normalization layer enables to normalize the activations
# over the batch
dense1 = layers.BatchNormalization()(dense1)
dense2 = layers.Dense(256, activation="relu")(dense1)
dense2 = layers.BatchNormalization()(dense2)
output = layers.Dense(256)(dense2)
embedding = Model(input_img, output, name="Embedding")
trainable = False
for layer in base_cnn.layers:
if layer.name == "conv5_block1_out":
trainable = True
layer.trainable = trainable
def preprocess(x):
"""We'll need to preprocess the input before passing it
to the resnet for better results. This is the same preprocessing
that was used during the training of ResNet on ImageNet."""
return resnet.preprocess_input(x * 255.)
Explanation: The following cell can be run to get the same architecture as ours:
End of explanation
anchor_input = layers.Input(name="anchor", shape=(224, 224, 3))
positive_input = layers.Input(name="positive", shape=(224, 224, 3))
negative_input = layers.Input(name="negative", shape=(224, 224, 3))
distances = [anchor_input, positive_input] # TODO: Change this code to actually compute the distances
siamese_network = Model(
inputs=[anchor_input, positive_input, negative_input], outputs=distances
)
Explanation: Exercise
Our goal is now to build the positive and negative distances from the 3 input images: the anchor, the positive, and the negative one, i.e. $‖f(A) - f(P)‖²$ and $‖f(A) - f(N)‖²$. You may define a specific Layer using the Keras subclassing API, or any other method.
You will need to run the Embedding model previously defined, don't forget to apply the preprocessing function defined above!
End of explanation
class DistanceLayer(layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def call(self, anchor, positive, negative):
ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1)
an_distance = tf.reduce_sum(tf.square(anchor - negative), -1)
return (ap_distance, an_distance)
anchor_input = layers.Input(name="anchor", shape=(224, 224, 3))
positive_input = layers.Input(name="positive", shape=(224, 224, 3))
negative_input = layers.Input(name="negative", shape=(224, 224, 3))
distances = DistanceLayer()(
embedding(preprocess(anchor_input)),
embedding(preprocess(positive_input)),
embedding(preprocess(negative_input)),
)
siamese_network = Model(
inputs=[anchor_input, positive_input, negative_input], outputs=distances
)
Explanation: Solution: run the following cell to get the exact same method as we have.
End of explanation
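A tiny numeric check of the layer (an added sketch): on random embeddings it returns one positive and one negative squared distance per batch row.
emb = tf.random.normal((4, 256))
ap, an = DistanceLayer()(emb, emb + 0.1, emb + 1.0)
print(ap.shape, an.shape)  # (4,) and (4,); here ap is much smaller than an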
class TripletModel(Model):
"""The final Keras Model with custom training and testing loops.
Computes the triplet loss using the three embeddings produced by the
Siamese Network.
The triplet loss is defined as:
L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0)
"""
def __init__(self, siamese_network, margin=0.5):
super(TripletModel, self).__init__()
self.siamese_network = siamese_network
self.margin = margin
self.loss_tracker = metrics.Mean(name="loss")
def call(self, inputs):
return self.siamese_network(inputs)
def train_step(self, data):
# GradientTape is a context manager that records every operation that
# you do inside. We are using it here to compute the loss so we can get
# the gradients and apply them using the optimizer specified in
# `compile()`.
with tf.GradientTape() as tape:
loss = self._compute_loss(data)
# Storing the gradients of the loss function with respect to the
# weights/parameters.
gradients = tape.gradient(loss, self.siamese_network.trainable_weights)
# Applying the gradients on the model using the specified optimizer
self.optimizer.apply_gradients(
zip(gradients, self.siamese_network.trainable_weights)
)
# Let's update and return the training loss metric.
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, data):
loss = self._compute_loss(data)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def _compute_loss(self, data):
# The output of the network is a tuple containing the distances
# between the anchor and the positive example, and the anchor and
# the negative example.
ap_distance, an_distance = self.siamese_network(data)
loss = ap_distance - an_distance
loss = tf.maximum(loss + self.margin, 0.0)
return loss
@property
def metrics(self):
# We need to list our metrics here so the `reset_states()` can be
# called automatically.
return [self.loss_tracker]
siamese_model = TripletModel(siamese_network)
siamese_model.compile(optimizer=optimizers.Adam(0.0001))
siamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset)
embedding.save('best_model.h5')
# uncomment to get a pretrained model
url_pretrained = "https://github.com/m2dsupsdlclass/lectures-labs/releases/download/totallylookslike/best_model.h5"
urlretrieve(url_pretrained, "best_model.h5")
loaded_model = tf.keras.models.load_model('best_model.h5')
Explanation: The final triplet model
Once we are able to produce the distances, we may wrap it into a new Keras Model which includes the computation of the loss. The following implementation uses a subclassing of the Model class, redefining a few functions used internally during model.fit: call, train_step, test_step
End of explanation
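For reference, the quantity minimized by _compute_loss above can be written as a standalone function (an added sketch):
def triplet_loss(ap_distance, an_distance, margin=0.5):
    # Hinge on the difference of squared distances, as in TripletModel.
    return tf.maximum(ap_distance - an_distance + margin, 0.0)

print(triplet_loss(tf.constant(0.2), tf.constant(1.0)))  # tf.Tensor(0.0, ...)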
from functools import partial
open_img = partial(open_image, target_shape=(224,224))
all_img_files = tf.data.Dataset.from_tensor_slices(negative_images)
dataset = all_img_files.map(open_img).map(preprocess).take(1024).batch(32, drop_remainder=False).prefetch(8)
all_embeddings = loaded_model.predict(dataset)
all_embeddings.shape
Explanation: Find most similar images in test dataset
The negative_images list was built by concatenating all possible images, both anchors and positive. We can reuse these to form a bank of possible images to query from.
We will first compute all embeddings of these images. To do so, we build a tf.Dataset and apply the few functions: open_img and preprocess.
End of explanation
random_img = np.random.choice(negative_images)
def most_similar(img, topn=5):
img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
new_emb = loaded_model.predict(preprocess(img_batch))
dists = tf.sqrt(tf.reduce_sum((all_embeddings - new_emb)**2, -1)).numpy()
idxs = np.argsort(dists)[:topn]
return [(negative_images[idx], dists[idx]) for idx in idxs]
print(random_img)
most_similar(random_img)
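The explanation below notes that another metric could be used; here is a hypothetical cosine-similarity variant (the name most_similar_cosine and its internals are illustrative additions, not from the original notebook):
def most_similar_cosine(img, topn=5):
    # Same query pipeline as most_similar, but ranked by cosine similarity.
    img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
    new_emb = loaded_model.predict(preprocess(img_batch))
    bank = tf.math.l2_normalize(all_embeddings, axis=-1)
    query = tf.math.l2_normalize(new_emb, axis=-1)
    sims = tf.reduce_sum(bank * query, axis=-1).numpy()
    idxs = np.argsort(-sims)[:topn]
    return [(negative_images[idx], sims[idx]) for idx in idxs]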
random_img = np.random.choice(negative_images)
visualize([open_image(im) for im, _ in most_similar(random_img)])
Explanation: We can build a most_similar function which takes an image path as input and returns the topn most similar images through the embedding representation. It would be possible to use another metric, such as the cosine similarity, here.
End of explanation |
2,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Binding Model
The binding model may be represented by the following graph
$$P+L \; \underset{K_{D}^{'}}{\rightleftharpoons} \; P \bullet L \; \underset{K_{D}^{''}}{\rightleftharpoons} \; P \bullet L \bullet P$$
Equilibrium Dissociation Constants
Considering $K_{D}$ as a general measure of affinity between $P$ and the dually-functional $L$, the equilibrium dissociation constants for each of the steps in the model, $K_{D}^{'}$ and $K_{D}^{''}$, can be considered to be a scalar function of $K_{D}$. We'll represent this in the following expressions.
$$K_{D}^{'} = \frac{1}{2}K_{D}$$
$$K_{D}^{''} = \frac{2}{\alpha}K_{D}$$
In the above expressions you should note the accounting for statistical effects given the multiplicity of binding sites on the ligand; there are two ways for $P$ to join to $L$ in the first step and only one way for $P \bullet L$ to disassociate, while there is only one way for $P$ to join to $P \bullet L$ and two ways for $P \bullet L \bullet P$ to disassociate. Additionally, $\alpha$ is an expression of the degree of cooperativity in binding which may represent negative cooperativity (0, 1), non-cooperativity [1], and positive cooperativity (1, $\infty$).
Since these are equilibrium dissociation constants they may also be expressed as the ratios of reactants over products
Step1: Mass Balance
The following equations account for the distribution of the total ligand and total protein between the different species
Step2: The mass balance equations may be combined with the equilibrium dissociation equations into a cubic form. Isolating $[P \bullet L]$ and $[P \bullet L \bullet P]$ from their equations yields
Step3: In this fashion we can derive an expression for the total protein concentration, $[P]{T}$, in terms of $[P]$, $K{D}$, $\alpha$, and $[L]$. Which is as follows
Step4: By substituting the expression for free ligand concentration into our formula for total protein concentration, we can derive an expression for $[P]{T}$ that depends instead on $[P]$, $K{D}$, $\alpha$, and $[L]_{T}$ which is
Step5: Rearrange P_Total to cubic form
This equation for $[P]_{T}$ may be rearranged into a cubic function for $[P]$. The general form of this cubic function is
Step6: There exists only one real solution to a cubic function, whose value would represent the appropiate value of $[P]$ given the equilibrium system's specific traits of $K_{D}$, $\alpha$, $[L]{T}$, and $[P]{T}$. The solution takes the following form
Step7: Putting the model to work
Now we'll have a demonstration putting this work together into a predictive model. In an experimental setting, the researcher will not know the values of $K_{D}$ and $\alpha$ from the beginning but must instead fit their data according to the model and find the values that best explain their observations. | Python Code:
#Kd-prime and Kd-doubleprime as expressions of Kd and alpha (cooperativity)
#as well as their concentration ratios
from sympy import symbols, Eq, solvers, cancel, collect  # import added; the original notebook presumably ran this in an earlier cell
kd, alpha, p, l, pl, plp = symbols('K_{D} alpha [P] [L] [PL] [PLP]')
kd_p = Eq(kd / 2, p * l / pl)
kd_p
kd_pp = Eq(2 * kd / alpha, p * pl / plp)
kd_pp
Explanation: The Binding Model
The binding model may be represented by the following graph
$$P+L \; \underset{K_{D}^{'}}{\rightleftharpoons} \; P \bullet L \; \underset{K_{D}^{''}}{\rightleftharpoons} \; P \bullet L \bullet P$$
Equilibrium Dissociation Constants
Considering $K_{D}$ as a general measure of affinity between $P$ and the dually-functional $L$, the equilibrium dissociation constants for each of the steps in the model, $K_{D}^{'}$ and $K_{D}^{''}$, can be considered to be a scalar function of $K_{D}$. We'll represent this in the following expressions.
$$K_{D}^{'} = \frac{1}{2}K_{D}$$
$$K_{D}^{''} = \frac{2}{\alpha}K_{D}$$
In the above expressions you should note the accounting for statistical effects given the multiplicity of binding sites on the ligand; there are two ways for $P$ to join to $L$ in the first step and only one way for $P \bullet L$ to disassociate, while there is only one way for $P$ to join to $P \bullet L$ and two ways for $P \bullet L \bullet P$ to disassociate. Additionally, $\alpha$ is an expression of the degree of cooperativity in binding which may represent negative cooperativity (0, 1), non-cooperativity [1], and positive cooperativity (1, $\infty$).
Since these are equilibrium dissociation constants they may also be expressed as the ratios of reactants over products:
$$K_{D}^{'} = \frac{1}{2}K_{D} = \frac{[P][L]}{[P \bullet L]}$$
$$K_{D}^{''} = \frac{2}{\alpha}K_{D} = \frac{[P][P \bullet L]}{[P \bullet L \bullet P]}$$
End of explanation
l_t, p_t = symbols('[L]_{T} [P]_{T}')
#Represent L_total
l_total = Eq(l_t, l + pl + plp)
l_total
#Represent P_total
p_total = Eq(p_t, p + pl + 2 * plp)
p_total
Explanation: Mass Balance
The following equations account for the distribution of the total ligand and total protein between the different species:
$$[L]_{T} = [L] + [P \bullet L] + [P \bullet L \bullet P]$$
$$[P]_{T} = [P] + [P \bullet L] + 2[P \bullet L \bullet P]$$
End of explanation
#Isolate PL from Kd'
isol_pl = solvers.solve(kd_p, pl)[0]
#Isolate PLP from Kd''
isol_plp = solvers.solve(kd_pp, plp)[0]
#Replace isolated PLP expression with new form using substituted isolated PL
isol_plp = isol_plp.subs(pl, isol_pl)
#Show isolated PL expresson
isol_pl
#Show isolated PLP expression
isol_plp
#Substitute these expressions into L_total
subs_l_total = l_total.subs(plp, isol_plp)
subs_l_total = subs_l_total.subs(pl, isol_pl)
#Solve for L
l_free = solvers.solve(subs_l_total, l)[0]
#Show L_free
l_free
Explanation: The mass balance equations may be combined with the equilibrium dissociation equations into a cubic form. Isolating $[P \bullet L]$ and $[P \bullet L \bullet P]$ from their equations yields:
$$[P \bullet L] = 2 \frac{[L] [P]}{K_{{D}}}$$
$$[P \bullet L \bullet P] = \frac{[P \bullet L] [P] \alpha}{2 K_{{D}}}$$
Substituting the former into the latter yields:
$$[P \bullet L \bullet P] = \frac{[L] [P]^{2} \alpha}{K_{{D}}^{2}}$$
This then allows us to express the free ligand concentration, $[L]$, in terms of $[L]_{T}$, $K_{D}$, $\alpha$, and $[P]$, as follows:
$$[L] = \frac{K_{D}^{2} [L]_{T}}{K_{D}^{2} + 2 K_{D} [P] + [P]^{2} \alpha}$$
End of explanation
#Substitute the isolated PL/PLP expressions into P_total
subs_p_total = p_total.subs(plp, isol_plp)
subs_p_total = subs_p_total.subs(pl, isol_pl)
#Show P_total
subs_p_total
Explanation: In this fashion we can derive an expression for the total protein concentration, $[P]_{T}$, in terms of $[P]$, $K_{D}$, $\alpha$, and $[L]$, which is as follows:
$$[P]_{T} = [P] + 2 \frac{[L] [P]}{K_{D}} + 2 \frac{[L] [P]^{2} \alpha}{K_{D}^{2}}$$
End of explanation
#Substitute in our l_free expression into sub_p_total to replace dependence on [L] with [L]_total
subs_p_total = subs_p_total.subs(l, l_free)
#Show subs_p_total
subs_p_total
Explanation: By substituting the expression for free ligand concentration into our formula for total protein concentration, we can derive an expression for $[P]_{T}$ that depends instead on $[P]$, $K_{D}$, $\alpha$, and $[L]_{T}$, which is:
$$[P]_{T} = 2 \frac{K_{D} [L]_{T} [P]}{K_{D}^{2} + 2 K_{D} [P] + [P]^{2} \alpha} + 2 \frac{[L]_{T} [P]^{2} \alpha}{K_{D}^{2} + 2 K_{D} [P] + [P]^{2} \alpha} + [P]$$
End of explanation
#Rearrange P_total to the other side then expand
p_expression = solvers.solve(subs_p_total, p_t)[0] - p_t
p_expression = p_expression.expand()
#Multiply the expression by the proper value to obtain [P]^(3) with coefficient of alpha
p_expression = p_expression * (kd ** 2 + 2 * kd * p + p ** 2 * alpha)
#Cancel through and show p_expression
p_expression = cancel(p_expression)
p_expression
#Collect the terms of the polynomial with the same power of [P]
power_coeffs = collect(p_expression, p, evaluate=False)
#Normalize for coefficientless [P]**3 by dividing all terms by alpha and assign/display
a, b, c = symbols('a b c')
a = power_coeffs[p**2] / alpha
b = power_coeffs[p**1] / alpha
c = power_coeffs[p**0] / alpha
a, b, c
#Using these coefficients, compose our new cubic function
cubic_p_expression = p ** 3 + a * p **2 + b * p + c
cubic_p_expression
#For the latex representation in the markdown, I simply rearranged this equation into
#power order and clarified this by placing [P]**n beside the fraction
Explanation: Rearrange P_Total to cubic form
This equation for $[P]_{T}$ may be rearranged into a cubic function for $[P]$. The general form of this cubic function is:
$$[P]^{3} + a[P]^{2} + b[P] + c = 0$$
Refer to the sympy code below for the rearrangement process... The result is:
$$[P]^{3} + \frac{\left(2 K_{D} + 2 [L]_{T} \alpha - [P]_{T} \alpha\right)}{\alpha}[P]^{2} + \frac{\left(K_{D}^{2} + 2 K_{D} [L]_{T} - 2 K_{D} [P]_{T}\right)}{\alpha}[P] - \frac{K_{D}^{2} [P]_{T}}{\alpha} = 0$$
Where our coefficients $a$, $b$, and $c$ are:
$$a = \frac{2 K_{D} + 2 [L]_{T} \alpha - [P]_{T} \alpha}{\alpha}$$
$$b = \frac{K_{D}^{2} + 2 K_{D} [L]_{T} - 2 K_{D} [P]_{T}}{\alpha}$$
$$c = -\frac{K_{D}^{2} [P]_{T}}{\alpha}$$
End of explanation
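As an optional check (an added sketch, assuming sympy's simplify is available), the collected coefficients can be compared to the closed-form expressions quoted above:
from sympy import simplify
a_closed = (2*kd + 2*l_t*alpha - p_t*alpha) / alpha
b_closed = (kd**2 + 2*kd*l_t - 2*kd*p_t) / alpha
c_closed = -kd**2 * p_t / alpha
print(simplify(a - a_closed), simplify(b - b_closed), simplify(c - c_closed))  # 0 0 0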
pl_expr = isol_pl.subs(l, l_free)
pl_expr
plp_expr = isol_plp.subs(l, l_free)
plp_expr
Explanation: There exists a single physically meaningful real solution to this cubic, whose value represents the appropriate value of $[P]$ given the equilibrium system's specific traits of $K_{D}$, $\alpha$, $[L]_{T}$, and $[P]_{T}$. The solution takes the following form:
\begin{align}
[P] &= -\frac{a}{3} + \sqrt[3]{R + \sqrt{Q^{3} + R^{2}}} + \sqrt[3]{R - \sqrt{Q^{3} + R^{2}}} \\
Q &= \frac{3b - a^{2}}{9} \\
R &= \frac{9ab - 27c - 2a^{3}}{54}
\end{align}
To obtain expressions for $[P \bullet L]$ and $[P \bullet L \bullet P]$ that are functions solely of $K_{D}$, $\alpha$, $[L]_{T}$, and $[P]$, we can begin by substituting our equation for the free ligand,
$$[L] = \frac{K_{D}^{2} [L]_{T}}{K_{D}^{2} + 2 K_{D} [P] + [P]^{2} \alpha}$$
into the equations for $[P \bullet L]$ and $[P \bullet L \bullet P]$ isolated from the equilibrium dissociation constant equations, to yield:
$$[P \bullet L] = 2 \frac{K_{D} [L]_{T} [P]}{K_{D}^{2} + 2 K_{D} [P] + [P]^{2} \alpha}$$
and
$$[P \bullet L \bullet P] = \frac{[L]_{T} [P]^{2} \alpha}{K_{D}^{2} + 2 K_{D} [P] + [P]^{2} \alpha}$$
Having already derived a means of solving for $[P]$ as a function of $K_{D}$, $\alpha$, $[L]_{T}$, and $[P]_{T}$, it is simple to solve for these species as well.
End of explanation
import numpy as np
from numpy import linspace
from matplotlib import pylab  # imports added here; the original notebook presumably relied on an earlier %pylab-style cell
#First we need to be able to calculate our cubic polynomial constants
def calc_abc(kd, alpha, p_total, l_total):
a = 2.0 * kd / alpha + 2.0 * l_total - p_total
b = (np.power(kd, 2.0) + 2.0 * kd * l_total - 2.0 * kd * p_total) / alpha
c = -1 * (np.power(kd, 2.0) * p_total) / alpha
return a, b, c
#Secondly we need to calculate the Q and R for the cubic solution
def calc_qr(a, b, c):
q = (3 * b - np.power(a, 2)) / 9
r = (9 * a * b - 27 * c - 2.0 * np.power(a, 3)) / 54
return q, r
#Thirdly we need to be able to solve the cubic formula in either cartesian or polar coords
def cartesian_cubic(a, q, r): # For use if Q^3+R^2 > 0
first = -1 * a / 3.0
second = np.power(r + np.power(np.power(q, 3.0) + np.power(r, 2.0), 0.5), 1.0 / 3.0)
third = np.power(r - np.power(np.power(q, 3.0) + np.power(r, 2.0), 0.5), 1.0 / 3.0)
return first + second + third
def polar_cubic(a, q, r): # For use if Q^3+R^2 < 0
theta = np.arccos(r / np.power(-1 * np.power(q, 3), 0.5))
return np.cos(theta / 3.0) * np.power(-1 * q, 0.5) * 2.0 - (a / 3.0)
#If we wish to plot [PL] and [PLP] as well, we need these
def get_pl(kd, alpha, l_total, p):
numerator = 2.0 * kd * l_total * p
denominator = np.power(kd, 2.0) + 2.0 * kd * p + alpha * np.power(p, 2.0)
return numerator / denominator
def get_plp(kd, alpha, l_total, p):
numerator = alpha * l_total * np.power(p, 2.0)
denominator = np.power(kd, 2.0) + 2.0 * kd * p + alpha * np.power(p, 2.0)
return numerator / denominator
def model_func(kd, alpha, p_total, l_total):
a, b, c = calc_abc(kd, alpha, p_total, l_total)
q, r = calc_qr(a, b, c)
p = []
for a_val, q_val, r_val in zip(a, q, r):
if np.power(q_val, 3) + np.power(r_val, 2) > 0:
p.append(cartesian_cubic(a_val, q_val, r_val))
else:
p.append(polar_cubic(a_val, q_val, r_val))
p = np.array(p)
pl = get_pl(kd, alpha, l_total, p)
plp = get_plp(kd, alpha, l_total, p)
return p, pl, plp
#Create the plot
plot = pylab.figure().add_subplot(111)
total_protein = 0.1
lig_range = 0.00001 * np.power(10, linspace(1, 8, 150))
p, pl, plp = model_func(0.02, 10.0, total_protein, lig_range)
plot.plot(lig_range, p / total_protein, label='[P]')
plot.plot(lig_range, pl / total_protein, label='[PL]')
plot.plot(lig_range, plp / total_protein, label='[PLP]')
plot.set_ylabel('[P]')
plot.set_xscale('log')
plot.set_xlabel(r'Ligand (uM)')
plot.legend(loc='center right')
plot.grid()
Explanation: Putting the model to work
Now we'll have a demonstration putting this work together into a predictive model. In an experimental setting, the researcher will not know the values of $K_{D}$ and $\alpha$ from the beginning but must instead fit their data according to the model and find the values that best explain their observations.
End of explanation |
2,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: A Tutorial on Multi-Armed Bandits with Per-Arm Features
Get Started
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Imports
Step3: Parameters -- Feel Free to Play Around
Step7: A Simple Per-Arm Environment
The stationary stochastic environment, explained in the other tutorial, has a per-arm counterpart.
To initialize the per-arm environment, one has to define functions that generate
* global and per-arm features
Step8: Now we are equipped to initialize our environment.
Step9: Below we can check what this environment produces.
Step10: We see that the observation spec is a dictionary with two elements
Step11: The Flow of Training Data
This section gives a sneak peek into the mechanics of how per-arm features go from the policy to training. Feel free to jump to the next section (Defining the Regret Metric) and come back here later if interested.
First, let us have a look at the data specification in the agent. The training_data_spec attribute of the agent specifies what elements and structure the training data should have.
Step12: If we have a closer look to the observation part of the spec, we see that it does not contain per-arm features!
Step13: What happened to the per-arm features? To answer this question, first we note that when the LinUCB agent trains, it does not need the per-arm features of all arms, it only needs those of the chosen arm. Hence, it makes sense to drop the tensor of shape [BATCH_SIZE, NUM_ACTIONS, PER_ARM_DIM], as it is very wasteful, especially if the number of actions is large.
But still, the per-arm features of the chosen arm must be somewhere! To this end, we make sure that the LinUCB policy stores the features of the chosen arm within the policy_info field of the training data
Step16: We see from the shape that the chosen_arm_features field has only the feature vector of one arm, and that will be the chosen arm. Note that the policy_info, and with it the chosen_arm_features, is part of the training data, as we saw from inspecting the training data spec, and thus it is available at training time.
Defining the Regret Metric
Before starting the training loop, we define some utility functions that help calculate the regret of our agent. These functions help determining the optimal expected reward given the set of actions (given by their arm features) and the linear parameter that is hidden from the agent.
Step17: Now we are all set for starting our bandit training loop. The driver below takes care of choosing actions using the policy, storing rewards of chosen actions in the replay buffer, calculating the predefined regret metric, and executing the training step of the agent.
Step18: Now let's see the result. If we did everything right, the agent is able to estimate the linear reward function well, and thus the policy can pick actions whose expected reward is close to that of the optimal. This is indicated by our above defined regret metric, which goes down and approaches zero. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!pip install tf-agents
Explanation: A Tutorial on Multi-Armed Bandits with Per-Arm Features
Get Started
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/per_arm_bandits_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/per_arm_bandits_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/per_arm_bandits_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/per_arm_bandits_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial is a step-by-step guide on how to use the TF-Agents library for contextual bandits problems where the actions (arms) have their own features, such as a list of movies represented by features (genre, year of release, ...).
Prerequisite
It is assumed that the reader is somewhat familiar with the Bandit library of TF-Agents, in particular, has worked through the tutorial for Bandits in TF-Agents before reading this tutorial.
Multi-Armed Bandits with Arm Features
In the "classic" Contextual Multi-Armed Bandits setting, an agent receives a context vector (aka observation) at every time step and has to choose from a finite set of numbered actions (arms) so as to maximize its cumulative reward.
Now consider the scenario where an agent recommends to a user the next movie to watch. Every time a decision has to be made, the agent receives as context some information about the user (watch history, genre preference, etc...), as well as the list of movies to choose from.
We could try to formulate this problem by having the user information as the context and the arms would be movie_1, movie_2, ..., movie_K, but this approach has multiple shortcomings:
The number of actions would have to be all the movies in the system and it is cumbersome to add a new movie.
The agent has to learn a model for every single movie.
Similarity between movies is not taken into account.
Instead of numbering the movies, we can do something more intuitive: we can represent movies with a set of features including genre, length, cast, rating, year, etc. The advantages of this approach are manifold:
Generalisation across movies.
The agent learns just one reward function that models reward with user and movie features.
Easy to remove from, or introduce new movies to the system.
In this new setting, the number of actions does not even have to be the same in every time step.
Per-Arm Bandits in TF-Agents
The TF-Agents Bandit suite is developed so that one can use it for the per-arm case as well. There are per-arm environments, and also most of the policies and agents can operate in per-arm mode.
Before we dive into coding an example, we need the necessary imports.
Installation
End of explanation
import functools
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.environments import stationary_stochastic_per_arm_py_environment as p_a_env
from tf_agents.bandits.metrics import tf_metrics as tf_bandit_metrics
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import tf_py_environment
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
nest = tf.nest
Explanation: Imports
End of explanation
# The dimension of the global features.
GLOBAL_DIM = 40 #@param {type:"integer"}
# The elements of the global feature will be integers in [-GLOBAL_BOUND, GLOBAL_BOUND).
GLOBAL_BOUND = 10 #@param {type:"integer"}
# The dimension of the per-arm features.
PER_ARM_DIM = 50 #@param {type:"integer"}
# The elements of the PER-ARM feature will be integers in [-PER_ARM_BOUND, PER_ARM_BOUND).
PER_ARM_BOUND = 6 #@param {type:"integer"}
# The variance of the Gaussian distribution that generates the rewards.
VARIANCE = 100.0 #@param {type: "number"}
# The elements of the linear reward parameter will be integers in [-PARAM_BOUND, PARAM_BOUND).
PARAM_BOUND = 10 #@param {type: "integer"}
NUM_ACTIONS = 70 #@param {type:"integer"}
BATCH_SIZE = 20 #@param {type:"integer"}
# Parameter for linear reward function acting on the
# concatenation of global and per-arm features.
reward_param = list(np.random.randint(
-PARAM_BOUND, PARAM_BOUND, [GLOBAL_DIM + PER_ARM_DIM]))
Explanation: Parameters -- Feel Free to Play Around
End of explanation
def global_context_sampling_fn():
"""This function generates a single global observation vector."""
return np.random.randint(
-GLOBAL_BOUND, GLOBAL_BOUND, [GLOBAL_DIM]).astype(np.float32)
def per_arm_context_sampling_fn():
"""This function generates a single per-arm observation vector."""
return np.random.randint(
-PER_ARM_BOUND, PER_ARM_BOUND, [PER_ARM_DIM]).astype(np.float32)
def linear_normal_reward_fn(x):
"""This function generates a reward from the concatenated global and per-arm observations."""
mu = np.dot(x, reward_param)
return np.random.normal(mu, VARIANCE)
Explanation: A Simple Per-Arm Environment
The stationary stochastic environment, explained in the other tutorial, has a per-arm counterpart.
To initialize the per-arm environment, one has to define functions that generate
* global and per-arm features: These functions have no input parameters and generate a single (global or per-arm) feature vector when called.
* rewards: This function takes as parameter the concatenation of a global and a per-arm feature vector, and generates a reward. Basically this is the function that the agent will have to "guess". It is worth noting here that in the per-arm case the reward function is identical for every arm. This is a fundamental difference from the classic bandit case, where the agent has to estimate reward functions for each arm independently.
End of explanation
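A quick look at what these functions produce (an added sketch):
g = global_context_sampling_fn()
arm = per_arm_context_sampling_fn()
print(g.shape, arm.shape)                                  # (GLOBAL_DIM,) and (PER_ARM_DIM,)
print(linear_normal_reward_fn(np.concatenate([g, arm])))   # one noisy reward sample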
per_arm_py_env = p_a_env.StationaryStochasticPerArmPyEnvironment(
global_context_sampling_fn,
per_arm_context_sampling_fn,
NUM_ACTIONS,
linear_normal_reward_fn,
batch_size=BATCH_SIZE
)
per_arm_tf_env = tf_py_environment.TFPyEnvironment(per_arm_py_env)
Explanation: Now we are equipped to initialize our environment.
End of explanation
print('observation spec: ', per_arm_tf_env.observation_spec())
print('\nAn observation: ', per_arm_tf_env.reset().observation)
action = tf.zeros(BATCH_SIZE, dtype=tf.int32)
time_step = per_arm_tf_env.step(action)
print('\nRewards after taking an action: ', time_step.reward)
Explanation: Below we can check what this environment produces.
End of explanation
observation_spec = per_arm_tf_env.observation_spec()
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
dtype=tf.int32, shape=(), minimum=0, maximum=NUM_ACTIONS - 1)
agent = lin_ucb_agent.LinearUCBAgent(time_step_spec=time_step_spec,
action_spec=action_spec,
accepts_per_arm_features=True)
Explanation: We see that the observation spec is a dictionary with two elements:
One with key 'global': this is the global context part, with shape matching the parameter GLOBAL_DIM.
One with key 'per_arm': this is the per-arm context, and its shape is [NUM_ACTIONS, PER_ARM_DIM]. This part is the placeholder for the arm features for every arm in a time step.
The LinUCB Agent
The LinUCB agent implements the identically named Bandit algorithm, which estimates the parameter of the linear reward function while also maintaining a confidence ellipsoid around the estimate. The agent chooses the arm that has the highest estimated expected reward, assuming that the parameter lies within the confidence ellipsoid.
Creating an agent requires knowledge of the observation and the action specification. When defining the agent, we set the boolean parameter accepts_per_arm_features to True.
End of explanation
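Conceptually (an illustrative sketch, not the TF-Agents implementation), a LinUCB-style score for an arm with feature vector x adds an exploration bonus taken from the confidence ellipsoid to the estimated reward:
def linucb_score(x, theta_hat, cov_inv, exploration=1.0):
    # theta_hat: current estimate of the linear reward parameter.
    # cov_inv:   inverse of the design (covariance) matrix defining the ellipsoid.
    return x @ theta_hat + exploration * np.sqrt(x @ cov_inv @ x)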
print('training data spec: ', agent.training_data_spec)
Explanation: The Flow of Training Data
This section gives a sneak peek into the mechanics of how per-arm features go from the policy to training. Feel free to jump to the next section (Defining the Regret Metric) and come back here later if interested.
First, let us have a look at the data specification in the agent. The training_data_spec attribute of the agent specifies what elements and structure the training data should have.
End of explanation
print('observation spec in training: ', agent.training_data_spec.observation)
Explanation: If we have a closer look at the observation part of the spec, we see that it does not contain per-arm features!
End of explanation
print('chosen arm features: ', agent.training_data_spec.policy_info.chosen_arm_features)
Explanation: What happened to the per-arm features? To answer this question, first we note that when the LinUCB agent trains, it does not need the per-arm features of all arms, it only needs those of the chosen arm. Hence, it makes sense to drop the tensor of shape [BATCH_SIZE, NUM_ACTIONS, PER_ARM_DIM], as it is very wasteful, especially if the number of actions is large.
But still, the per-arm features of the chosen arm must be somewhere! To this end, we make sure that the LinUCB policy stores the features of the chosen arm within the policy_info field of the training data:
End of explanation
def _all_rewards(observation, hidden_param):
"""Outputs rewards for all actions, given an observation."""
hidden_param = tf.cast(hidden_param, dtype=tf.float32)
global_obs = observation['global']
per_arm_obs = observation['per_arm']
num_actions = tf.shape(per_arm_obs)[1]
tiled_global = tf.tile(
tf.expand_dims(global_obs, axis=1), [1, num_actions, 1])
concatenated = tf.concat([tiled_global, per_arm_obs], axis=-1)
rewards = tf.linalg.matvec(concatenated, hidden_param)
return rewards
def optimal_reward(observation):
"""Outputs the maximum expected reward for every element in the batch."""
return tf.reduce_max(_all_rewards(observation, reward_param), axis=1)
regret_metric = tf_bandit_metrics.RegretMetric(optimal_reward)
Explanation: We see from the shape that the chosen_arm_features field has only the feature vector of one arm, and that will be the chosen arm. Note that the policy_info, and with it the chosen_arm_features, is part of the training data, as we saw from inspecting the training data spec, and thus it is available at training time.
Defining the Regret Metric
Before starting the training loop, we define some utility functions that help calculate the regret of our agent. These functions help determine the optimal expected reward given the set of actions (given by their arm features) and the linear parameter that is hidden from the agent.
End of explanation
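Before launching the loop, the regret ingredients can be evaluated once by hand (an added sketch):
time_step0 = per_arm_tf_env.reset()
print(optimal_reward(time_step0.observation).shape)  # (BATCH_SIZE,): per-example optimal expected reward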
num_iterations = 20 # @param
steps_per_loop = 1 # @param
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.policy.trajectory_spec,
batch_size=BATCH_SIZE,
max_length=steps_per_loop)
observers = [replay_buffer.add_batch, regret_metric]
driver = dynamic_step_driver.DynamicStepDriver(
env=per_arm_tf_env,
policy=agent.collect_policy,
num_steps=steps_per_loop * BATCH_SIZE,
observers=observers)
regret_values = []
for _ in range(num_iterations):
driver.run()
loss_info = agent.train(replay_buffer.gather_all())
replay_buffer.clear()
regret_values.append(regret_metric.result())
Explanation: Now we are all set for starting our bandit training loop. The driver below takes care of choosing actions using the policy, storing rewards of chosen actions in the replay buffer, calculating the predefined regret metric, and executing the training step of the agent.
End of explanation
plt.plot(regret_values)
plt.title('Regret of LinUCB on the Linear per-arm environment')
plt.xlabel('Number of Iterations')
_ = plt.ylabel('Average Regret')
Explanation: Now let's see the result. If we did everything right, the agent is able to estimate the linear reward function well, and thus the policy can pick actions whose expected reward is close to that of the optimal. This is indicated by our above defined regret metric, which goes down and approaches zero.
End of explanation |
2,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical PyTorch
Step1: Here we will also define a constant to decide whether to use the GPU (with CUDA specifically) or the CPU. If you don't have a GPU, set this to False. Later when we create tensors, this variable will be used to decide whether we keep them on CPU or move them to GPU.
Step2: Loading data files
The data for this project is a set of many thousands of English to French translation pairs.
This question on Open Data Stack Exchange pointed me to the open translation site http
Step3: Reading and decoding files
The files are all in Unicode, to simplify we will turn Unicode characters to ASCII, make everything lowercase, and trim most punctuation.
Step4: To read the data file we will split the file into lines, and then split lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English I added the reverse flag to reverse the pairs.
Step5: Filtering sentences
Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. Here the maximum length is 10 words (that includes punctuation) and we're filtering to sentences that translate to the form "I am" or "He is" etc. (accounting for apostrophes being removed).
Step6: The full process for preparing the data is
Step7: Turning training data into Tensors/Variables
To train we need to turn the sentences into something the neural network can understand, which of course means numbers. Each sentence will be split into words and turned into a Tensor, where each word is replaced with the index (from the Lang indexes made earlier). While creating these tensors we will also append the EOS token to signal that the sentence is over.
A Tensor is a multi-dimensional array of numbers, defined with some type e.g. FloatTensor or LongTensor. In this case we'll be using LongTensor to represent an array of integer indexes.
Trainable PyTorch modules take Variables as input, rather than plain Tensors. A Variable is basically a Tensor that is able to keep track of the graph state, which is what makes autograd (automatic calculation of backwards gradients) possible.
Step8: Building the models
The Encoder
<img src="images/encoder-network.png" style="float
Step9: Attention Decoder
Interpreting the Bahdanau et al. model
The attention model in Neural Machine Translation by Jointly Learning to Align and Translate is described as the following series of equations.
Each decoder output is conditioned on the previous outputs and some $\mathbf x$, where $\mathbf x$ consists of the current hidden state (which takes into account previous outputs) and the attention "context", which is calculated below. The function $g$ is a fully-connected layer with a nonlinear activation, which takes as input the values $y_{i-1}$, $s_i$, and $c_i$ concatenated.
$$
p(y_i \mid {y_1,...,y_{i-1}},\mathbf{x}) = g(y_{i-1}, s_i, c_i)
$$
The current hidden state $s_i$ is calculated by an RNN $f$ with the last hidden state $s_{i-1}$, last decoder output value $y_{i-1}$, and context vector $c_i$.
In the code, the RNN will be a nn.GRU layer, the hidden state $s_i$ will be called hidden, the output $y_i$ called output, and context $c_i$ called context.
$$
s_i = f(s_{i-1}, y_{i-1}, c_i)
$$
The context vector $c_i$ is a weighted sum of all encoder outputs, where each weight $a_{ij}$ is the amount of "attention" paid to the corresponding encoder output $h_j$.
$$
c_i = \sum_{j=1}^{T_x} a_{ij} h_j
$$
... where each weight $a_{ij}$ is a normalized (over all steps) attention "energy" $e_{ij}$ ...
$$
a_{ij} = \dfrac{exp(e_{ij})}{\sum_{k=1}^{T} exp(e_{ik})}
$$
... where each attention energy is calculated with some function $a$ (such as another linear layer) using the last hidden state $s_{i-1}$ and that particular encoder output $h_j$
Step10: Interpreting the Luong et al. model(s)
Effective Approaches to Attention-based Neural Machine Translation by Luong et al. describe a few more attention models that offer improvements and simplifications. They describe a few "global attention" models, the distinction between them being the way the attention scores are calculated.
The general form of the attention calculation relies on the target (decoder) side hidden state and corresponding source (encoder) side state, normalized over all states to get values summing to 1
Step11: Now we can build a decoder that plugs this Attn module in after the RNN to calculate attention weights, and apply those weights to the encoder outputs to get a context vector.
Step12: Testing the models
To make sure the Encoder and Decoder model are working (and working together) we'll do a quick test with fake word inputs
Step13: Training
Defining a training iteration
To train we first run the input sentence through the encoder word by word, and keep track of every output and the latest hidden state. Next the decoder is given the last hidden state of the decoder as its first hidden state, and the <SOS> token as its first input. From there we iterate to predict a next token from the decoder.
Teacher Forcing and Scheduled Sampling
"Teacher Forcing", or maximum likelihood sampling, means using the real target outputs as each next input when training. The alternative is using the decoder's own guess as the next input. Using teacher forcing may cause the network to converge faster, but when the trained network is exploited, it may exhibit instability.
You can observe outputs of teacher-forced networks that read with coherent grammar but wander far from the correct translation - you could think of it as having learned how to listen to the teacher's instructions, without learning how to venture out on its own.
The solution to the teacher-forcing "problem" is known as Scheduled Sampling, which simply alternates between using the target values and predicted values when training. We will randomly choose whether to use teacher forcing with an if statement while training - sometimes we'll feed the real target as the input (ignoring the decoder's output), sometimes we'll use the decoder's output.
Step14: Finally helper functions to print time elapsed and estimated time remaining, given the current time and progress.
Step15: Running training
With everything in place we can actually initialize a network and start training.
To start, we initialize models, optimizers, and a loss function (criterion).
Step16: Then set up variables for plotting and tracking progress
Step17: To actually train, we call the train function many times, printing a summary as we go.
Note
Step18: Plotting training loss
Plotting is done with matplotlib, using the array plot_losses that was created while training.
Step19: Evaluating the network
Evaluation is mostly the same as training, but there are no targets. Instead we always feed the decoder's predictions back to itself. Every time it predicts a word, we add it to the output string. If it predicts the EOS token we stop there. We also store the decoder's attention outputs for each step to display later.
Step20: We can evaluate random sentences from the training set and print out the input, target, and output to make some subjective quality judgements
Step21: Visualizing attention
A useful property of the attention mechanism is its highly interpretable outputs. Because it is used to weight specific encoder outputs of the input sequence, we can imagine looking where the network is focused most at each time step.
You could simply run plt.matshow(attentions) to see attention output displayed as a matrix, with the columns being input steps and rows being output steps
Step22: For a better viewing experience we will do the extra work of adding axes and labels | Python Code:
import unicodedata
import string
import re
import random
import time
import math
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
Explanation: Practical PyTorch: Translation with a Sequence to Sequence Network and Attention
In this project we will be teaching a neural network to translate from French to English.
```
[KEY: > input, = target, < output]
il est en train de peindre un tableau .
= he is painting a picture .
< he is painting a picture .
pourquoi ne pas essayer ce vin delicieux ?
= why not try that delicious wine ?
< why not try that delicious wine ?
elle n est pas poete mais romanciere .
= she is not a poet but a novelist .
< she not not a poet but a novelist .
vous etes trop maigre .
= you re too skinny .
< you re all alone .
```
... to varying degrees of success.
This is made possible by the simple but powerful idea of the sequence to sequence network, in which two recurrent neural networks work together to transform one sequence to another. An encoder network condenses an input sequence into a single vector, and a decoder network unfolds that vector into a new sequence.
To improve upon this model we'll use an attention mechanism, which lets the decoder learn to focus over a specific range of the input sequence.
The Sequence to Sequence model
A Sequence to Sequence network, or seq2seq network, or Encoder Decoder network, is a model consisting of two separate RNNs called the encoder and decoder. The encoder reads an input sequence one item at a time, and outputs a vector at each step. The final output of the encoder is kept as the context vector. The decoder uses this context vector to produce a sequence of outputs one step at a time.
When using a single RNN, there is a one-to-one relationship between inputs and outputs. We would quickly run into problems with different sequence orders and lengths that are common during translation. Consider the simple sentence "Je ne suis pas le chat noir" → "I am not the black cat". Many of the words have a pretty direct translation, like "chat" → "cat". However the differing grammars cause words to be in different orders, e.g. "chat noir" and "black cat". There is also the "ne ... pas" → "not" construction that makes the two sentences have different lengths.
With the seq2seq model, by encoding many inputs into one vector, and decoding from one vector into many outputs, we are freed from the constraints of sequence order and length. The encoded sequence is represented by a single vector, a single point in some N dimensional space of sequences. In an ideal case, this point can be considered the "meaning" of the sequence.
This idea can be extended beyond sequences. Image captioning tasks take an image as input, and output a description of the image (img2seq). Some image generation tasks take a description as input and output a generated image (seq2img). These models can be referred to more generally as "encoder decoder" networks.
The Attention Mechanism
The fixed-length vector carries the burden of encoding the entire "meaning" of the input sequence, no matter how long that may be. With all the variance in language, this is a very hard problem. Imagine two nearly identical sentences, twenty words long, with only one word different. Both the encoders and decoders must be nuanced enough to represent that change as a very slightly different point in space.
The attention mechanism introduced by Bahdanau et al. addresses this by giving the decoder a way to "pay attention" to parts of the input, rather than relying on a single vector. For every step the decoder can select a different part of the input sentence to consider.
Attention is calculated with another feedforward layer in the decoder. This layer will use the current input and hidden state to create a new vector, which is the same size as the input sequence (in practice, a fixed maximum length). This vector is processed through softmax to create attention weights, which are multiplied by the encoders' outputs to create a new context vector, which is then used to predict the next output.
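As a rough illustration of that weighting step, here is a minimal sketch (made-up sizes and random tensors, purely for intuition; the real module is the Attn class further down): the context vector is just a softmax-weighted average of the encoder outputs.
```python
import torch
import torch.nn.functional as F

seq_len, hidden_size = 5, 8
encoder_outputs = torch.randn(seq_len, hidden_size)  # one vector per input word
attn_energies = torch.randn(seq_len)                 # unnormalized attention "scores"

attn_weights = F.softmax(attn_energies, dim=0)       # non-negative, sums to 1 over the input
context = (attn_weights.unsqueeze(1) * encoder_outputs).sum(dim=0)  # weighted average of encoder outputs
```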
Requirements
You will need PyTorch to build and train the models, and matplotlib for plotting training and visualizing attention outputs later.
End of explanation
USE_CUDA = True
Explanation: Here we will also define a constant to decide whether to use the GPU (with CUDA specifically) or the CPU. If you don't have a GPU, set this to False. Later when we create tensors, this variable will be used to decide whether we keep them on CPU or move them to GPU.
End of explanation
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS"}
self.n_words = 2 # Count SOS and EOS
def index_words(self, sentence):
for word in sentence.split(' '):
self.index_word(word)
def index_word(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
Explanation: Loading data files
The data for this project is a set of many thousands of English to French translation pairs.
This question on Open Data Stack Exchange pointed me to the open translation site http://tatoeba.org/ which has downloads available at http://tatoeba.org/eng/downloads - and better yet, someone did the extra work of splitting language pairs into individual text files here: http://www.manythings.org/anki/
The English to French pairs are too big to include in the repo, so download fra-eng.zip, extract the text file in there, and rename it to data/eng-fra.txt before continuing (for some reason the zipfile is named backwards). The file is a tab separated list of translation pairs:
I am cold. J'ai froid.
Similar to the character encoding used in the character-level RNN tutorials, we will be representing each word in a language as a one-hot vector, or giant vector of zeros except for a single one (at the index of the word). Compared to the dozens of characters that might exist in a language, there are many many more words, so the encoding vector is much larger. We will however cheat a bit and trim the data to only use a few thousand words per language.
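For instance, a minimal sketch of what one of these one-hot vectors looks like (a made-up five-word vocabulary, purely for illustration; in practice the models below feed the integer index straight into nn.Embedding rather than building the full vector):
```python
import torch

vocab_size = 5      # pretend vocabulary: ["SOS", "EOS", "je", "suis", "chat"]
word_index = 3      # "suis"
one_hot = torch.zeros(vocab_size)
one_hot[word_index] = 1
print(one_hot)      # tensor([0., 0., 0., 1., 0.])
```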
Indexing words
We'll need a unique index per word to use as the inputs and targets of the networks later. To keep track of all this we will use a helper class called Lang which has word → index (word2index) and index → word (index2word) dictionaries, as well as a count of each word word2count to use to later replace rare words.
End of explanation
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalize_string(s):
s = unicode_to_ascii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
Explanation: Reading and decoding files
The files are all in Unicode, to simplify we will turn Unicode characters to ASCII, make everything lowercase, and trim most punctuation.
End of explanation
def read_langs(lang1, lang2, reverse=False):
print("Reading lines...")
# Read the file and split into lines
lines = open('../data/%s-%s.txt' % (lang1, lang2)).read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalize_string(s) for s in l.split('\t')] for l in lines]
# Reverse pairs, make Lang instances
if reverse:
pairs = [list(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
Explanation: To read the data file we will split the file into lines, and then split lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English I added the reverse flag to reverse the pairs.
End of explanation
MAX_LENGTH = 10
good_prefixes = (
"i am ", "i m ",
"he is", "he s ",
"she is", "she s",
"you are", "you re "
)
def filter_pair(p):
return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH and \
p[1].startswith(good_prefixes)
def filter_pairs(pairs):
return [pair for pair in pairs if filter_pair(pair)]
Explanation: Filtering sentences
Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. Here the maximum length is 10 words (that includes punctuation) and we're filtering to sentences that translate to the form "I am" or "He is" etc. (accounting for apostrophes being removed).
End of explanation
def prepare_data(lang1_name, lang2_name, reverse=False):
input_lang, output_lang, pairs = read_langs(lang1_name, lang2_name, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filter_pairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Indexing words...")
for pair in pairs:
input_lang.index_words(pair[0])
output_lang.index_words(pair[1])
return input_lang, output_lang, pairs
input_lang, output_lang, pairs = prepare_data('eng', 'fra', True)
# Print an example pair
print(random.choice(pairs))
Explanation: The full process for preparing the data is:
Read text file and split into lines, split lines into pairs
Normalize text, filter by length and content
Make word lists from sentences in pairs
End of explanation
# Return a list of indexes, one for each word in the sentence
def indexes_from_sentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')]
def variable_from_sentence(lang, sentence):
indexes = indexes_from_sentence(lang, sentence)
indexes.append(EOS_token)
var = Variable(torch.LongTensor(indexes).view(-1, 1))
# print('var =', var)
if USE_CUDA: var = var.cuda()
return var
def variables_from_pair(pair):
input_variable = variable_from_sentence(input_lang, pair[0])
target_variable = variable_from_sentence(output_lang, pair[1])
return (input_variable, target_variable)
Explanation: Turning training data into Tensors/Variables
To train we need to turn the sentences into something the neural network can understand, which of course means numbers. Each sentence will be split into words and turned into a Tensor, where each word is replaced with the index (from the Lang indexes made earlier). While creating these tensors we will also append the EOS token to signal that the sentence is over.
A Tensor is a multi-dimensional array of numbers, defined with some type e.g. FloatTensor or LongTensor. In this case we'll be using LongTensor to represent an array of integer indexes.
Trainable PyTorch modules take Variables as input, rather than plain Tensors. A Variable is basically a Tensor that is able to keep track of the graph state, which is what makes autograd (automatic calculation of backwards gradients) possible.
End of explanation
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1):
super(EncoderRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.n_layers = n_layers
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers)
def forward(self, word_inputs, hidden):
# Note: we run this all at once (over the whole input sequence)
seq_len = len(word_inputs)
embedded = self.embedding(word_inputs).view(seq_len, 1, -1)
output, hidden = self.gru(embedded, hidden)
return output, hidden
def init_hidden(self):
hidden = Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
if USE_CUDA: hidden = hidden.cuda()
return hidden
Explanation: Building the models
The Encoder
<img src="images/encoder-network.png" style="float: right" />
The encoder of a seq2seq network is a RNN that outputs some value for every word from the input sentence. For every input word the encoder outputs a vector and a hidden state, and uses the hidden state for the next input word.
End of explanation
class BahdanauAttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):
        super(BahdanauAttnDecoderRNN, self).__init__()
# Define parameters
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout_p = dropout_p
self.max_length = max_length
# Define layers
self.embedding = nn.Embedding(output_size, hidden_size)
self.dropout = nn.Dropout(dropout_p)
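        # note: GeneralAttn is not defined in this notebook; an attention module such as the Attn class defined further below is what is intended here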
self.attn = GeneralAttn(hidden_size)
self.gru = nn.GRU(hidden_size * 2, hidden_size, n_layers, dropout=dropout_p)
        self.out = nn.Linear(hidden_size * 2, output_size)
def forward(self, word_input, last_hidden, encoder_outputs):
# Note that we will only be running forward for a single decoder time step, but will use all encoder outputs
# Get the embedding of the current input word (last output word)
word_embedded = self.embedding(word_input).view(1, 1, -1) # S=1 x B x N
word_embedded = self.dropout(word_embedded)
# Calculate attention weights and apply to encoder outputs
attn_weights = self.attn(last_hidden[-1], encoder_outputs)
context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) # B x 1 x N
# Combine embedded input word and attended context, run through RNN
rnn_input = torch.cat((word_embedded, context), 2)
output, hidden = self.gru(rnn_input, last_hidden)
# Final output layer
        output = output.squeeze(0) # B x N
        context = context.squeeze(1) # B x N
        output = F.log_softmax(self.out(torch.cat((output, context), 1)))
# Return final output, hidden state, and attention weights (for visualization)
return output, hidden, attn_weights
Explanation: Attention Decoder
Interpreting the Bahdanau et al. model
The attention model in Neural Machine Translation by Jointly Learning to Align and Translate is described as the following series of equations.
Each decoder output is conditioned on the previous outputs and some $\mathbf x$, where $\mathbf x$ consists of the current hidden state (which takes into account previous outputs) and the attention "context", which is calculated below. The function $g$ is a fully-connected layer with a nonlinear activation, which takes as input the values $y_{i-1}$, $s_i$, and $c_i$ concatenated.
$$
p(y_i \mid {y_1,...,y_{i-1}},\mathbf{x}) = g(y_{i-1}, s_i, c_i)
$$
The current hidden state $s_i$ is calculated by an RNN $f$ with the last hidden state $s_{i-1}$, last decoder output value $y_{i-1}$, and context vector $c_i$.
In the code, the RNN will be a nn.GRU layer, the hidden state $s_i$ will be called hidden, the output $y_i$ called output, and context $c_i$ called context.
$$
s_i = f(s_{i-1}, y_{i-1}, c_i)
$$
The context vector $c_i$ is a weighted sum of all encoder outputs, where each weight $a_{ij}$ is the amount of "attention" paid to the corresponding encoder output $h_j$.
$$
c_i = \sum_{j=1}^{T_x} a_{ij} h_j
$$
... where each weight $a_{ij}$ is a normalized (over all steps) attention "energy" $e_{ij}$ ...
$$
a_{ij} = \dfrac{exp(e_{ij})}{\sum_{k=1}^{T} exp(e_{ik})}
$$
... where each attention energy is calculated with some function $a$ (such as another linear layer) using the last hidden state $s_{i-1}$ and that particular encoder output $h_j$:
$$
e_{ij} = a(s_{i-1}, h_j)
$$
Implementing the Bahdanau et al. model
In summary our decoder should consist of four main parts - an embedding layer turning an input word into a vector; a layer to calculate the attention energy per encoder output; a RNN layer; and an output layer.
The decoder's inputs are the last RNN hidden state $s_{i-1}$, last output $y_{i-1}$, and all encoder outputs $h_*$.
embedding layer with inputs $y_{i-1}$
embedded = embedding(last_rnn_output)
attention layer $a$ with inputs $(s_{i-1}, h_j)$ and outputs $e_{ij}$, normalized to create $a_{ij}$
attn_energies[j] = attn_layer(last_hidden, encoder_outputs[j])
attn_weights = normalize(attn_energies)
context vector $c_i$ as an attention-weighted average of encoder outputs
context = sum(attn_weights * encoder_outputs)
RNN layer(s) $f$ with inputs $(s_{i-1}, y_{i-1}, c_i)$ and internal hidden state, outputting $s_i$
rnn_input = concat(embedded, context)
rnn_output, rnn_hidden = rnn(rnn_input, last_hidden)
an output layer $g$ with inputs $(y_{i-1}, s_i, c_i)$, outputting $y_i$
output = out(embedded, rnn_output, context)
End of explanation
class Attn(nn.Module):
def __init__(self, method, hidden_size, max_length=MAX_LENGTH):
super(Attn, self).__init__()
self.method = method
self.hidden_size = hidden_size
if self.method == 'general':
self.attn = nn.Linear(self.hidden_size, hidden_size)
elif self.method == 'concat':
self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
self.other = nn.Parameter(torch.FloatTensor(1, hidden_size))
def forward(self, hidden, encoder_outputs):
seq_len = len(encoder_outputs)
# Create variable to store attention energies
attn_energies = Variable(torch.zeros(seq_len)) # B x 1 x S
if USE_CUDA: attn_energies = attn_energies.cuda()
# Calculate energies for each encoder output
for i in range(seq_len):
attn_energies[i] = self.score(hidden, encoder_outputs[i])
# Normalize energies to weights in range 0 to 1, resize to 1 x 1 x seq_len
return F.softmax(attn_energies).unsqueeze(0).unsqueeze(0)
def score(self, hidden, encoder_output):
if self.method == 'dot':
energy = hidden.dot(encoder_output)
return energy
elif self.method == 'general':
energy = self.attn(encoder_output)
energy = hidden.dot(energy)
return energy
elif self.method == 'concat':
energy = self.attn(torch.cat((hidden, encoder_output), 1))
energy = self.other.dot(energy)
return energy
Explanation: Interpreting the Luong et al. model(s)
Effective Approaches to Attention-based Neural Machine Translation by Luong et al. describe a few more attention models that offer improvements and simplifications. They describe a few "global attention" models, the distinction between them being the way the attention scores are calculated.
The general form of the attention calculation relies on the target (decoder) side hidden state and corresponding source (encoder) side state, normalized over all states to get values summing to 1:
$$
a_t(s) = align(h_t, \bar h_s) = \dfrac{exp(score(h_t, \bar h_s))}{\sum_{s'} exp(score(h_t, \bar h_{s'}))}
$$
The specific "score" function that compares two states is either dot, a simple dot product between the states; general, a dot product between the decoder hidden state and a linear transform of the encoder state; or concat, a dot product between a new parameter $v_a$ and a linear transform of the states concatenated together.
$$
score(h_t, \bar h_s) =
\begin{cases}
h_t ^\top \bar h_s & dot \
h_t ^\top \textbf{W}_a \bar h_s & general \
v_a ^\top \textbf{W}_a [ h_t ; \bar h_s ] & concat
\end{cases}
$$
The modular definition of these scoring functions gives us an opportunity to build specific attention module that can switch between the different score methods. The input to this module is always the hidden state (of the decoder RNN) and set of encoder outputs.
End of explanation
class AttnDecoderRNN(nn.Module):
def __init__(self, attn_model, hidden_size, output_size, n_layers=1, dropout_p=0.1):
super(AttnDecoderRNN, self).__init__()
# Keep parameters for reference
self.attn_model = attn_model
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout_p = dropout_p
# Define layers
self.embedding = nn.Embedding(output_size, hidden_size)
self.gru = nn.GRU(hidden_size * 2, hidden_size, n_layers, dropout=dropout_p)
self.out = nn.Linear(hidden_size * 2, output_size)
# Choose attention model
if attn_model != 'none':
self.attn = Attn(attn_model, hidden_size)
def forward(self, word_input, last_context, last_hidden, encoder_outputs):
# Note: we run this one step at a time
# Get the embedding of the current input word (last output word)
word_embedded = self.embedding(word_input).view(1, 1, -1) # S=1 x B x N
# Combine embedded input word and last context, run through RNN
rnn_input = torch.cat((word_embedded, last_context.unsqueeze(0)), 2)
rnn_output, hidden = self.gru(rnn_input, last_hidden)
# Calculate attention from current RNN state and all encoder outputs; apply to encoder outputs
attn_weights = self.attn(rnn_output.squeeze(0), encoder_outputs)
context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) # B x 1 x N
# Final output layer (next word prediction) using the RNN hidden state and context vector
rnn_output = rnn_output.squeeze(0) # S=1 x B x N -> B x N
context = context.squeeze(1) # B x S=1 x N -> B x N
output = F.log_softmax(self.out(torch.cat((rnn_output, context), 1)))
# Return final output, hidden state, and attention weights (for visualization)
return output, context, hidden, attn_weights
Explanation: Now we can build a decoder that plugs this Attn module in after the RNN to calculate attention weights, and apply those weights to the encoder outputs to get a context vector.
End of explanation
encoder_test = EncoderRNN(10, 10, 2)
decoder_test = AttnDecoderRNN('general', 10, 10, 2)
print(encoder_test)
print(decoder_test)
encoder_hidden = encoder_test.init_hidden()
word_input = Variable(torch.LongTensor([1, 2, 3]))
if USE_CUDA:
encoder_test.cuda()
word_input = word_input.cuda()
encoder_outputs, encoder_hidden = encoder_test(word_input, encoder_hidden)
word_inputs = Variable(torch.LongTensor([1, 2, 3]))
decoder_attns = torch.zeros(1, 3, 3)
decoder_hidden = encoder_hidden
decoder_context = Variable(torch.zeros(1, decoder_test.hidden_size))
if USE_CUDA:
decoder_test.cuda()
word_inputs = word_inputs.cuda()
decoder_context = decoder_context.cuda()
for i in range(3):
decoder_output, decoder_context, decoder_hidden, decoder_attn = decoder_test(word_inputs[i], decoder_context, decoder_hidden, encoder_outputs)
print(decoder_output.size(), decoder_hidden.size(), decoder_attn.size())
decoder_attns[0, i] = decoder_attn.squeeze(0).cpu().data
Explanation: Testing the models
To make sure the Encoder and Decoder model are working (and working together) we'll do a quick test with fake word inputs:
End of explanation
teacher_forcing_ratio = 0.5
clip = 5.0
def train(input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
# Zero gradients of both optimizers
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
loss = 0 # Added onto for each word
# Get size of input and target sentences
input_length = input_variable.size()[0]
target_length = target_variable.size()[0]
# Run words through encoder
encoder_hidden = encoder.init_hidden()
encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
# Prepare input and output variables
decoder_input = Variable(torch.LongTensor([[SOS_token]]))
decoder_context = Variable(torch.zeros(1, decoder.hidden_size))
decoder_hidden = encoder_hidden # Use last hidden state from encoder to start decoder
if USE_CUDA:
decoder_input = decoder_input.cuda()
decoder_context = decoder_context.cuda()
# Choose whether to use teacher forcing
use_teacher_forcing = random.random() < teacher_forcing_ratio
if use_teacher_forcing:
# Teacher forcing: Use the ground-truth target as the next input
for di in range(target_length):
decoder_output, decoder_context, decoder_hidden, decoder_attention = decoder(decoder_input, decoder_context, decoder_hidden, encoder_outputs)
loss += criterion(decoder_output, target_variable[di])
decoder_input = target_variable[di] # Next target is next input
else:
# Without teacher forcing: use network's own prediction as the next input
for di in range(target_length):
decoder_output, decoder_context, decoder_hidden, decoder_attention = decoder(decoder_input, decoder_context, decoder_hidden, encoder_outputs)
loss += criterion(decoder_output, target_variable[di])
# Get most likely word index (highest value) from output
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
decoder_input = Variable(torch.LongTensor([[ni]])) # Chosen word is next input
if USE_CUDA: decoder_input = decoder_input.cuda()
# Stop at end of sentence (not necessary when using known targets)
if ni == EOS_token: break
# Backpropagation
loss.backward()
torch.nn.utils.clip_grad_norm(encoder.parameters(), clip)
torch.nn.utils.clip_grad_norm(decoder.parameters(), clip)
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0] / target_length
Explanation: Training
Defining a training iteration
To train we first run the input sentence through the encoder word by word, and keep track of every output and the latest hidden state. Next the decoder is given the last hidden state of the encoder as its first hidden state, and the <SOS> token as its first input. From there we iterate to predict a next token from the decoder.
Teacher Forcing and Scheduled Sampling
"Teacher Forcing", or maximum likelihood sampling, means using the real target outputs as each next input when training. The alternative is using the decoder's own guess as the next input. Using teacher forcing may cause the network to converge faster, but when the trained network is exploited, it may exhibit instability.
You can observe outputs of teacher-forced networks that read with coherent grammar but wander far from the correct translation - you could think of it as having learned how to listen to the teacher's instructions, without learning how to venture out on its own.
The solution to the teacher-forcing "problem" is known as Scheduled Sampling, which simply alternates between using the target values and predicted values when training. We will randomly choose to use teacher forcing with an if statement while training - sometimes we'll feed the real target as the input (ignoring the decoder's output), sometimes we'll use the decoder's own output.
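The train function below keeps teacher_forcing_ratio fixed at 0.5; a true schedule would decay it as training progresses. A minimal sketch of one such decay (an inverse-sigmoid schedule with an arbitrary, untuned constant k, not part of the original code):
```python
import math

def teacher_forcing_ratio_at(epoch, k=5000):
    # close to 1.0 early in training, decaying towards 0.0 as epoch grows
    return k / (k + math.exp(epoch / k))
```
The coin flip would then become random.random() < teacher_forcing_ratio_at(epoch).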
End of explanation
def as_minutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def time_since(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (- %s)' % (as_minutes(s), as_minutes(rs))
Explanation: Finally helper functions to print time elapsed and estimated time remaining, given the current time and progress.
End of explanation
attn_model = 'general'
hidden_size = 500
n_layers = 2
dropout_p = 0.05
# Initialize models
encoder = EncoderRNN(input_lang.n_words, hidden_size, n_layers)
decoder = AttnDecoderRNN(attn_model, hidden_size, output_lang.n_words, n_layers, dropout_p=dropout_p)
# Move models to GPU
if USE_CUDA:
encoder.cuda()
decoder.cuda()
# Initialize optimizers and criterion
learning_rate = 0.0001
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate)
criterion = nn.NLLLoss()
Explanation: Running training
With everything in place we can actually initialize a network and start training.
To start, we initialize models, optimizers, and a loss function (criterion).
End of explanation
# Configuring training
n_epochs = 50000
plot_every = 200
print_every = 1000
# Keep track of time elapsed and running averages
start = time.time()
plot_losses = []
print_loss_total = 0 # Reset every print_every
plot_loss_total = 0 # Reset every plot_every
Explanation: Then set up variables for plotting and tracking progress:
End of explanation
# Begin!
for epoch in range(1, n_epochs + 1):
# Get training data for this cycle
training_pair = variables_from_pair(random.choice(pairs))
input_variable = training_pair[0]
target_variable = training_pair[1]
# Run the train function
loss = train(input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)
# Keep track of loss
print_loss_total += loss
plot_loss_total += loss
if epoch == 0: continue
if epoch % print_every == 0:
print_loss_avg = print_loss_total / print_every
print_loss_total = 0
print_summary = '%s (%d %d%%) %.4f' % (time_since(start, epoch / n_epochs), epoch, epoch / n_epochs * 100, print_loss_avg)
print(print_summary)
if epoch % plot_every == 0:
plot_loss_avg = plot_loss_total / plot_every
plot_losses.append(plot_loss_avg)
plot_loss_total = 0
Explanation: To actually train, we call the train function many times, printing a summary as we go.
Note: If you run this notebook you can train, interrupt the kernel, evaluate, and continue training later. You can comment out the lines above where the encoder and decoder are initialized (so they aren't reset) or simply run the notebook starting from the following cell.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
%matplotlib inline
def show_plot(points):
plt.figure()
fig, ax = plt.subplots()
loc = ticker.MultipleLocator(base=0.2) # put ticks at regular intervals
ax.yaxis.set_major_locator(loc)
plt.plot(points)
show_plot(plot_losses)
Explanation: Plotting training loss
Plotting is done with matplotlib, using the array plot_losses that was created while training.
End of explanation
def evaluate(sentence, max_length=MAX_LENGTH):
input_variable = variable_from_sentence(input_lang, sentence)
input_length = input_variable.size()[0]
# Run through encoder
encoder_hidden = encoder.init_hidden()
encoder_outputs, encoder_hidden = encoder(input_variable, encoder_hidden)
# Create starting vectors for decoder
decoder_input = Variable(torch.LongTensor([[SOS_token]])) # SOS
decoder_context = Variable(torch.zeros(1, decoder.hidden_size))
if USE_CUDA:
decoder_input = decoder_input.cuda()
decoder_context = decoder_context.cuda()
decoder_hidden = encoder_hidden
decoded_words = []
decoder_attentions = torch.zeros(max_length, max_length)
# Run through decoder
for di in range(max_length):
decoder_output, decoder_context, decoder_hidden, decoder_attention = decoder(decoder_input, decoder_context, decoder_hidden, encoder_outputs)
decoder_attentions[di,:decoder_attention.size(2)] += decoder_attention.squeeze(0).squeeze(0).cpu().data
# Choose top word from output
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
if ni == EOS_token:
decoded_words.append('<EOS>')
break
else:
decoded_words.append(output_lang.index2word[ni])
# Next input is chosen word
decoder_input = Variable(torch.LongTensor([[ni]]))
if USE_CUDA: decoder_input = decoder_input.cuda()
return decoded_words, decoder_attentions[:di+1, :len(encoder_outputs)]
Explanation: Evaluating the network
Evaluation is mostly the same as training, but there are no targets. Instead we always feed the decoder's predictions back to itself. Every time it predicts a word, we add it to the output string. If it predicts the EOS token we stop there. We also store the decoder's attention outputs for each step to display later.
End of explanation
def evaluate_randomly():
pair = random.choice(pairs)
output_words, decoder_attn = evaluate(pair[0])
output_sentence = ' '.join(output_words)
print('>', pair[0])
print('=', pair[1])
print('<', output_sentence)
print('')
evaluate_randomly()
Explanation: We can evaluate random sentences from the training set and print out the input, target, and output to make some subjective quality judgements:
End of explanation
output_words, attentions = evaluate("je suis trop froid .")
plt.matshow(attentions.numpy())
Explanation: Visualizing attention
A useful property of the attention mechanism is its highly interpretable outputs. Because it is used to weight specific encoder outputs of the input sequence, we can imagine looking where the network is focused most at each time step.
You could simply run plt.matshow(attentions) to see attention output displayed as a matrix, with the columns being input steps and rows being output steps:
End of explanation
def show_attention(input_sentence, output_words, attentions):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + input_sentence.split(' ') + ['<EOS>'], rotation=90)
ax.set_yticklabels([''] + output_words)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
plt.close()
def evaluate_and_show_attention(input_sentence):
output_words, attentions = evaluate(input_sentence)
print('input =', input_sentence)
print('output =', ' '.join(output_words))
show_attention(input_sentence, output_words, attentions)
evaluate_and_show_attention("elle a cinq ans de moins que moi .")
evaluate_and_show_attention("elle est trop petit .")
evaluate_and_show_attention("je ne crains pas de mourir .")
evaluate_and_show_attention("c est un jeune directeur plein de talent .")
Explanation: For a better viewing experience we will do the extra work of adding axes and labels:
End of explanation |
2,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BET Surface Area
The BET equation for determining the specific surface area from multilayer adsorption of nitrogen was first reported in 1938.
Brunauer, Stephen, Paul Hugh Emmett, and Edward Teller. "Adsorption of gases in multimolecular layers." Journal of the American Chemical Society 60, no. 2 (1938): 309-319.
Step1: We then use the BET reference calculation on that restricted range (along with the molecular cross-sectional area of 0.162 nm^2) to do the calculation above. Below we show the transform plot along with its best fit line.
Step2: Finally, we show the BET results. | Python Code:
%matplotlib inline
from micromeritics import bet, util, isotherm_examples as ex, plots
s = ex.carbon_black() # example isotherm of Carbon Black with N2.
min = 0.05 # 0.05 to 0.30 range for BET
max = 0.3
Q,P = util.restrict_isotherm(s.Qads, s.Prel, min, max)
plots.plotIsotherm(s.Qads, s.Prel, s.descr[s.descr.find(':')+1:], min, max )
Explanation: BET Surface Area
The BET equation for determining the specific surface area from multilayer adsorption of nitrogen was first reported in 1938.
Brunauer, Stephen, Paul Hugh Emmett, and Edward Teller. "Adsorption of gases in multimolecular layers." Journal of the American Chemical Society 60, no. 2 (1938): 309-319.
BET Surface Area Calculation Description
The BET data reduction applies to isotherm data. The isotherm consists of the quantity adsorbed $Q_i$ (in cm^3/g STP) and the relative pressure $P^{rel}_i$ for each point $i$ selected for the calculation.
The BET model also requires the cross sectional area of the adsorptive, $\sigma_{ads}$ (in nm^2).
BET transformation calculation
The first thing is to calculate the BET transform $T_i$. This is done as follows:
$\displaystyle {T_i=\frac{1}{Q_i(1/P^{rel}_i-1)}}$
Then a least-squares fit is performed on the $T_i$ vs. $P^{rel}_i$ data. This fit calculates the following:
$m$: The slope of the best fit line.
$Y_0$: The Y-intercept of the best fit line.
$\sigma_m$: The uncertainty of the slope from the fit calculation.
$\sigma_{Y_0}$: The uncertainty of the Y-intercept from the fit calculation.
$r$: The correlation coefficient between the $T_i$ and $P^{rel}_i$.
Calculating the BET results
The slope of the line and intercept may be used to calculate the monolayer capacity $Q_m$ and the BET $c$ constant.
The first thing to calculate is the BET $C$ value:
$\displaystyle {C = 1 + \frac{m}{Y_0}}$
From this we can calculate the monolayer capacity $Q_m$:
$\displaystyle Q_m = \frac{1}{C*Y_0}$
Finally, we can calculate the BET surface area $A_{BET}$:
$\displaystyle A_{BET} =\frac{N_A \sigma_{ads}}{V_{STP} U_{mn^2,m^2} (m + Y_0)}$
Where:
* $V_{STP}$: volume of a mole of gas at STP: $22414.0$ cm^3
* $N_A$: Number of molecules in a mole of gas: $6.02214129\times 10^{23}$.
* $U_{mn^2,m^2}$: Unit conversion from nm^2 to m^2: $10^{18}$.
Finally, we can find the uncertainty in the surface area $\sigma_{A_{BET}}$ from the uncertainty in the line fit results:
$\displaystyle \sigma_{A_{BET}} = A_{BET} \frac{\sqrt{\sigma_m^2+\sigma_{Y_0}^2}}{m + Y_0}$
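As a cross-check of the equations above, here is a minimal NumPy sketch of the same reduction (the function name and variable names are mine; the example below uses the micromeritics reference implementation rather than this sketch):
```python
import numpy as np

def bet_from_isotherm(Qads, Prel, csa_nm2=0.162):
    T = 1.0 / (Qads * (1.0 / Prel - 1.0))      # BET transform T_i
    m, Y0 = np.polyfit(Prel, T, 1)             # slope and Y-intercept of the best-fit line
    C = 1.0 + m / Y0                           # BET C constant
    Qm = 1.0 / (C * Y0)                        # monolayer capacity (cm^3/g STP)
    sa = 6.02214129e23 * csa_nm2 / (22414.0 * 1e18 * (m + Y0))  # surface area (m^2/g)
    return sa, C, Qm
```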
BET: Example Calculation
As an example of the BET calculation, we use the reference calculation from the report-models-python repository on GitHub from Micromeritics. This tool not only provides the reference calculations, but also provides example data to work with, and utilities to make working with the data better.
For this example, we start with a Carbon Black reference material $N_2$ isotherm at 77K, and restrict it to the usual 0.05 to 0.3 relative pressure range.
End of explanation
B = bet.bet(Q, P, 0.162)
plots.plotBET(P, B.transform, B.line_fit.slope, B.line_fit.y_intercept, max )
Explanation: We then use the BET reference calculation on that restricted range (along with the molecular cross-sectional area of 0.162 nm^2) to do the calculation above. Below we show the transform plot along with its best fit line.
End of explanation
print("BET surface area: %.4f ± %.4f m²/g" % (B.sa, B.sa_err))
print("C: %.6f" % B.C)
print("Qm: %.4f cm³/g STP" % B.q_m)
Explanation: Finally, we show the BET results.
End of explanation |
2,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.eco - API, API REST
A short review of REST APIs.
Step1: Definition
Step2: Calling the Tastekid API
The World Bank was fairly gentle
Step3: To ask the API which works are similar to Pulp Fiction, we use the following request
Step4: We can also ask for movies only, by just adding the option type = movies to the url | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.eco - API, API REST
A short review of REST APIs.
End of explanation
import requests
data_json = requests.get("http://api.worldbank.org/v2/countries?incomeLevel=LMC&format=json").json()
data_json
data_json[0]
# We can see that some information is missing:
# there is a total of 52 elements
data_json_page_2 = requests.get("http://api.worldbank.org/v2/countries?incomeLevel=LMC&format=json&page=2").json()
data_json_page_2
# to get a single observation
# we can see in the object that element 0 corresponds to information about the pages
data_json[1][0]
Explanation: Definition:
API: apart from being a word worth 5 points at Scrabble, what is it exactly?
API stands for Application Programming Interface. The most important word is "interface", and it is the simplest one, because we all use interfaces.
Fine, and what is an interface?
Larousse definition: "An interface is a device that allows exchanges and interactions between different actors."
To keep it simple, an API is an efficient way to make two applications communicate with each other: concretely, a service provider makes a codified interface available to developers, which lets them obtain information from requests.
Without going into technical detail, the dialogue looks like: "send me your address in the form X = street, Y = City, Z = Country" and I, in return, will send you the code to display on your site to get the interactive map.
Existing APIs
More and more sites make APIs available to developers and other curious people.
To name a few:
Twitter : https://dev.twitter.com/rest/public
Facebook : https://developers.facebook.com/
Instagram : https://www.instagram.com/developer/
Spotify : https://developer.spotify.com/web-api/
Or also:
Pole Emploi : https://www.emploi-store-dev.fr/portail-developpeur-cms/home.html
SNCF : https://data.sncf.com/api
AirFrance KLM : https://developer.airfranceklm.com/Our_Apis
Banque Mondiale : https://datahelpdesk.worldbank.org/knowledgebase/topics/125589
How do you talk to an API?
Most APIs give examples of how to communicate with the data available on the site.
Put simply, you have to find the url that returns the data you want.
For example, with the World Bank API, here is how a request for World Bank data is written:
http://api.worldbank.org/countries?incomeLevel=LMC
With this url, we ask for the list of countries whose income level is LMC, that is, "Lower middle income".
By clicking on the link, the site returns data in XML, which looks a lot like what we saw earlier with scraping: a structure with tags that open and close.
Looking more closely, we see that the following information appears
Country code | Country name | Region | Income classification | Lending types for these countries | Capital city | Longitude | Latitude
<wb:country id="ARM">
<wb:iso2Code>AM</wb:iso2Code>
<wb:name>Armenia</wb:name>
<wb:region id="ECS">Europe & Central Asia</wb:region>
<wb:adminregion id="ECA">Europe & Central Asia (excluding high income)</wb:adminregion>
<wb:incomeLevel id="LMC">Lower middle income</wb:incomeLevel>
<wb:lendingType id="IBD">IBRD</wb:lendingType>
<wb:capitalCity>Yerevan</wb:capitalCity>
<wb:longitude>44.509</wb:longitude>
<wb:latitude>40.1596</wb:latitude>
</wb:country>
<wb:country id="BGD">
<wb:iso2Code>BD</wb:iso2Code>
<wb:name>Bangladesh</wb:name>
<wb:region id="SAS">South Asia</wb:region>
<wb:adminregion id="SAS">South Asia</wb:adminregion>
<wb:incomeLevel id="LMC">Lower middle income</wb:incomeLevel>
<wb:lendingType id="IDX">IDA</wb:lendingType>
<wb:capitalCity>Dhaka</wb:capitalCity>
<wb:longitude>90.4113</wb:longitude>
<wb:latitude>23.7055</wb:latitude>
</wb:country>
Using this url instead: http://api.worldbank.org/countries?incomeLevel=LMC&format=json, we directly get a json, which in the end is almost like a Python dictionary.
So nothing could be simpler: to ask an API for something, all you need is the right url.
And Python: how does it talk to APIs?
This is where we come back to the basics: we will need Python's requests module and, depending on the API, a parser such as BeautifulSoup, or nothing at all if we manage to get a json.
We will use the requests module and its get method: we give it the url of the API we are interested in, ask it to turn the result into a json, and we are done!
Calling the World Bank API
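As a quick illustration (my own sketch, reusing the v2 endpoint from the code above), the pages field returned in the metadata element lets you loop over every page of results instead of fetching page 2 by hand:
```python
import requests

url = "http://api.worldbank.org/v2/countries?incomeLevel=LMC&format=json"
first_page = requests.get(url).json()
n_pages = first_page[0]["pages"]   # metadata: page, pages, per_page, total
countries = first_page[1]          # element 1 holds the actual records
for page in range(2, n_pages + 1):
    countries += requests.get(url + "&page={}".format(page)).json()[1]
len(countries)
```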
End of explanation
import os
from pyquickhelper.loghelper import get_password
key = get_password("tastekid", "ensae_teaching_cs,key")
if key is None:
raise ValueError("password cannot be None.")
Explanation: Calling the Tastekid API
The World Bank was fairly gentle: let's move on to something a bit tougher. We will use the API of Tastekid, a recommendation site for movies, books, etc.
To do so, you first need to create an account:
End of explanation
url = "https://tastedive.com/api/similar?q=pulp+fiction&info=1&k={}".format(key)
recommandations_res = requests.get(url)
if "401 Unauthorized" in recommandations_res.text:
print("Le site tastekid n'accepte pas les requêtes non authentifiée.")
print(recommandations_res.text)
recommandations_res = None
if recommandations_res is not None:
try:
recommandations = recommandations_res.json()
except Exception as e:
print(e)
        # Sometimes the json format is malformed. Let's look at why.
print()
raise Exception(recommandations_res.text) from e
if recommandations_res is not None:
print(str(recommandations)[:2000])
# the API reminds us of the information about the item we searched for: Pulp Fiction
recommandations['Similar']['Info']
# it gives us books / movies that are close according to people's tastes
for element in recommandations['Similar']['Results'] :
print(element['Name'],element['Type'])
Explanation: To ask the API which works are similar to Pulp Fiction, we use the following request
End of explanation
recommandations_films = requests.get("https://tastedive.com/api/similar?q=pulp+fiction&type=movie&info=1&k={}"
.format(key)).json()
print(str(recommandations_films)[:2000])
# it gives us books / movies that are close according to people's tastes
for element in recommandations_films['Similar']['Results'] :
print(element['Name'],element['Type'])
film_suivant = "Reservoir Dogs"
recommandations_suivantes_films = requests.get(
"https://tastedive.com/api/similar?q={}&type=movie&info=1&k={}"
.format(film_suivant, key)).json()
# it gives us books / movies that are close according to people's tastes
for element in recommandations_suivantes_films['Similar']['Results'] :
print(element['Name'],element['Type'])
## We can then compare the movies common to both searches
liste1 = [element['Name'] for element in recommandations_films['Similar']['Results'] ]
liste2 = [element['Name'] for element in recommandations_suivantes_films['Similar']['Results'] ]
films_commun = set(liste1).intersection(liste2)
films_commun, len(films_commun)
films_non_partages = [f for f in liste1 if f not in liste2] + [f for f in liste2 if f not in liste1]
films_non_partages
Explanation: We can also ask for movies only, by just adding the option type = movies to the url
End of explanation |
2,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to read BigQuery data from TensorFlow 2.0 efficiently
This notebook accompanies the article
"How to read BigQuery data from TensorFlow 2.0 efficiently"
The example problem is to find credit card fraud from the dataset published in
Step1: Find the breakoff point etc. for Keras
When we do the training in Keras & TensorFlow, we need to find the place to split the dataset and how to weight the imbalanced data.
(BigQuery ML did that for us because we specified 'seq' as the split method and auto_class_weights to be True).
Step2: The time cutoff is 144803 and the Keras model's output bias needs to be set at -6.36
The class weights need to be 289.4 and 0.5
Training a TensorFlow/Keras model that reads from BigQuery
Create the dataset from BigQuery
Step3: Create Keras model
Step4: Load TensorFlow model into BigQuery
Now that we have trained a TensorFlow model off BigQuery data ...
let's load the model into BigQuery and use it for batch prediction!
Step5: Now predict with this model (the reason it's called 'd4' is because the output node of my Keras model was called 'd4').
To get probabilities, etc. we'd have to add the corresponding outputs to the Keras model. | Python Code:
%%bash
# create output dataset
bq mk advdata
%%bigquery
CREATE OR REPLACE MODEL advdata.ulb_fraud_detection
TRANSFORM(
* EXCEPT(Amount),
SAFE.LOG(Amount) AS log_amount
)
OPTIONS(
INPUT_LABEL_COLS=['class'],
AUTO_CLASS_WEIGHTS = TRUE,
DATA_SPLIT_METHOD='seq',
DATA_SPLIT_COL='Time',
MODEL_TYPE='logistic_reg'
) AS
SELECT
*
FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection`
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL advdata.ulb_fraud_detection)
%%bigquery
SELECT predicted_class_probs, Class
FROM ML.PREDICT( MODEL advdata.ulb_fraud_detection,
(SELECT * FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection` WHERE Time = 85285.0)
)
Explanation: How to read BigQuery data from TensorFlow 2.0 efficiently
This notebook accompanies the article
"How to read BigQuery data from TensorFlow 2.0 efficiently"
The example problem is to find credit card fraud from the dataset published in:
<i>
Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
</i>
and available in BigQuery at <pre>bigquery-public-data.ml_datasets.ulb_fraud_detection</pre>
Benchmark Model
In order to compare things, we will do a simple logistic regression in BigQuery ML.
Note that we are using all the columns in the dataset as predictors (except for the Time and Class columns).
The Time column is used to split the dataset 80:20 with the first 80% used for training and the last 20% used for evaluation.
We will also have BigQuery ML automatically balance the weights.
Because the Amount column has a huge range, we take the log of it in preprocessing.
End of explanation
%%bigquery
WITH counts AS (
SELECT
APPROX_QUANTILES(Time, 5)[OFFSET(4)] AS train_cutoff
, COUNTIF(CLASS > 0) AS pos
, COUNTIF(CLASS = 0) AS neg
FROM `bigquery-public-data`.ml_datasets.ulb_fraud_detection
)
SELECT
train_cutoff
, SAFE.LOG(SAFE_DIVIDE(pos,neg)) AS output_bias
, 0.5*SAFE_DIVIDE(pos + neg, pos) AS weight_pos
, 0.5*SAFE_DIVIDE(pos + neg, neg) AS weight_neg
FROM counts
Explanation: Find the breakoff point etc. for Keras
When we do the training in Keras & TensorFlow, we need to find the place to split the dataset and how to weight the imbalanced data.
(BigQuery ML did that for us because we specified 'seq' as the split method and auto_class_weights to be True).
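The same quantities can be checked in plain Python once the class counts are known. A small sketch mirroring the SQL above (the counts here are approximate placeholders; the query computes them exactly):
```python
import numpy as np

pos, neg = 492, 284315                       # approximate fraud / non-fraud counts for this dataset
output_bias = np.log(pos / neg)              # about -6.36
class_weight = {0: 0.5 * (pos + neg) / neg,  # about 0.5
                1: 0.5 * (pos + neg) / pos}  # about 289.4
```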
End of explanation
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def features_and_labels(features):
label = features.pop('Class') # this is what we will train for
return features, label
def read_dataset(client, row_restriction, batch_size=2048):
GCP_PROJECT_ID='ai-analytics-solutions' # CHANGE
COL_NAMES = ['Time', 'Amount', 'Class'] + ['V{}'.format(i) for i in range(1,29)]
COL_TYPES = [dtypes.float64, dtypes.float64, dtypes.int64] + [dtypes.float64 for i in range(1,29)]
    DATASET_GCP_PROJECT_ID, DATASET_ID, TABLE_ID = 'bigquery-public-data.ml_datasets.ulb_fraud_detection'.split('.')
bqsession = client.read_session(
"projects/" + GCP_PROJECT_ID,
DATASET_GCP_PROJECT_ID, TABLE_ID, DATASET_ID,
COL_NAMES, COL_TYPES,
requested_streams=2,
row_restriction=row_restriction)
dataset = bqsession.parallel_read_rows()
return dataset.prefetch(1).map(features_and_labels).shuffle(batch_size*10).batch(batch_size)
client = BigQueryClient()
temp_df = read_dataset(client, 'Time <= 144803', 2)
for row in temp_df:
print(row)
break
train_df = read_dataset(client, 'Time <= 144803', 2048)
eval_df = read_dataset(client, 'Time > 144803', 2048)
Explanation: The time cutoff is 144803 and the Keras model's output bias needs to be set at -6.36
The class weights need to be 289.4 and 0.5
Training a TensorFlow/Keras model that reads from BigQuery
Create the dataset from BigQuery
End of explanation
metrics = [
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='roc_auc'),
]
# create inputs, and pass them into appropriate types of feature columns (here, everything is numeric)
inputs = {
'V{}'.format(i) : tf.keras.layers.Input(name='V{}'.format(i), shape=(), dtype='float64') for i in range(1, 29)
}
inputs['Amount'] = tf.keras.layers.Input(name='Amount', shape=(), dtype='float64')
input_fc = [tf.feature_column.numeric_column(colname) for colname in inputs.keys()]
# transformations. only the Amount is transformed
transformed = inputs.copy()
transformed['Amount'] = tf.keras.layers.Lambda(
lambda x: tf.math.log(tf.math.maximum(x, 0.01)), name='log_amount')(inputs['Amount'])
input_layer = tf.keras.layers.DenseFeatures(input_fc, name='inputs')(transformed)
# Deep learning model
d1 = tf.keras.layers.Dense(16, activation='relu', name='d1')(input_layer)
d2 = tf.keras.layers.Dropout(0.25, name='d2')(d1)
d3 = tf.keras.layers.Dense(16, activation='relu', name='d3')(d2)
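# note: Constant() defaults to a bias of 0 here; the output-bias value computed above (about -6.36) could be passed to it instead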
output = tf.keras.layers.Dense(1, activation='sigmoid', name='d4', bias_initializer=tf.keras.initializers.Constant())(d3)
model = tf.keras.Model(inputs, output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=metrics)
tf.keras.utils.plot_model(model, rankdir='LR')
class_weight = {0: 0.5, 1: 289.4}
history = model.fit(train_df, validation_data=eval_df, epochs=20, class_weight=class_weight)
import matplotlib.pyplot as plt
plt.plot(history.history['val_roc_auc']);
plt.xlabel('Epoch');
plt.ylabel('AUC');
Explanation: Create Keras model
End of explanation
BUCKET='ai-analytics-solutions-kfpdemo' # CHANGE TO SOMETHING THAT YOU OWN
model.save('gs://{}/bqexample/export'.format(BUCKET))
%%bigquery
CREATE OR REPLACE MODEL advdata.keras_fraud_detection
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/bqexample/export/*')
Explanation: Load TensorFlow model into BigQuery
Now that we have trained a TensorFlow model off BigQuery data ...
let's load the model into BigQuery and use it for batch prediction!
End of explanation
%%bigquery
SELECT d4, Class
FROM ML.PREDICT( MODEL advdata.keras_fraud_detection,
(SELECT * FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection` WHERE Time = 85285.0)
)
Explanation: Now predict with this model (the reason it's called 'd4' is because the output node of my Keras model was called 'd4').
To get probabilities, etc. we'd have to add the corresponding outputs to the Keras model.
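For example, a minimal sketch of one way to do that (my own naming and save path, not part of the original notebook) is to rebuild the model with extra named outputs before saving:
```python
predicted_class = tf.keras.layers.Lambda(
    lambda p: tf.cast(p > 0.5, tf.int64), name='predicted_class')(output)
export_model = tf.keras.Model(inputs, {'probability': output,
                                       'predicted_class': predicted_class})
export_model.save('gs://{}/bqexample/export_with_probs'.format(BUCKET))
```
The named outputs would then appear as columns of ML.PREDICT in place of d4.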
End of explanation |
2,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create relaxed geodynamic 1D profile
NOTE
Step1: Now a few imports
Step2: The following code block sets up some parameters for our problem,
including the PerpleX model we will use to determine physical properties of the planet.
Step3: The following code block reinterpolates the entropy and volume onto a grid for smoothing purposes.
As we're using a PerpleX table, we could just have read the original grid in directly, but this way, we could more easily adapt the code to use other BurnMan materials.
This step takes a few seconds (10--30 s on a fast ultrabook), but we only do it once.
Step5: This long code block sets up three functions, which
Step6: Save file
Step7: The following code block attempts to make updating the multipart interactive figure as efficient as possible.
It does this by calculating all of the properties in an inner function, and storing them in a global parameter (global_stored_properties). That function is then wrapped in an outer function that dictates which properties to return. If the inner function has been previously called with the same input parameters, the previously stored properties are returned, otherwise, all properties are calculated from the new input parameters. | Python Code:
interactive = True
if interactive:
%matplotlib ipympl
import ipywidgets as widgets
import mpl_interactions.ipyplot as iplt
Explanation: Create relaxed geodynamic 1D profile
NOTE: This notebook contains an interactive figure with sliders. It relies on the Python modules ipympl and mpl_interactions.
If you want to run this notebook with static figures, restart the kernel, clear any history and set interactive = False in the first code block.
In the mantle, it is common to assume that convecting material is at chemical equilibrium; all of the reactions between phases keep pace with the changes in pressure and temperature. Because of this relaxation, physical properties such as heat capacity $C_P$, thermal expansion $\alpha$ and compressibility $\beta$ must be computed by numerical differentiation of the entropy $\mathcal{S}$ and volume $\mathcal{V}$. It is these values, rather than the unrelaxed values output as standard by BurnMan and PerpleX which should be used in geodynamic simulations.
Relaxed properties can sometimes be very different from their unrelaxed counterparts. Take, for example, the univariant reaction forsterite -> Mg-wadsleyite. This transformation involves a step change in volume, and thus the relaxed compressibility at the transition is infinite. Obviously, if geodynamics software uses compressibility as an input parameter, then whichever meshing is chosen, it will completely miss the transition. There are two solutions to this problem:
* Calculate the entropy and volume at the quadrature points, and calculate $\nabla\mathcal{S}$ and $\nabla\mathcal{V}$ within each cell. This method is computationally expensive and there may be convergence problems if the quadrature points are very close to the positions of near-univariant reactions.
* Smooth $\mathcal{S}(P, T)$ and $\mathcal{V}(P, T)$ by convolution with a 2D Gaussian (in $P$ and $T$) before calculating $C_P$, $\alpha$ and $\beta$. A good rule of thumb is that reactions should span about 4 cells for the latent heat to be captured within a few percent.
The second method is used here to create 1D material property profiles which can be directly used by $ASPECT$. The user of this notebook can vary important mineral physics parameters (rock type, potential temperature, surface gravity) and smoothing parameters (Gaussian widths).
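To make the smoothing idea concrete, here is a minimal sketch of such a convolution using scipy (the grid and the widths are placeholders; the notebook itself relies on burnman's interp_smoothed_array_and_derivatives helper further below, which also returns the derivatives):
```python
import numpy as np
from scipy.ndimage import gaussian_filter

S_grid = np.random.rand(101, 501)   # placeholder for S(P, T) on a regular grid
# sigmas are given in grid cells: (temperature direction, pressure direction)
S_smooth = gaussian_filter(S_grid, sigma=(2.0, 4.0), truncate=4.0)
```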
First, let's install some modules that we need to run interactive figures.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import burnman
from burnman import Layer
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve, brentq
from scipy.integrate import odeint
from scipy.interpolate import UnivariateSpline
plt.style.use('bmh')
Explanation: Now a few imports
End of explanation
perplex_filename = '../../burnman/data/input_perplex/in23_1.tab' # 'example23_hires.tab' # '../../burnman/data/input_perplex/in23_1.tab'
potential_temperature = 1550.
outer_radius = 6371.e3
thickness = 550.e3
n_points = 251
pressure_top = 1.e5
gravity_bottom = 10.
depths = np.linspace(thickness, 0., n_points)
rock = burnman.PerplexMaterial(perplex_filename)
layer = Layer(name='Mantle', radii=outer_radius-depths)
layer.set_material(rock)
layer.set_temperature_mode(temperature_mode='adiabatic',
temperature_top=1550.)
layer.set_pressure_mode(pressure_mode='self-consistent',
pressure_top=1.e5,
gravity_bottom=gravity_bottom)
layer.make()
truncate = 4. # truncates the convolution Gaussian at 4 sigma
Explanation: The following code block sets up some parameters for our problem,
including the PerpleX model we will use to determine physical properties of the planet.
End of explanation
n_gridpoints = (501, 101)
min_grid_pressure = rock.bounds[0][0]
max_grid_pressure = rock.bounds[0][1]
min_grid_temperature = rock.bounds[1][0]
max_grid_temperature = rock.bounds[1][1]
grid_pressures = np.linspace(min_grid_pressure, max_grid_pressure, n_gridpoints[0])
grid_temperatures = np.linspace(min_grid_temperature, max_grid_temperature, n_gridpoints[1])
pp, TT = np.meshgrid(grid_pressures, grid_temperatures)
mesh_shape = pp.shape
pp = np.ndarray.flatten(pp)
TT = np.ndarray.flatten(TT)
grid_entropies = np.zeros_like(pp)
grid_volumes = np.zeros_like(pp)
grid_entropies, grid_volumes = layer.material.evaluate(['S', 'V'], pp, TT)
grid_entropies = grid_entropies.reshape(mesh_shape)
grid_volumes = grid_volumes.reshape(mesh_shape)
Explanation: The following code block reinterpolates the entropy and volume onto a grid for smoothing purposes.
As we're using a PerpleX table, we could just have read the original grid in directly, but this way, we could more easily adapt the code to use other BurnMan materials.
This step takes a few seconds (10--30 s on a fast ultrabook), but we only do it once.
End of explanation
# Define function to find an isentrope given a
# 2D entropy interpolation function
# Here we use fsolve, because we'll normally have a good starting guess
# from the previous pressure
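# The isentrope temperature T(P) is found by solving S_smoothed(P, T(P)) = S_target(P)
# at each pressure in turn, warm-starting fsolve with the previous solution.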
def interp_isentrope(interp, pressures, entropies, T_guess):
def _deltaS(args, S, P):
T = args[0]
return interp(P, T)[0] - S
sol = [T_guess]
temperatures = np.empty_like(pressures)
for i in range(len(pressures)):
sol = fsolve(_deltaS, sol, args=(entropies[i], pressures[i]))
temperatures[i] = sol[0]
return temperatures
def relaxed_profile(layer, pressure_stdev, temperature_stdev,
truncate):
# Having defined the grid and calculated unsmoothed properties,
# we now calculate the smoothed entropy and volume and derivatives with
# respect to pressure and temperature.
S_interps = burnman.tools.math.interp_smoothed_array_and_derivatives(array=grid_entropies,
x_values=grid_pressures,
y_values=grid_temperatures,
x_stdev=pressure_stdev,
y_stdev=temperature_stdev,
truncate=truncate)
interp_smoothed_S, interp_smoothed_dSdP, interp_smoothed_dSdT = S_interps
V_interps = burnman.tools.math.interp_smoothed_array_and_derivatives(array=grid_volumes,
x_values=grid_pressures,
y_values=grid_temperatures,
x_stdev=pressure_stdev,
y_stdev=temperature_stdev,
truncate=truncate)
interp_smoothed_V, interp_smoothed_dVdP, interp_smoothed_dVdT = V_interps
# Now we can calculate and plot the relaxed and smoothed properties along the isentrope
smoothed_temperatures = interp_isentrope(interp_smoothed_S, layer.pressure[::-1], layer.S[::-1], layer.temperature[-1])[::-1]
densities = layer.material.evaluate(['rho'], layer.pressure, smoothed_temperatures)[0]
volumes = np.array([interp_smoothed_V(p, T)[0] for (p, T) in zip(*[layer.pressure, smoothed_temperatures])])
dSdT = np.array([interp_smoothed_dSdT(p, T)[0] for (p, T) in zip(*[layer.pressure, smoothed_temperatures])])
dVdT = np.array([interp_smoothed_dVdT(p, T)[0] for (p, T) in zip(*[layer.pressure, smoothed_temperatures])])
dVdP = np.array([interp_smoothed_dVdP(p, T)[0] for (p, T) in zip(*[layer.pressure, smoothed_temperatures])])
alphas_relaxed = dVdT / volumes
compressibilities_relaxed = -dVdP / volumes
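# S and V are molar quantities, so densities[0]*volumes[0] is the (constant) molar
# mass in kg/mol; dividing by it converts the molar heat capacity T*dS/dT into a
# specific heat in J/K/kg.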
specific_heats_relaxed = smoothed_temperatures * dSdT / (densities[0]*volumes[0])
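# The temperature derivatives of the seismic velocities are estimated below by a
# centred finite difference (T +/- dT/2) about the smoothed isentrope.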
dT = 0.1
Vpsub, Vssub = layer.material.evaluate(['p_wave_velocity', 'shear_wave_velocity'],
layer.pressure, smoothed_temperatures-dT/2.)
Vpadd, Vsadd = layer.material.evaluate(['p_wave_velocity', 'shear_wave_velocity'],
layer.pressure, smoothed_temperatures+dT/2.)
Vps = (Vpadd + Vpsub)/2.
Vss = (Vsadd + Vssub)/2.
dVpdT = (Vpadd - Vpsub)/dT
dVsdT = (Vsadd - Vssub)/dT
depths = layer.outer_radius - layer.radii
return (smoothed_temperatures, layer.pressure, depths, layer.gravity, densities,
alphas_relaxed, compressibilities_relaxed, specific_heats_relaxed,
Vss, Vps, dVsdT, dVpdT)
flag_index = {'T': 0, 'P': 1, 'z': 2, 'g': 3, 'rho': 4,
'alpha': 5, 'beta_T': 6, 'Cp': 7,
'Vs': 8, 'Vp': 9, 'dVsdT': 10,
'dVpdT': 11}
def save_relaxed_properties(layer, P_GPa_gaussian, T_K_gaussian, outfile='isentrope_properties.txt'):
"""
A function to output smoothed, relaxed properties for use in ASPECT.
Columns: depth, pressure, temperature, density, gravity, thermal expansivity,
specific heat (J/K/kg), compressibility, Vs, Vp, dVs/dT, dVp/dT.
"""
# relaxed_profile returns a tuple whose order matches flag_index, so the tuple itself
# must not be reversed here. Instead, the row order is reversed when saving (below),
# so that the depth column is ascending, as ASPECT's ascii data format expects.
d = relaxed_profile(layer, P_GPa_gaussian*1.e9, T_K_gaussian, truncate)
np.savetxt(outfile, X=np.array([d[flag_index['z']],
d[flag_index['P']],
d[flag_index['T']],
d[flag_index['rho']],
d[flag_index['g']],
d[flag_index['alpha']],
d[flag_index['Cp']],
d[flag_index['beta_T']],
d[flag_index['Vs']],
d[flag_index['Vp']],
d[flag_index['dVsdT']],
d[flag_index['dVpdT']]]).T[::-1],  # rows reversed so the depth column is ascending
header=('# This ASPECT-compatible file contains material '
'properties calculated along an isentrope by the '
f'BurnMan software.\n# POINTS: {n_points}\n'
'# depth (m), pressure (Pa), temperature (K), '
'density (kg/m^3), gravity (m/s^2), '
'thermal expansivity (1/K), specific heat (J/K/kg), '
'compressibility (1/Pa), seismic Vs (m/s), '
'seismic Vp (m/s), seismic dVs/dT (m/s/K), '
'seismic dVp/dT (m/s/K)\n'
'depth pressure '
'temperature density '
'gravity thermal_expansivity '
'specific_heat compressibility '
'seismic_Vs seismic_Vp '
'seismic_dVs_dT seismic_dVp_dT'),
fmt='%.10e', delimiter='\t', comments='')
print('File saved to {0}'.format(outfile))
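# For reference, relaxed_profile can also be used directly together with flag_index,
# e.g. (illustrative Gaussian widths):
# d = relaxed_profile(layer, pressure_stdev=0.5e9, temperature_stdev=20., truncate=truncate)
# relaxed_cp = d[flag_index['Cp']]
# relaxed_alpha = d[flag_index['alpha']]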
Explanation: This long code block defines three functions (plus a small flag_index dictionary that maps property names to positions in the returned tuple). The functions:
- return the temperatures along an isentrope, given a 2D S(P, T) interpolator
- return the relaxed geodynamic properties along the adiabat
- save a file containing the relaxed properties in an ASPECT-compatible format
End of explanation
save_relaxed_properties(layer, P_GPa_gaussian=0.25, T_K_gaussian=0.25)
Explanation: Save file
End of explanation
global_PT_smooth = [None, None]
global_stored_properties = [None]
def plot_y(flag):
index = flag_index[flag]
def f(x, P_GPa_gaussian, T_K_gaussian):
if P_GPa_gaussian == global_PT_smooth[0] and T_K_gaussian == global_PT_smooth[1]:
pass
else:
global_PT_smooth[0] = P_GPa_gaussian
global_PT_smooth[1] = T_K_gaussian
f = relaxed_profile(layer, P_GPa_gaussian*1.e9, T_K_gaussian,
truncate)
global_stored_properties[0] = f
return global_stored_properties[0][index]
return f
plt.rcParams['figure.figsize'] = 8, 5 # inches
fig = plt.figure()
px, py = [2, 3]
depths = layer.outer_radius - layer.radii
gravity = layer.gravity
x = depths/1.e3
xlabel = 'Depths (km)'
ax_T = fig.add_subplot(px, py, 1)
ax_T.plot(x, layer.temperatures, label='unrelaxed')
ax_T.set_ylabel('Temperature (K)')
ax_T.set_xlabel(xlabel)
ax_g = fig.add_subplot(px, py, 2)
ax_g.plot(x, layer.gravity)
ax_g.set_ylabel('Gravity (m/s^2)')
ax_g.set_xlabel(xlabel)
ax_rho = fig.add_subplot(px, py, 3)
ax_rho.plot(x, layer.rho, label=r'$\rho$ (kg/m$^3$)')
ax_rho.plot(x, layer.v_p, label='Vp (m/s)')
ax_rho.plot(x, layer.v_s, label='Vs (m/s)')
ax_rho.legend(loc='lower right', prop={'size': 8})
ax_rho.set_ylabel('Densities/Velocities')
ax_rho.set_xlabel(xlabel)
ax_alpha = fig.add_subplot(px, py, 4)
ax_alpha.plot(x, layer.alpha)
ax_alpha.set_ylabel('alpha (/K)')
ax_alpha.set_xlabel(xlabel)
ax_beta = fig.add_subplot(px, py, 5)
ax_beta.plot(x, layer.beta_T)
ax_beta.set_ylabel('compressibilities (/Pa)')
ax_beta.set_xlabel(xlabel)
ax_cp = fig.add_subplot(px, py, 6)
ax_cp.plot(x, layer.C_p/layer.molar_mass)
ax_cp.set_ylabel('Cp (J/K/kg)')
ax_cp.set_xlabel(xlabel)
# Relaxed, unsmoothed properties
ax_T.plot(x, plot_y('T')(x, 0., 0.), label='relaxed, unsmoothed')
ax_g.plot(x, plot_y('g')(x, 0., 0.))
ax_alpha.plot(x, plot_y('alpha')(x, 0., 0.))
ax_beta.plot(x, plot_y('beta_T')(x, 0., 0.))
ax_cp.plot(x, plot_y('Cp')(x, 0., 0.))
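# Note: `interactive` and `iplt` are assumed to be provided by the interactive-figure
# setup cell near the top of the notebook (presumably something like
# `import mpl_interactions.ipyplot as iplt` plus an `interactive` flag); if that cell
# was not run, set interactive = False to fall back to the static branch below.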
if interactive:
# Interactive smoothing
P_GPa_gaussian = np.linspace(0., 3., 51)
T_K_gaussian = np.linspace(0., 30, 41)
controls = iplt.plot(x, plot_y('T'), P_GPa_gaussian=P_GPa_gaussian, T_K_gaussian=T_K_gaussian, ax=ax_T, label='relaxed, smoothed')
_ = iplt.plot(x, plot_y('g'), controls=controls, ax=ax_g)
_ = iplt.plot(x, plot_y('alpha'), controls=controls, ax=ax_alpha)
_ = iplt.plot(x, plot_y('beta_T'), controls=controls, ax=ax_beta)
_ = iplt.plot(x, plot_y('Cp'), controls=controls, ax=ax_cp)
else:
# Non-interactive smoothing
P_GPa_gaussian = 0.5
T_K_gaussian = 20.
ax_T.plot(x, plot_y('T')(x, P_GPa_gaussian, T_K_gaussian), label='relaxed, smoothed')
ax_g.plot(x, plot_y('g')(x, P_GPa_gaussian, T_K_gaussian))
ax_alpha.plot(x, plot_y('alpha')(x, P_GPa_gaussian, T_K_gaussian))
ax_beta.plot(x, plot_y('beta_T')(x, P_GPa_gaussian, T_K_gaussian))
ax_cp.plot(x, plot_y('Cp')(x, P_GPa_gaussian, T_K_gaussian))
ax_T.legend(loc='lower right',prop={'size':8})
fig.set_tight_layout(True)
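# Once suitable smoothing widths have been chosen (interactively or otherwise), a
# profile using those values can be written with the function defined earlier, e.g.
# (the output filename here is just an illustrative choice):
# save_relaxed_properties(layer, P_GPa_gaussian=0.5, T_K_gaussian=20.,
#                         outfile='isentrope_properties_smoothed.txt')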
Explanation: The following code block attempts to make updating the multipart interactive figure as efficient as possible.
It does this by computing all of the properties in an inner function and caching them in a module-level list (global_stored_properties). That inner function is wrapped by plot_y, which selects the property to return via flag_index. If the inner function is called again with the same smoothing parameters, the cached properties are reused; otherwise, everything is recalculated from the new parameters.
End of explanation |