markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
We need a first order system, so convert the second order system $$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$ into $$\left\{ \begin{array}{l} \dot u = v\\ \dot v = \ddot u = -\frac{ku}{m}\end{array} \right.$$ You then define a function that computes the right hand side of the above system:
|
def rhseqn(t, x, xdot):
""" we create rhs equations for the problem"""
xdot[0] = x[1]
xdot[1] = - k/m * x[0]
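# A minimal setup sketch for the names used in the cells below. k, m and
# initx are defined elsewhere in the original notebook, so the values here
# are illustrative assumptions only.
import numpy as np
import matplotlib.pyplot as plt
from scikits.odes import ode

k = 4.0             # spring constant (assumed example value)
m = 1.0             # mass (assumed example value)
initx = [1.0, 0.1]  # initial state [u(0), v(0)] (assumed example values)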
|
_____no_output_____
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
To solve the ODE you define an ode object, specify the solver to use, here cvode, and pass the right hand side function. You request the solution at specific timepoints by passing an array of times to the solve member.
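For reference, the `Exact` column printed below is the analytic solution $$u(t) = u_0 \cos(\omega t) + \frac{\dot u_0}{\omega} \sin(\omega t), \qquad \omega = \sqrt{k/m},$$ with `initx` holding $[u_0, \dot u_0]$; this is exactly the expression evaluated in the code.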
|
solver = ode('cvode', rhseqn, old_api=False)
solution = solver.solve([0., 1., 2.], initx)
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
|
t Solution Exact
------------------------------------
0 1 1
1 -0.370694 -0.370682
2 -0.691508 -0.691484
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
You can continue the solver by passing further times. Calling the solve routine reinitializes the solver, so you can restart at any time. To continue from the last computed solution, pass the last obtained time and solution. **Note:** The solver performs better if it can take history information into account, so avoid calling solve to continue a computation! In general, you must check for errors using the errors output of solve.
|
#Solve over the next hour by continuation
times = np.linspace(0, 3600, 61)
times[0] = solution.values.t[-1]
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
|
Error: Could not reach endpoint Error at time 24.5780834078
Computed Solutions:
t Solution Exact
------------------------------------
2 -0.691508 -0.691484
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
The solution fails at a time around 24 seconds. Errors can have many causes. Here, however, the reason is simple: the requested output times are spaced too far apart. Increasing the number of internal steps the solver is allowed to take between outputs fixes this. This is the **max_steps** option of cvode:
|
solver = ode('cvode', rhseqn, old_api=False, max_steps=5000)
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
|
Computed Solutions:
t Solution Exact
------------------------------------
2 -0.691508 -0.691484
60 0.843074 0.843212
120 0.372884 0.373054
180 -0.235749 -0.235745
240 -0.756553 -0.756932
300 -0.996027 -0.996814
360 -0.865262 -0.866242
420 -0.412897 -0.413742
480 0.192583 0.192521
540 0.726263 0.727236
600 0.989879 0.991682
660 0.885441 0.887581
720 0.452113 0.453622
780 -0.149122 -0.148921
840 -0.694753 -0.696119
900 -0.981996 -0.984613
960 -0.904317 -0.907187
1020 -0.490547 -0.492616
1080 0.105433 0.10503
1140 0.661996 0.663643
1200 0.972376 0.97562
1260 0.921229 0.925021
1320 0.527762 0.530648
1380 -0.0616772 -0.0609338
1440 -0.627855 -0.62987
1500 -0.960354 -0.964723
1560 -0.935939 -0.941048
1620 -0.563703 -0.567643
1680 0.0179025 0.0167187
1740 0.592593 0.594867
1800 0.946886 0.951941
1860 0.949087 0.955237
1920 0.59865 0.60353
1980 0.025868 0.0275291
2040 -0.556352 -0.558703
2100 -0.931643 -0.9373
2160 -0.960605 -0.96756
2220 -0.632579 -0.638239
2280 -0.0695761 -0.0717232
2340 0.51911 0.521447
2400 0.914706 0.920828
2460 0.97026 0.977994
2520 0.66523 0.6717
2580 0.113083 0.115777
2640 -0.48092 -0.483173
2700 -0.896022 -0.902558
2760 -0.978027 -0.986518
2820 -0.69649 -0.70385
2880 -0.156248 -0.159605
2940 0.441822 0.443956
3000 0.875509 0.882526
3060 0.983777 0.993115
3120 0.72638 0.734626
3180 0.199071 0.203121
3240 -0.401994 -0.403871
3300 -0.853534 -0.860769
3360 -0.987807 -0.997773
3420 -0.75483 -0.763966
3480 -0.241495 -0.246241
3540 0.361435 0.362997
3600 0.82981 0.837332
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
To plot the simple oscillator, we show a (t,x) plot of the solution. Doing this over 60 seconds can be done as follows:
|
# plot of the oscillator
solver = ode('cvode', rhseqn, old_api=False)
times = np.linspace(0,60,600)
solution = solver.solve(times, initx)
plt.plot(solution.values.t,[x[0] for x in solution.values.y])
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
|
_____no_output_____
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
You can refine the tolerances from their defaults to obtain more accurate solutions
|
options1= {'rtol': 1e-6, 'atol': 1e-12, 'max_steps': 50000} # default rtol and atol
options2= {'rtol': 1e-15, 'atol': 1e-25, 'max_steps': 50000}
solver1 = ode('cvode', rhseqn, old_api=False, **options1)
solver2 = ode('cvode', rhseqn, old_api=False, **options2)
solution1 = solver1.solve([0., 1., 60], initx)
solution2 = solver2.solve([0., 1., 60], initx)
print('\n t Solution1 Solution2 Exact')
print('-----------------------------------------------------')
for t, u1, u2 in zip(solution1.values.t, solution1.values.y, solution2.values.y):
print('{0:>4.0f} {1:15.8g} {2:15.8g} {3:15.8g}'.format(t, u1[0], u2[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
|
t Solution1 Solution2 Exact
-----------------------------------------------------
0 1 1 1
1 -0.37069371 -0.37068197 -0.37068197
60 0.8430298 0.84321153 0.84321153
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
Simple Oscillator Example: Stepwise running. When using the *solve* method, you solve over a time interval decided beforehand. In some problems you might instead want to decide, while solving, when to stop producing output. Then you use the *step* method. The same example as above can be solved with the step method as follows. You define the ode object, selecting the cvode solver. You initialize the solver with the start time and initial conditions using *init_step*. You compute solutions going forward with the *step* method.
|
solver = ode('cvode', rhseqn, old_api=False)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
time += 0.1
# fix roundoff error at end
if time > 60: time = 60
solution = solver.step(time)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if time >= 60:
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
|
_____no_output_____
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
The solver interpolates solutions to return the solution at the required output times:
|
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
|
plott length: 600 , last computation times: [58.60000000000056, 58.700000000000564, 58.800000000000566, 58.90000000000057, 59.00000000000057, 59.10000000000057, 59.20000000000057, 59.30000000000057, 59.400000000000574, 59.500000000000576, 59.60000000000058, 59.70000000000058, 59.80000000000058, 59.90000000000058, 60.0]
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
Simple Oscillator Example: Internal Solver Stepwise running. When using the *solve* method, you solve over a time interval decided beforehand. With the *step* method you solve by default towards a desired output time, after which you can continue solving the problem. For full control, you can also compute the problem using the solver's internal steps. This is not advised, as the number of returned steps can be very large, **slowing down** the computation enormously. If you want this nevertheless, you can achieve it with the *one_step_compute* option. Like this:
|
solver = ode('cvode', rhseqn, old_api=False, one_step_compute=True)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
solution = solver.step(60)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if solution.values.t >= 60:
#back up to 60
solver.set_options(one_step_compute=False)
solution = solver.step(60)
plott[-1] = solution.values.t
plotx[-1] = solution.values.y[0]
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
|
_____no_output_____
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
By inspecting the returned times you can see how efficiently the solver handles this problem:
|
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
|
plott length: 1328 , last computation times: [59.2297953049153, 59.28543421477497, 59.34107312463465, 59.39671203449432, 59.452350944353995, 59.50798985421367, 59.56362876407334, 59.61926767393302, 59.67490658379269, 59.730545493652365, 59.78618440351204, 59.84182331337171, 59.897462223231386, 59.95310113309106, 60.0]
|
BSD-3-Clause
|
ipython_examples/Simple Oscillator.ipynb
|
tinosulzer/odes
|
Siamese networks with TensorFlow 2.0/Keras. In this example, we'll implement a simple siamese network system, which verifies whether a pair of MNIST images is of the same class (true) or not (false). _This example is partially based on_ [https://github.com/keras-team/keras/blob/master/examples/mnist_siamese.py](https://github.com/keras-team/keras/blob/master/examples/mnist_siamese.py) Let's start with the imports.
|
import random
import numpy as np
import tensorflow as tf
|
_____no_output_____
|
MIT
|
Chapter10/siamese.ipynb
|
arifmudi/Advanced-Deep-Learning-with-Python
|
We'll continue with the `create_pairs` function, which creates a training dataset with an equal number of true/false pairs for each MNIST class.
|
def create_pairs(inputs: np.ndarray, labels: np.ndarray):
"""Create equal number of true/false pairs of samples"""
num_classes = 10
digit_indices = [np.where(labels == i)[0] for i in range(num_classes)]
pairs = list()
labels = list()
n = min([len(digit_indices[d]) for d in range(num_classes)]) - 1
for d in range(num_classes):
for i in range(n):
z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
pairs += [[inputs[z1], inputs[z2]]]
inc = random.randrange(1, num_classes)
dn = (d + inc) % num_classes
z1, z2 = digit_indices[d][i], digit_indices[dn][i]
pairs += [[inputs[z1], inputs[z2]]]
labels += [1, 0]
return np.array(pairs), np.array(labels, dtype=np.float32)
|
_____no_output_____
|
MIT
|
Chapter10/siamese.ipynb
|
arifmudi/Advanced-Deep-Learning-with-Python
|
Next, we'll define the base network of the siamese system:
|
def create_base_network():
"""The shared encoding part of the siamese network"""
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
])
|
_____no_output_____
|
MIT
|
Chapter10/siamese.ipynb
|
arifmudi/Advanced-Deep-Learning-with-Python
|
Next, let's load the regular MNIST training and validation sets and create true/false pairs out of them:
|
# Load the train and test MNIST datasets
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
x_train /= 255
x_test /= 255
input_shape = x_train.shape[1:]
# Create true/false training and testing pairs
train_pairs, tr_labels = create_pairs(x_train, y_train)
test_pairs, test_labels = create_pairs(x_test, y_test)
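# Quick sanity check of the generated pairs (an illustrative sketch): each
# entry holds two 28x28 images, and the labels alternate 1 (same class) / 0
# (different class).
print(train_pairs.shape, test_pairs.shape)
print(tr_labels[:6])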
|
_____no_output_____
|
MIT
|
Chapter10/siamese.ipynb
|
arifmudi/Advanced-Deep-Learning-with-Python
|
Then, we'll build the siamese system, which includes the `base_network`, the 2 siamese paths `encoder_a` and `encoder_b`, the `l1_dist` measure, and the combined `model`:
|
# Create the siamese network
# Start from the shared layers
base_network = create_base_network()
# Create first half of the siamese system
input_a = tf.keras.layers.Input(shape=input_shape)
# Note how we reuse the base_network in both halves
encoder_a = base_network(input_a)
# Create the second half of the siamese system
input_b = tf.keras.layers.Input(shape=input_shape)
encoder_b = base_network(input_b)
# Create the distance measure
l1_dist = tf.keras.layers.Lambda(
lambda embeddings: tf.keras.backend.abs(embeddings[0] - embeddings[1])) \
([encoder_a, encoder_b])
# Final fc layer with a single logistic output for the binary classification
flattened_weighted_distance = tf.keras.layers.Dense(1, activation='sigmoid') \
(l1_dist)
# Build the model
model = tf.keras.models.Model([input_a, input_b], flattened_weighted_distance)
|
_____no_output_____
|
MIT
|
Chapter10/siamese.ipynb
|
arifmudi/Advanced-Deep-Learning-with-Python
|
Finally, we can train the model and check the accuracy: in the run below, training accuracy reaches 99.37%, while validation accuracy reaches roughly 98%:
|
# Train
model.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit([train_pairs[:, 0], train_pairs[:, 1]], tr_labels,
batch_size=128,
epochs=20,
validation_data=([test_pairs[:, 0], test_pairs[:, 1]], test_labels))
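# A small usage sketch (an assumption, not shown in the original notebook):
# the trained model maps a pair of images to a similarity score in [0, 1],
# where values near 1 mean "same class".
scores = model.predict([test_pairs[:5, 0], test_pairs[:5, 1]])
print(scores.ravel(), test_labels[:5])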
|
Train on 108400 samples, validate on 17820 samples
Epoch 1/20
108400/108400 [==============================] - 5s 44us/sample - loss: 0.3328 - accuracy: 0.8540 - val_loss: 0.2435 - val_accuracy: 0.9184
Epoch 2/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.1612 - accuracy: 0.9409 - val_loss: 0.1672 - val_accuracy: 0.9465
Epoch 3/20
108400/108400 [==============================] - 4s 38us/sample - loss: 0.1096 - accuracy: 0.9611 - val_loss: 0.1221 - val_accuracy: 0.9625
Epoch 4/20
108400/108400 [==============================] - 4s 38us/sample - loss: 0.0824 - accuracy: 0.9712 - val_loss: 0.1052 - val_accuracy: 0.9667
Epoch 5/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0673 - accuracy: 0.9760 - val_loss: 0.0958 - val_accuracy: 0.9706
Epoch 6/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0542 - accuracy: 0.9808 - val_loss: 0.1054 - val_accuracy: 0.9689
Epoch 7/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0471 - accuracy: 0.9832 - val_loss: 0.0823 - val_accuracy: 0.9764
Epoch 8/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0410 - accuracy: 0.9853 - val_loss: 0.0769 - val_accuracy: 0.9769
Epoch 9/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0377 - accuracy: 0.9868 - val_loss: 0.0921 - val_accuracy: 0.9731
Epoch 10/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0326 - accuracy: 0.9887 - val_loss: 0.0920 - val_accuracy: 0.9744
Epoch 11/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0309 - accuracy: 0.9887 - val_loss: 0.0846 - val_accuracy: 0.9753
Epoch 12/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0283 - accuracy: 0.9898 - val_loss: 0.0902 - val_accuracy: 0.9742
Epoch 13/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0262 - accuracy: 0.9908 - val_loss: 0.0956 - val_accuracy: 0.9753
Epoch 14/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0228 - accuracy: 0.9918 - val_loss: 0.0820 - val_accuracy: 0.9781
Epoch 15/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0240 - accuracy: 0.9914 - val_loss: 0.0869 - val_accuracy: 0.9759
Epoch 16/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0225 - accuracy: 0.9921 - val_loss: 0.0754 - val_accuracy: 0.9794
Epoch 17/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0209 - accuracy: 0.9928 - val_loss: 0.0786 - val_accuracy: 0.9778
Epoch 18/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0207 - accuracy: 0.9929 - val_loss: 0.0797 - val_accuracy: 0.9787
Epoch 19/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0178 - accuracy: 0.9937 - val_loss: 0.0884 - val_accuracy: 0.9785
Epoch 20/20
108400/108400 [==============================] - 4s 37us/sample - loss: 0.0189 - accuracy: 0.9935 - val_loss: 0.0754 - val_accuracy: 0.9799
|
MIT
|
Chapter10/siamese.ipynb
|
arifmudi/Advanced-Deep-Learning-with-Python
|
Introduction
Juan Camilo Henao Londono
Exercise from [Hacker rank](https://www.hackerrank.com/challenges/write-a-function/problem)

Write a function
We add a Leap Day on February 29, almost every four years. The leap day is an extra, or intercalary, day and we add it to the shortest month of the year, February. In the Gregorian calendar three criteria must be taken into account to identify leap years:
* a year that can be evenly divided by 4 is a leap year, unless:
* it can also be evenly divided by 100, in which case it is NOT a leap year, unless:
* it is also evenly divisible by 400, in which case it is a leap year.

This means that in the Gregorian calendar, the years 2000 and 2400 are leap years, while 1800, 1900, 2100, 2200, 2300 and 2500 are NOT leap years.

**Task** You are given the year, and you have to write a function to check if the year is a leap year or not. Note that you only have to complete the function; the remaining code is given as a template.

**Input Format** Read y, the year that needs to be checked.

**Constraints**
* $1900 \le y \le 10^{5}$

**Output Format** Output is taken care of by the template. Your function must return a boolean value (```True```/```False```).

**Sample Input 0** ```1990```

**Sample Output 0** ```False```

**Explanation 0** 1990 is not a multiple of 4, hence it's not a leap year.

My implementation: if one of the conditions for a leap year is met, the function returns `True`; if not, it returns `False`. This implementation was used because the exercise gives a "template" to complete with the code.
|
def is_leap(year):
if (year%400 == 0 and year%100 == 0):
return True
elif (year%4 == 0 and year%100 != 0):
return True
return False
year = int(input())
print(is_leap(year))
|
2019
False
|
MIT
|
01_introduction/06_write_a_function.ipynb
|
juanhenao21/hack_rank
|
Best implementation in the discussion. From the discussion:
> you should know that doing something like what the setup for this challenge inclines you to do is a bad practice. Never do the following:
```python
def f():
    if condition:
        return True
    else:
        return False
```
|
def is_leap(year):
return year % 4 == 0 and (year % 400 == 0 or year % 100 != 0)
year = int(input())
print(is_leap(year))
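# Sanity check against the years listed in the problem statement
# (an illustrative sketch): 2000 and 2400 should be leap years, the rest not.
for y in (2000, 2400, 1800, 1900, 2100, 2200, 2300, 2500):
    print(y, is_leap(y))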
|
2019
False
|
MIT
|
01_introduction/06_write_a_function.ipynb
|
juanhenao21/hack_rank
|
BBoxerwGradCAM
This class forms bounding boxes (rectangle and polygon) using GradCAM outputs for a given image. The purpose of this class is to develop rectangle and polygon coordinates that define an object based on an image classification model. The 'automatic' creation of these coordinates, which are often included in COCO JSONs used to train object detection models, is valuable because data preparation and labeling can be a time-consuming task. This class takes 5 user inputs:
* **Pretrained Learner** (image classification model)
* **GradCAM Heatmap** (heatmap of the GradCAM object, formed by a pretrained image classification learner)
* **Source Image**
* **Image Resizing Scale** (also applied to the corresponding GradCAM heatmap)
* **BBOX Rectangle Resizing Scale**

*The class is compatible with Google Colab and other Python 3 environments.*
|
# Imports for loading learner and the GradCAM class
from fastai import *
from fastai.vision import *
from fastai.callbacks.hooks import *
import scipy.ndimage
|
_____no_output_____
|
MIT
|
BBOX_GRADCAM_demo_example.ipynb
|
zalkikar/BBOX_GradCAM
|
The following cell contains the widely used GradCAM class for pretrained image classification models (unedited).
|
#@title GradCAM Class
class GradCam():
@classmethod
def from_interp(cls,learn,interp,img_idx,ds_type=DatasetType.Valid,include_label=False):
# produce heatmap and xb_grad for pred label (and actual label if include_label is True)
if ds_type == DatasetType.Valid:
ds = interp.data.valid_ds
elif ds_type == DatasetType.Test:
ds = interp.data.test_ds
include_label=False
else:
return None
x_img = ds.x[img_idx]
xb,_ = interp.data.one_item(x_img)
xb_img = Image(interp.data.denorm(xb)[0])
probs = interp.preds[img_idx].numpy()
pred_idx = interp.pred_class[img_idx].item() # get class idx of img prediction label
hmap_pred,xb_grad_pred = get_grad_heatmap(learn,xb,pred_idx,size=xb_img.shape[-1])
prob_pred = probs[pred_idx]
actual_args=None
if include_label:
actual_idx = ds.y.items[img_idx] # get class idx of img actual label
if actual_idx!=pred_idx:
hmap_actual,xb_grad_actual = get_grad_heatmap(learn,xb,actual_idx,size=xb_img.shape[-1])
prob_actual = probs[actual_idx]
actual_args=[interp.data.classes[actual_idx],prob_actual,hmap_actual,xb_grad_actual]
return cls(xb_img,interp.data.classes[pred_idx],prob_pred,hmap_pred,xb_grad_pred,actual_args)
@classmethod
def from_one_img(cls,learn,x_img,label1=None,label2=None):
'''
learn: fastai's Learner
x_img: fastai.vision.image.Image
label1: generate heatmap according to this label. If None, this wil be the label with highest probability from the model
label2: generate additional heatmap according to this label
'''
pred_class,pred_idx,probs = learn.predict(x_img)
label1= str(pred_class) if not label1 else label1
xb,_ = learn.data.one_item(x_img)
xb_img = Image(learn.data.denorm(xb)[0])
probs = probs.numpy()
label1_idx = learn.data.classes.index(label1)
hmap1,xb_grad1 = get_grad_heatmap(learn,xb,label1_idx,size=xb_img.shape[-1])
prob1 = probs[label1_idx]
label2_args = None
if label2:
label2_idx = learn.data.classes.index(label2)
hmap2,xb_grad2 = get_grad_heatmap(learn,xb,label2_idx,size=xb_img.shape[-1])
prob2 = probs[label2_idx]
label2_args = [label2,prob2,hmap2,xb_grad2]
return cls(xb_img,label1,prob1,hmap1,xb_grad1,label2_args)
def __init__(self,xb_img,label1,prob1,hmap1,xb_grad1,label2_args=None):
self.xb_img=xb_img
self.label1,self.prob1,self.hmap1,self.xb_grad1 = label1,prob1,hmap1,xb_grad1
if label2_args:
self.label2,self.prob2,self.hmap2,self.xb_grad2 = label2_args
def plot(self,plot_hm=True,plot_gbp=True):
if not plot_hm and not plot_gbp:
plot_hm=True
cols = 5 if hasattr(self, 'label2') else 3
if not plot_gbp or not plot_hm:
cols-= 2 if hasattr(self, 'label2') else 1
fig,row_axes = plt.subplots(1,cols,figsize=(cols*5,5))
col=0
size=self.xb_img.shape[-1]
self.xb_img.show(row_axes[col]);col+=1
label1_title = f'1.{self.label1} {self.prob1:.3f}'
if plot_hm:
show_heatmap(self.hmap1,self.xb_img,size,row_axes[col])
row_axes[col].set_title(label1_title);col+=1
if plot_gbp:
row_axes[col].imshow(self.xb_grad1)
row_axes[col].set_axis_off()
row_axes[col].set_title(label1_title);col+=1
if hasattr(self, 'label2'):
label2_title = f'2.{self.label2} {self.prob2:.3f}'
if plot_hm:
show_heatmap(self.hmap2,self.xb_img,size,row_axes[col])
row_axes[col].set_title(label2_title);col+=1
if plot_gbp:
row_axes[col].imshow(self.xb_grad2)
row_axes[col].set_axis_off()
row_axes[col].set_title(label2_title)
# plt.tight_layout()
fig.subplots_adjust(wspace=0, hspace=0)
# fig.savefig('data_draw/both/gradcam.png')
def minmax_norm(x):
return (x - np.min(x))/(np.max(x) - np.min(x))
def scaleup(x,size):
scale_mult=size/x.shape[0]
upsampled = scipy.ndimage.zoom(x, scale_mult)
return upsampled
# hook for Gradcam
def hooked_backward(m,xb,target_layer,clas):
with hook_output(target_layer) as hook_a: #hook at last layer of group 0's output (after bn, size 512x7x7 if resnet34)
with hook_output(target_layer, grad=True) as hook_g: # gradient w.r.t to the target_layer
preds = m(xb)
preds[0,int(clas)].backward() # same as onehot backprop
return hook_a,hook_g
def clamp_gradients_hook(module, grad_in, grad_out):
for grad in grad_in:
torch.clamp_(grad, min=0.0)
# hook for guided backprop
def hooked_ReLU(m,xb,clas):
relu_modules = [module[1] for module in m.named_modules() if str(module[1]) == "ReLU(inplace)"]
with callbacks.Hooks(relu_modules, clamp_gradients_hook, is_forward=False) as _:
preds = m(xb)
preds[0,int(clas)].backward()
def guided_backprop(learn,xb,y):
xb = xb.cuda()
m = learn.model.eval();
xb.requires_grad_();
if not xb.grad is None:
xb.grad.zero_();
hooked_ReLU(m,xb,y);
return xb.grad[0].cpu().numpy()
def show_heatmap(hm,xb_im,size,ax=None):
if ax is None:
_,ax = plt.subplots()
xb_im.show(ax)
ax.imshow(hm, alpha=0.8, extent=(0,size,size,0),
interpolation='bilinear',cmap='magma');
def get_grad_heatmap(learn,xb,y,size):
'''
Main function to get hmap for heatmap and xb_grad for guided backprop
'''
xb = xb.cuda()
m = learn.model.eval();
target_layer = m[0][-1][-1] # last layer of group 0
hook_a,hook_g = hooked_backward(m,xb,target_layer,y)
target_act= hook_a.stored[0].cpu().numpy()
target_grad = hook_g.stored[0][0].cpu().numpy()
mean_grad = target_grad.mean(1).mean(1)
# hmap = (target_act*mean_grad[...,None,None]).mean(0)
hmap = (target_act*mean_grad[...,None,None]).sum(0)
hmap = np.where(hmap >= 0, hmap, 0)
xb_grad = guided_backprop(learn,xb,y) # (3,224,224)
#minmax norm the grad
xb_grad = minmax_norm(xb_grad)
hmap_scaleup = minmax_norm(scaleup(hmap,size)) # (224,224)
# multiply xb_grad and hmap_scaleup and switch axis
xb_grad = np.einsum('ijk, jk->jki',xb_grad, hmap_scaleup) #(224,224,3)
return hmap,xb_grad
|
_____no_output_____
|
MIT
|
BBOX_GRADCAM_demo_example.ipynb
|
zalkikar/BBOX_GradCAM
|
I connect to google drive (this notebook was made on google colab for GPU usage) and load my pretrained learner.
|
from google.colab import drive
drive.mount('/content/drive')
base_dir = '/content/drive/My Drive/fellowshipai-data/final_3_class_data_train_test_split'
def get_data(sz): # This function returns an ImageDataBunch with a given image size
return ImageDataBunch.from_folder(base_dir+'/', train='train', valid='valid', # 0% validation because we already formed our testing set
ds_tfms=get_transforms(), size=sz, num_workers=4).normalize(imagenet_stats) # Normalized, 4 workers (multiprocessing) - 64 batch size (default)
arch = models.resnet34
data = get_data(224)
learn = cnn_learner(data,arch,metrics=[error_rate,Precision(average='micro'),Recall(average='micro')],train_bn=True,pretrained=True).mixup()
learn.load('model-224sz-basicaugments-oversampling-mixup-dLRs')
example_image = '/content/drive/My Drive/fellowshipai-data/final_3_class_data_train_test_split/train/raw/00000015.jpg'
img = open_image(example_image)
gcam = GradCam.from_one_img(learn,img) # using the GradCAM class
gcam.plot(plot_gbp = False) # We care about the heatmap (which is overlayed on top of the original image inherently)
gcam_heatmap = gcam.hmap1 # This is a 2d array
|
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change "
|
MIT
|
BBOX_GRADCAM_demo_example.ipynb
|
zalkikar/BBOX_GradCAM
|
My pretrained learner correctly classified the image as raw with probability 0.996. Note that images with very low noise and accurate feature importances (as with the example image) are ideal for this approach. The learner is focusing on the steak in center view (heatmap pixels indicate feature importance).
|
from BBOXES_from_GRADCAM import BBoxerwGradCAM # load class from .py file
image_resizing_scale = [400,300]
bbox_scaling = [1,1,1,1]
bbox = BBoxerwGradCAM(learn,
gcam_heatmap,
example_image,
image_resizing_scale,
bbox_scaling)
for function in dir(bbox)[-18:]: print(function)
bbox.show_smoothheatmap()
bbox.show_contouredheatmap()
#bbox.show_bboxrectangle()
bbox.show_bboxpolygon()
bbox.show_bboxrectangle()
rect_coords, polygon_coords = bbox.get_bboxes()
rect_coords # x,y,w,h
polygon_coords
# IoU for object detection
def get_IoU(truth_coords, pred_coords):
pred_area = pred_coords[2]*pred_coords[3]
truth_area = truth_coords[2]*truth_coords[3]
# coords of intersection rectangle
x1 = max(truth_coords[0], pred_coords[0])
y1 = max(truth_coords[1], pred_coords[1])
x2 = min(truth_coords[2], pred_coords[2])
y2 = min(truth_coords[3], pred_coords[3])
# area of intersection rectangle
interArea = max(0, x2 - x1 + 1) * max(0, y2 - y1 + 1)
# area of prediction and truth rectangles
boxTruthArea = (truth_coords[2] - truth_coords[0] + 1) * (truth_coords[3] - truth_coords[1] + 1)
boxPredArea = (pred_coords[2] - pred_coords[0] + 1) * (pred_coords[3] - pred_coords[1] + 1)
# intersection over union
iou = interArea / float(boxTruthArea + boxPredArea - interArea)
return iou
get_IoU([80,40,240,180],rect_coords)
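# Worked example of get_IoU with two boxes given as [x1, y1, x2, y2] corner
# coordinates (an illustrative sketch; verify whether rect_coords uses corners
# or x, y, w, h before interpreting the call above):
print(get_IoU([0, 0, 10, 10], [5, 5, 15, 15]))  # 36 / 206, roughly 0.175 with the +1 pixel convention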
|
_____no_output_____
|
MIT
|
BBOX_GRADCAM_demo_example.ipynb
|
zalkikar/BBOX_GradCAM
|
Decision Tree ref: http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
|
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
# Load data
cancer = load_breast_cancer()
print(cancer.feature_names)
print(cancer.target_names)
x = cancer.data
y = cancer.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1)
model = DecisionTreeClassifier(min_samples_leaf=3)
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
accuracy = accuracy_score(y_test, y_pred)
num_correct_samples = accuracy_score(y_test, y_pred, normalize=False)
print(f'number of correct sample: {num_correct_samples}')
print(f'accuracy: {accuracy}')
|
['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']
['malignant' 'benign']
number of correct sample: 109
accuracy: 0.956140350877193
|
MIT
|
mod09.ipynb
|
theabc50111/machine_learning_tibame
|
Random Forest ref: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
|
from sklearn.ensemble import RandomForestClassifier
# Load data
cancer = load_breast_cancer()
x = cancer.data
y = cancer.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
model = RandomForestClassifier(max_depth=6, n_estimators=10)
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
accuracy = accuracy_score(y_test, y_pred)
num_correct_samples = accuracy_score(y_test, y_pred, normalize=False)
print('number of correct sample: {}'.format(num_correct_samples))
print('accuracy: {}'.format(accuracy))
|
number of correct sample: 112
accuracy: 0.9824561403508771
|
MIT
|
mod09.ipynb
|
theabc50111/machine_learning_tibame
|
Importing libraries and methods from the thermograms, ml_training and utilites modules
|
import numpy as np
print('Project MLaDECO')
print('Author: Viswambhar Yasa')
print('Software version: 0.1')
from sklearn.preprocessing import MinMaxScaler, StandardScaler
import tensorflow as tf
from tensorflow.keras import models
from thermograms.Utilities import Utilities
from ml_training.dataset_generation.fourier_transformation import fourier_transformation
from ml_training.dataset_generation.principal_componant_analysis import principal_componant_analysis
from utilites.segmentation_colormap_anno import segmentation_colormap_anno
from utilites.tolerance_maks_gen import tolerance_predicted_mask
import matplotlib.pyplot as plt
|
Project MLaDECO
Author: Viswambhar Yasa
Software version: 0.1
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Importing dataset for training
|
root_path = r'utilites/datasets'
data_file_name = r'metal_data.hdf5'
thermal_class = Utilities()
thermal_data,experiment_list=thermal_class.open_file(root_path, data_file_name,True)
experiment_name=r'2021-12-15-Materialstudie_Metallproben-ML3-laserbehandelte_Probe-1000W-10s'
experimental_data=thermal_data[experiment_name]
|
Experiments in the file
1 : 2021-12-15-Materialstudie_Metallproben-ML1-laserbehandelte_Probe-1000W-10s
2 : 2021-12-15-Materialstudie_Metallproben-ML2-laserbehandelte_Probe-1000W-10s
3 : 2021-12-15-Materialstudie_Metallproben-ML3-laserbehandelte_Probe-1000W-10s
A total of 3 experiments are loaded in file
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Checking the shape and file format of the thermographic experiment dataset
|
experimental_data
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Identifying the reflection phase index
|
input_data, reflection_st_index, reflection_end_index = fourier_transformation(experimental_data,
scaling_type='normalization', index=1)
from PIL import Image
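# Illustrative check (a sketch): the indices returned by fourier_transformation
# mark where the reflection phase starts and ends in the thermogram sequence.
print('reflection phase:', reflection_st_index, '->', reflection_end_index)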
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Performing data normalization to improve the learning ability of the machine learning model by scaling the data down to a smaller range
|
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import train_test_split
exp_data=np.array(experimental_data)
standardizing = StandardScaler()
std_output_data = standardizing.fit_transform(
exp_data.reshape(exp_data.shape[0], -1)).reshape(exp_data.shape)
normalizing = MinMaxScaler(feature_range=(0, 1))
nrm_output_data = normalizing.fit_transform(
exp_data.reshape(exp_data.shape[0], -1)).reshape(exp_data.shape)
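# Quick comparison of the value ranges produced by the two scalers
# (an illustrative sketch): standardization gives roughly zero mean and unit
# variance per feature, while min-max scaling maps values into [0, 1].
print('standardized:', std_output_data.min(), std_output_data.max())
print('min-max     :', nrm_output_data.min(), nrm_output_data.max())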
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Plotting thermograms after the Gaussian normalization and min-max scaling operations
|
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
im1 = ax1.imshow(std_output_data[:,:,400].astype(np.float32), cmap='RdYlBu_r', interpolation='None')
ax1.set_title('Gaussian distribution scaling')
divider = make_axes_locatable(ax1)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, cax=cax, orientation='vertical')
ax1.axis('off')
ax2 = fig.add_subplot(122)
im2 = ax2.imshow(nrm_output_data[:,:,400].astype(np.float32), cmap='RdYlBu_r', interpolation='None')
ax2.set_title('Min-Max Normalization')
ax2.axis('off')
divider = make_axes_locatable(ax2)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im2, cax=cax, orientation='vertical')
plt.savefig(r"Documents/temp/metal_scaling.png",dpi=600,bbox_inches='tight',transparent=True)
plt.imshow(std_output_data[:,:,400].astype(np.float32),cmap='RdYlBu_r')
plt.colorbar()
#experiment_name='2021-05-11 - Variantenvergleich - VarioTherm Halogenlampe - Winkel 30°'
experimental_data=np.array(thermal_data[experiment_name])
import tensorflow as tf
def grayscale_image(data):
"""
Creates a gray scale image dataset from the input data
Args:
data (numpy array): Thermograms
Returns:
(numpy array): Gray scale images
"""
#print(data.shape)
seq_data=np.zeros((data.shape))
for i in range(data.shape[-1]):
temp=np.expand_dims(data[:,:,i],axis=-1)
#print(temp.shape)
a_i=tf.keras.utils.array_to_img(temp).convert('L')
#a_i=array_to_img(temp).convert('L')
imgGray = tf.keras.utils.img_to_array(a_i)
#print(imgGray.shape)
seq_data[:,:,i]=np.squeeze(imgGray)
return seq_data
d=grayscale_image(experimental_data)
d.shape
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Extracting information from the gray-scale image and saving it in PNG format
|
from PIL import Image
from keras.preprocessing.image import array_to_img,img_to_array
data=experimental_data
plt.figure()
plt.imshow(d[:,:,500],cmap='RdYlBu_r')
#plt.imshow(std_output_data[:,:,250].astype(np.float64),cmap='gray')
plt.savefig("Documents/temp/metal_output.png")
plt.axis('off')
img=plt.imread('Documents/temp/metal_output.png')
img.shape
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Performing principal component analysis on the features by filtering intensity
|
EOFs=principal_componant_analysis(experimental_data)
img1 = Image.fromarray(EOFs[:,:,0].astype(np.int8))
#img2 = Image.fromarray(EOFs[:,:,0].astype(np.float32))
plt.imshow(np.squeeze(EOFs),cmap='YlOrRd_r')
plt.colorbar()
plt.savefig("Documents/temp/metal_PCA.png",dpi=600,bbox_inches='tight',transparent=True)
mask=np.zeros(shape=(np.squeeze(EOFs).shape))
mask[np.squeeze(EOFs) > 250]=1
mask[np.squeeze(EOFs) < -450]=1
plt.imsave('Documents/temp/metal_mask1.png',np.squeeze(mask),cmap='binary_r')
substrate=mask
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Final mask of the dataset
|
plt.imshow(mask)
import cv2
img1 = cv2.imread('Documents/temp/metal_mask1.png',0)
print(img1.shape)
|
(256, 256)
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Converting the image data to a NumPy array and scaling it to integer values based on the number of features
|
img1[img1==255]=1
plt.imshow(img1,cmap='binary_r')
plt.colorbar()
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Saving the segmentation mask
|
name='ml_training/dataset_generation/annots/'+experiment_name
np.save(name,img1)
ar=np.load(name+'.npy')
plt.imshow(ar,cmap='gray')
plt.colorbar()
|
_____no_output_____
|
MIT
|
ml_training/dataset_generation/masks/segmentation_mask_metal.ipynb
|
viswambhar-yasa/LaDECO
|
Hierarchical Clustering. **Hierarchical clustering** refers to a class of clustering methods that seek to build a **hierarchy** of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. Import packages.
|
from __future__ import print_function # to conform python 2.x print to python 3.x
import turicreate
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import time
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
%matplotlib inline
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Load the Wikipedia dataset
|
wiki = turicreate.SFrame('people_wiki.sframe/')
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
As we did in previous assignments, let's extract the TF-IDF features:
|
wiki['tf_idf'] = turicreate.text_analytics.tf_idf(wiki['text'])
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
To run k-means on this dataset, we should convert the data matrix into a sparse matrix.
|
from em_utilities import sframe_to_scipy # converter
# This will take about a minute or two.
wiki = wiki.add_row_number()
tf_idf, map_word_to_index = sframe_to_scipy(wiki, 'tf_idf')
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.
|
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Bipartition the Wikipedia dataset using k-means. Recall our workflow for clustering text data with k-means:
1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.
2. Extract the data matrix from the dataframe.
3. Run k-means on the data matrix with some value of k.
4. Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article).

Let us modify the workflow to perform bipartitioning:
1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.
2. Extract the data matrix from the dataframe.
3. Run k-means on the data matrix with k=2.
4. Divide the data matrix into two parts using the cluster assignments.
5. Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization.
6. Visualize the bipartition of data.

We'd like to be able to repeat Steps 3-6 multiple times to produce a **hierarchy** of clusters such as the following:
```
                      (root)
                         |
          +--------------+--------------+
          |                             |
       Cluster                       Cluster
    +-----+-----+                 +-----+-----+
    |           |                 |           |
 Cluster     Cluster           Cluster     Cluster
```
Each **parent cluster** is bipartitioned to produce two **child clusters**. At the very top is the **root cluster**, which consists of the entire dataset. Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster:
* `dataframe`: a subset of the original dataframe that corresponds to member rows of the cluster
* `matrix`: same set of rows, stored in sparse matrix format
* `centroid`: the centroid of the cluster (not applicable for the root cluster)

Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters).
|
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
'''cluster: should be a dictionary containing the following keys
* dataframe: original dataframe
* matrix: same data, in matrix format
* centroid: centroid for this particular cluster'''
data_matrix = cluster['matrix']
dataframe = cluster['dataframe']
# Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=1)
kmeans_model.fit(data_matrix)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
# Divide the data matrix into two parts using the cluster assignments.
data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
data_matrix[cluster_assignment==1]
# Divide the dataframe into two parts, again using the cluster assignments.
cluster_assignment_sa = turicreate.SArray(cluster_assignment) # minor format conversion
dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
dataframe[cluster_assignment_sa==1]
# Package relevant variables for the child clusters
cluster_left_child = {'matrix': data_matrix_left_child,
'dataframe': dataframe_left_child,
'centroid': centroids[0]}
cluster_right_child = {'matrix': data_matrix_right_child,
'dataframe': dataframe_right_child,
'centroid': centroids[1]}
return (cluster_left_child, cluster_right_child)
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
The following cell performs bipartitioning of the Wikipedia dataset. Allow 2+ minutes to finish. Note: for the purpose of the assignment, we set an explicit seed (`seed=1`) to produce identical outputs for every run. In practical applications, you might want to use different random seeds for all runs.
|
%%time
wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=1, seed=0)
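# Illustrative check (a sketch, not part of the assignment): the two children
# partition the rows of the parent cluster between them.
print(left_child['matrix'].shape, right_child['matrix'].shape)
print(len(left_child['dataframe']), len(right_child['dataframe']))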
|
/home/offbeat/Environments/turicreate/lib/python3.7/site-packages/sklearn/cluster/_kmeans.py:974: FutureWarning: 'n_jobs' was deprecated in version 0.23 and will be removed in 0.25.
" removed in 0.25.", FutureWarning)
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Let's examine the contents of one of the two clusters, which we call the `left_child`, referring to the tree visualization above.
|
left_child
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
And here is the content of the other cluster we named `right_child`.
|
right_child
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Visualize the bipartition. We provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with the highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.
|
def display_single_tf_idf_cluster(cluster, map_index_to_word):
    '''map_index_to_word: SFrame specifying the mapping between words and column indices'''
wiki_subset = cluster['dataframe']
tf_idf_subset = cluster['matrix']
centroid = cluster['centroid']
# Print top 5 words with largest TF-IDF weights in the cluster
idx = centroid.argsort()[::-1]
for i in range(5):
print('{0}:{1:.3f}'.format(map_index_to_word['category'], centroid[idx[i]])),
print('')
# Compute distances from the centroid to all data points in the cluster.
distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
# compute nearest neighbors of the centroid within the cluster.
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in range(8):
text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('')
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Let's visualize the two child clusters:
|
display_single_tf_idf_cluster(left_child, map_word_to_index)
display_single_tf_idf_cluster(right_child, map_word_to_index)
|
113949:0.040
113949:0.036
113949:0.029
113949:0.029
113949:0.028
* Todd Williams 0.95468
todd michael williams born february 13 1971 in syracuse new york is a former major league
baseball relief pitcher he attended east syracuseminoa high school
* Gord Sherven 0.95622
gordon r sherven born august 21 1963 in gravelbourg saskatchewan and raised in mankota sas
katchewan is a retired canadian professional ice hockey forward who played
* Justin Knoedler 0.95639
justin joseph knoedler born july 17 1980 in springfield illinois is a former major league
baseball catcherknoedler was originally drafted by the st louis cardinals
* Chris Day 0.95648
christopher nicholas chris day born 28 july 1975 is an english professional footballer who
plays as a goalkeeper for stevenageday started his career at tottenham
* Tony Smith (footballer, born 1957) 0.95653
anthony tony smith born 20 february 1957 is a former footballer who played as a central de
fender in the football league in the 1970s and
* Ashley Prescott 0.95761
ashley prescott born 11 september 1972 is a former australian rules footballer he played w
ith the richmond and fremantle football clubs in the afl between
* Leslie Lea 0.95802
leslie lea born 5 october 1942 in manchester is an english former professional footballer
he played as a midfielderlea began his professional career with blackpool
* Tommy Anderson (footballer) 0.95818
thomas cowan tommy anderson born 24 september 1934 in haddington is a scottish former prof
essional footballer he played as a forward and was noted for
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
The right cluster consists of athletes and artists (singers and actors/actresses), whereas the left cluster consists of non-athletes and non-artists. So far, we have a single-level hierarchy consisting of two clusters, as follows:
```
                      Wikipedia
                          +
                          |
    +---------------------+--------------------+
    |                                          |
    +                                          +
Non-athletes/artists                   Athletes/artists
```
Is this hierarchy good enough? **When building a hierarchy of clusters, we must keep our particular application in mind.** For instance, we might want to build a **directory** for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the `athletes/artists` and `non-athletes/artists` clusters.

Perform recursive bipartitioning. Cluster of athletes and artists. To help identify the clusters we've built so far, let's give them easy-to-read aliases:
|
non_athletes_artists = left_child
athletes_artists = right_child
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Using the bipartition function, we produce two child clusters of the athlete cluster:
|
# Bipartition the cluster of athletes and artists
left_child_athletes_artists, right_child_athletes_artists = bipartition(athletes_artists, maxiter=100, num_runs=6, seed=1)
|
/home/offbeat/Environments/turicreate/lib/python3.7/site-packages/sklearn/cluster/_kmeans.py:974: FutureWarning: 'n_jobs' was deprecated in version 0.23 and will be removed in 0.25.
" removed in 0.25.", FutureWarning)
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
The left child cluster mainly consists of athletes:
|
display_single_tf_idf_cluster(left_child_athletes_artists, map_word_to_index)
|
113949:0.054
113949:0.043
113949:0.038
113949:0.035
113949:0.030
* Tony Smith (footballer, born 1957) 0.94677
anthony tony smith born 20 february 1957 is a former footballer who played as a central de
fender in the football league in the 1970s and
* Justin Knoedler 0.94746
justin joseph knoedler born july 17 1980 in springfield illinois is a former major league
baseball catcherknoedler was originally drafted by the st louis cardinals
* Chris Day 0.94849
christopher nicholas chris day born 28 july 1975 is an english professional footballer who
plays as a goalkeeper for stevenageday started his career at tottenham
* Todd Williams 0.94882
todd michael williams born february 13 1971 in syracuse new york is a former major league
baseball relief pitcher he attended east syracuseminoa high school
* Todd Curley 0.95007
todd curley born 14 january 1973 is a former australian rules footballer who played for co
llingwood and the western bulldogs in the australian football league
* Ashley Prescott 0.95015
ashley prescott born 11 september 1972 is a former australian rules footballer he played w
ith the richmond and fremantle football clubs in the afl between
* Tommy Anderson (footballer) 0.95037
thomas cowan tommy anderson born 24 september 1934 in haddington is a scottish former prof
essional footballer he played as a forward and was noted for
* Leslie Lea 0.95065
leslie lea born 5 october 1942 in manchester is an english former professional footballer
he played as a midfielderlea began his professional career with blackpool
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
On the other hand, the right child cluster consists mainly of artists (singers and actors/actresses):
|
display_single_tf_idf_cluster(right_child_athletes_artists, map_word_to_index)
|
113949:0.045
113949:0.043
113949:0.035
113949:0.031
113949:0.031
* Alessandra Aguilar 0.93880
alessandra aguilar born 1 july 1978 in lugo is a spanish longdistance runner who specialis
es in marathon running she represented her country in the event
* Heather Samuel 0.93999
heather barbara samuel born 6 july 1970 is a retired sprinter from antigua and barbuda who
specialized in the 100 and 200 metres in 1990
* Viola Kibiwot 0.94037
viola jelagat kibiwot born december 22 1983 in keiyo district is a runner from kenya who s
pecialises in the 1500 metres kibiwot won her first
* Ayelech Worku 0.94052
ayelech worku born june 12 1979 is an ethiopian longdistance runner most known for winning
two world championships bronze medals on the 5000 metres she
* Krisztina Papp 0.94105
krisztina papp born 17 december 1982 in eger is a hungarian long distance runner she is th
e national indoor record holder over 5000 mpapp began
* Petra Lammert 0.94230
petra lammert born 3 march 1984 in freudenstadt badenwrttemberg is a former german shot pu
tter and current bobsledder she was the 2009 european indoor champion
* Morhad Amdouni 0.94231
morhad amdouni born 21 january 1988 in portovecchio is a french middle and longdistance ru
nner he was european junior champion in track and cross country
* Brian Davis (golfer) 0.94378
brian lester davis born 2 august 1974 is an english professional golferdavis was born in l
ondon he turned professional in 1994 and became a member
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Our hierarchy of clusters now looks like this:
```
                      Wikipedia
                          +
                          |
    +---------------------+--------------------+
    |                                          |
    +                                          +
Non-athletes/artists                   Athletes/artists
                                               +
                                               |
                                    +----------+----------+
                                    |                     |
                                    +                     +
                                 athletes              artists
```
Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, **we would like to achieve a similar level of granularity for all clusters.** Both the athletes and artists nodes can be subdivided more, as each one can be divided into more descriptive professions (singer/actress/painter/director, or baseball/football/basketball, etc.). Let's explore subdividing the athletes cluster further to produce finer child clusters. Let's give the clusters aliases as well:
|
athletes = left_child_athletes_artists
artists = right_child_athletes_artists
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Cluster of athletes. In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with the highest TF-IDF weights. Let us bipartition the cluster of athletes.
|
left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_athletes, map_word_to_index)
display_single_tf_idf_cluster(right_child_athletes, map_word_to_index)
|
113949:0.110
113949:0.102
113949:0.051
113949:0.046
113949:0.045
* Steve Springer 0.89370
steven michael springer born february 11 1961 is an american former professional baseball
player who appeared in major league baseball as a third baseman and
* Dave Ford 0.89622
david alan ford born december 29 1956 is a former major league baseball pitcher for the ba
ltimore orioles born in cleveland ohio ford attended lincolnwest
* Todd Williams 0.89829
todd michael williams born february 13 1971 in syracuse new york is a former major league
baseball relief pitcher he attended east syracuseminoa high school
* Justin Knoedler 0.90102
justin joseph knoedler born july 17 1980 in springfield illinois is a former major league
baseball catcherknoedler was originally drafted by the st louis cardinals
* Kevin Nicholson (baseball) 0.90619
kevin ronald nicholson born march 29 1976 is a canadian baseball shortstop he played part
of the 2000 season for the san diego padres of
* Joe Strong 0.90658
joseph benjamin strong born september 9 1962 in fairfield california is a former major lea
gue baseball pitcher who played for the florida marlins from 2000
* James Baldwin (baseball) 0.90691
james j baldwin jr born july 15 1971 is a former major league baseball pitcher he batted a
nd threw righthanded in his 11season career he
* James Garcia 0.90738
james robert garcia born february 3 1980 is an american former professional baseball pitch
er who played in the san francisco giants minor league system as
113949:0.048
113949:0.043
113949:0.041
113949:0.036
113949:0.034
* Todd Curley 0.94563
todd curley born 14 january 1973 is a former australian rules footballer who played for co
llingwood and the western bulldogs in the australian football league
* Tony Smith (footballer, born 1957) 0.94590
anthony tony smith born 20 february 1957 is a former footballer who played as a central de
fender in the football league in the 1970s and
* Chris Day 0.94605
christopher nicholas chris day born 28 july 1975 is an english professional footballer who
plays as a goalkeeper for stevenageday started his career at tottenham
* Jason Roberts (footballer) 0.94617
jason andre davis roberts mbe born 25 january 1978 is a former professional footballer and
now a football punditborn in park royal london roberts was
* Ashley Prescott 0.94618
ashley prescott born 11 september 1972 is a former australian rules footballer he played w
ith the richmond and fremantle football clubs in the afl between
* David Hamilton (footballer) 0.94910
david hamilton born 7 november 1960 is an english former professional association football
player who played as a midfielder he won caps for the england
* Richard Ambrose 0.94924
richard ambrose born 10 june 1972 is a former australian rules footballer who played with
the sydney swans in the australian football league afl he
* Neil Grayson 0.94944
neil grayson born 1 november 1964 in york is an english footballer who last played as a st
riker for sutton towngraysons first club was local
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
**Quiz Question**. Which diagram best describes the hierarchy right after splitting the `athletes` cluster? Refer to the quiz form for the diagrams. **Caution**. The granularity criteria is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters.* **If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster.** Thus, we may be misled if we judge the purity of clusters solely by their top documents and words. * **Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization.** We may need to subdivide further to discover new topics. For instance, subdividing the `ice_hockey_football` cluster led to the appearance of runners and golfers. Cluster of non-athletes Now let us subdivide the cluster of non-athletes.
|
%%time
# Bipartition the cluster of non-athletes
left_child_non_athletes_artists, right_child_non_athletes_artists = bipartition(non_athletes_artists, maxiter=100, num_runs=3, seed=1)
display_single_tf_idf_cluster(left_child_non_athletes_artists, map_word_to_index)
display_single_tf_idf_cluster(right_child_non_athletes_artists, map_word_to_index)
|
113949:0.039
113949:0.030
113949:0.023
113949:0.021
113949:0.015
* Madonna (entertainer) 0.96092
madonna louise ciccone tkoni born august 16 1958 is an american singer songwriter actress
and businesswoman she achieved popularity by pushing the boundaries of lyrical
* Janet Jackson 0.96153
janet damita jo jackson born may 16 1966 is an american singer songwriter and actress know
n for a series of sonically innovative socially conscious and
* Cher 0.96540
cher r born cherilyn sarkisian may 20 1946 is an american singer actress and television ho
st described as embodying female autonomy in a maledominated industry
* Laura Smith 0.96600
laura smith is a canadian folk singersongwriter she is best known for her 1995 single shad
e of your love one of the years biggest hits
* Natashia Williams 0.96677
natashia williamsblach born august 2 1978 is an american actress and former wonderbra camp
aign model who is perhaps best known for her role as shane
* Anita Kunz 0.96716
anita e kunz oc born 1956 is a canadianborn artist and illustratorkunz has lived in london
new york and toronto contributing to magazines and working
* Maggie Smith 0.96747
dame margaret natalie maggie smith ch dbe born 28 december 1934 is an english actress she
made her stage debut in 1952 and has had
* Lizzie West 0.96752
lizzie west born in brooklyn ny on july 21 1973 is a singersongwriter her music can be des
cribed as a blend of many genres including
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
The clusters are not as clear, but the left cluster has a tendency to show important female figures, and the right one to show politicians and government officials. Let's divide them further.
|
politicians_etc = left_child_non_athletes_artists
female_figures = right_child_non_athletes_artists
|
_____no_output_____
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
**Quiz Question**. Let us bipartition the clusters `female_figures` and `politicians`. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams.**Note**. Use `maxiter=100, num_runs=6, seed=1` for consistency of output.
|
left_female_figures, right_female_figures = bipartition(female_figures, maxiter=100, num_runs=6, seed=1)
left_politicians_etc, right_politicians_etc = bipartition(politicians_etc, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_female_figures, map_word_to_index)
display_single_tf_idf_cluster(right_female_figures, map_word_to_index)
display_single_tf_idf_cluster(left_politicians_etc, map_word_to_index)
display_single_tf_idf_cluster(right_politicians_etc, map_word_to_index)
|
113949:0.027
113949:0.023
113949:0.017
113949:0.016
113949:0.015
* Julian Knowles 0.96904
julian knowles is an australian composer and performer specialising in new and emerging te
chnologies his creative work spans the fields of composition for theatre dance
* Peter Combe 0.97080
peter combe born 20 october 1948 is an australian childrens entertainer and musicianmusica
l genre childrens musiche has had 22 releases including seven gold albums two
* Craig Pruess 0.97121
craig pruess born 1950 is an american composer musician arranger and gold platinum record
producer who has been living in britain since 1973 his career
* Ceiri Torjussen 0.97169
ceiri torjussen born 1976 is a composer who has contributed music to dozens of film and te
levision productions in the ushis music was described by
* Brenton Broadstock 0.97192
brenton broadstock ao born 1952 is an australian composerbroadstock was born in melbourne
he studied history politics and music at monash university and later composition
* Michael Peter Smith 0.97318
michael peter smith born september 7 1941 is a chicagobased singersongwriter rolling stone
magazine once called him the greatest songwriter in the english language he
* Marc Hoffman 0.97356
marc hoffman born april 16 1961 is a composer of concert music and music for film pianist
vocalist recording artist and music educator hoffman grew
* Tom Bancroft 0.97378
tom bancroft born 1967 london is a british jazz drummer and composer he began drumming age
d seven and started off playing jazz with his father
113949:0.124
113949:0.092
113949:0.015
113949:0.015
113949:0.014
* Janet Jackson 0.93374
janet damita jo jackson born may 16 1966 is an american singer songwriter and actress know
n for a series of sonically innovative socially conscious and
* Barbara Hershey 0.93507
barbara hershey born barbara lynn herzstein february 5 1948 once known as barbara seagull
is an american actress in a career spanning nearly 50 years
* Lauren Royal 0.93717
lauren royal born march 3 circa 1965 is a book writer from california royal has written bo
th historic and novelistic booksa selfproclaimed angels baseball fan
* Alexandra Potter 0.93802
alexandra potter born 1970 is a british author of romantic comediesborn in bradford yorksh
ire england and educated at liverpool university gaining an honors degree in
* Cher 0.93804
cher r born cherilyn sarkisian may 20 1946 is an american singer actress and television ho
st described as embodying female autonomy in a maledominated industry
* Madonna (entertainer) 0.93806
madonna louise ciccone tkoni born august 16 1958 is an american singer songwriter actress
and businesswoman she achieved popularity by pushing the boundaries of lyrical
* Jane Fonda 0.93836
jane fonda born lady jayne seymour fonda december 21 1937 is an american actress writer po
litical activist former fashion model and fitness guru she is
* Ellina Graypel 0.93927
ellina graypel born july 19 1972 is an awardwinning russian singersongwriter she was born
near the volga river in the heart of russia she spent
|
MIT
|
4-ml-clustering-and-retrieval/Week_6-assignment.ipynb
|
anilk991/Coursera-Machine-Learning-Specialization-UW
|
Kolmogorov's axioms 1) For every event the probability is a real number that is zero or positive - $P(A) \geq 0$ 2) The probability of the event (subset) corresponding to the sample space (the whole set) is 1 - $P(\Omega) = 1$ 3) The probability of the union of two events with no common elements is the sum of their individual probabilities - $A\cap B = \emptyset \rightarrow P(A\cup B) = P(A) + P(B)$ ----- Summary of probability properties 1) Probability of the empty set - $P(\emptyset) = 0$ 2) Probability of the complement - $P(A^\complement) = 1-P(A)$ 3) Inclusion-exclusion principle - $P(A\cup B) = P(A) + P(B) - P(A\cap B)$ 4) Law of total probability - $P(A) = \sum_i P(A,C_i)$ ---- Probability distributions - If the number of samples is finite: it is enough to give the probabilities of the elementary events - probability mass function: $P(\{a\})$ - If the number of samples is infinite - interval: when the sample space is the set of real numbers, an event is described by two numbers, a start point and an end point, $A = \{a < x \leq b\}$ 1) cumulative distribution function (cdf, $F(x)$) - $F(b) = F(a) + P(a,b) \rightarrow P(a,b) = F(b) - F(a)$ 2) probability density function (pdf): differentiate $F(x)$ and use the derivative, i.e. the slope - $p(x) = \frac{dF(x)}{dx}$ - relation between the cdf and the pdf $\rightarrow$ integration - $F(x) = \int^x_{-\infty}p(u)du$ Properties of the probability density function 1) Because the slope of the cdf can never be negative, the pdf is greater than or equal to 0 - $p(x) \geq 0$ 2) Integrating from $-\infty$ to $\infty$ gives the probability of the sample space $(-\infty, \infty)$, so the value is 1 - $\int^\infty_{-\infty} p(u)du = 1$ ---- Joint and conditional probability - joint probability - $P(A\cap B)$ or $P(A,B)$ - marginal probability - $P(A), P(B)$ - conditional probability (given that $B$ is known to be true) - $P(A|B)$ is the new $P(A)$ when $P(B) = 1$: $P(A|B) = \frac{P(A,B)}{P(B)} = \frac{P(A_{new})}{P(\Omega_{new})}$ - independence - two events $A$ and $B$ are defined to be independent when the joint probability satisfies $P(A,B) = P(A)P(B)$, equivalently $P(A) = P(A|B)$ ---- Cause and effect, evidence and inference, assumption and conditional conclusion - in the conditional probability $P(A|B)$ the events (claims/propositions) $B$ and $A$ can be read as 1) an assumption and the conditional conclusion that follows from it 2) a cause and its effect 3) evidence and the inference drawn from it $\rightarrow P(A,B) = P(A|B)P(B)$ : the probability that both $A$ and $B$ occur is the product of the probability that $B$ occurs and the probability that $A$ occurs given that $B$ has occurred --- Chain rule - $P(X_1,...,X_N) = P(X_1)\prod^N_{i=2}P(X_i|X_1,...,X_{i-1})$ ---- Random variable - a variable that produces a random number, e.g. a random box ---- JointProbabilityDistribution(variables, cardinality, values) - variables : names of the random variables, a list of strings; even a single random variable must be passed as a list - cardinality : a list with the number of samples (mutually exclusive events) of each random variable - values : a list of the (joint) probability values for every sample (combination) of the random variables
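As a quick, hedged illustration of the conditional-probability, chain-rule and independence formulas above, here is a plain-Python sketch (no pgmpy involved; the 2x2 joint table reuses the values that will be passed to `pxy` in the next cell):

```python
# Minimal worked example on a 2x2 joint table (same numbers as pxy below: [3, 9, 7, 1] / 20)
joint = {("X0", "Y0"): 3/20, ("X0", "Y1"): 9/20,
         ("X1", "Y0"): 7/20, ("X1", "Y1"): 1/20}

# Law of total probability: P(Y=Y0) = sum over X of P(X, Y0)
p_y0 = sum(p for (x, y), p in joint.items() if y == "Y0")        # 0.5

# Conditional probability: P(X=X0 | Y=Y0) = P(X0, Y0) / P(Y0)
p_x0_given_y0 = joint[("X0", "Y0")] / p_y0                       # 0.3

# Chain rule with two variables: P(X0, Y0) = P(X0 | Y0) * P(Y0)
assert abs(joint[("X0", "Y0")] - p_x0_given_y0 * p_y0) < 1e-12

# Independence check: P(X0, Y0) != P(X0) * P(Y0), so X and Y are NOT independent here
p_x0 = sum(p for (x, y), p in joint.items() if x == "X0")        # 0.6
print(joint[("X0", "Y0")], p_x0 * p_y0)                          # 0.15 vs 0.3
```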
|
from pgmpy.factors.discrete import JointProbabilityDistribution as JPD
px = JPD(['X'], [2], np.array([12, 8]) / 20)
print(px)
py = JPD(['Y'], [2], np.array([10, 10])/20)
print(py)
pxy = JPD(['X', 'Y'], [2, 2], np.array([3, 9, 7, 1])/20)
print(pxy)
pxy2 = JPD(['X', 'Y'], [2, 2], np.array([6, 6, 4, 4, ])/20)
print(pxy2)
|
+------+------+----------+
| X | Y | P(X,Y) |
+======+======+==========+
| X(0) | Y(0) | 0.3000 |
+------+------+----------+
| X(0) | Y(1) | 0.3000 |
+------+------+----------+
| X(1) | Y(0) | 0.2000 |
+------+------+----------+
| X(1) | Y(1) | 0.2000 |
+------+------+----------+
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
---- marginal_distribution() - computes the marginal probability distribution of the random variable(s) passed as the argument - i.e. "find the marginal probability of $X$" ---- - Compute the marginal probabilities $P(A), P(A^\complement)$ from the joint probability
|
pmx = pxy.marginal_distribution(['X'], inplace=False)
print(pmx)
|
+------+--------+
| X | P(X) |
+======+========+
| X(0) | 0.6000 |
+------+--------+
| X(1) | 0.4000 |
+------+--------+
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
marginalize() - marginalizes out the random variable(s) passed as the argument and returns the marginal distribution of the remaining random variables. - To obtain $X$, eliminate $Y$ (i.e. sum over its mutually exclusive events) ---- - Compute $P(A), P(A^\complement)$ from the joint probability
|
pmx = pxy.marginalize(['Y'], inplace=False)
print(pmx)
|
+------+--------+
| X | P(X) |
+======+========+
| X(0) | 0.6000 |
+------+--------+
| X(1) | 0.4000 |
+------+--------+
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
- Compute $P(B), P(B^\complement)$ from the joint probability
|
pmy = pxy.marginal_distribution(['Y'], inplace=False)
print(pmy)
py = pxy.marginalize(['X'], inplace=False)
print(py)
|
+------+--------+
| Y | P(Y) |
+======+========+
| Y(0) | 0.5000 |
+------+--------+
| Y(1) | 0.5000 |
+------+--------+
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
conditional_distribution() - computes conditional probability values under the condition that a given random variable takes a particular value --- - Compute the conditional probabilities $P(B|A), P(B^\complement|A)$ from the joint probability
|
py_on_x0 = pxy.conditional_distribution([('X', 0)], inplace=False)
print(py_on_x0)
|
+------+--------+
| Y | P(Y) |
+======+========+
| Y(0) | 0.2500 |
+------+--------+
| Y(1) | 0.7500 |
+------+--------+
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
- Compute the conditional probabilities $P(B|A^\complement), P(B^\complement|A^\complement)$ from the joint probability
|
py_on_x1 = pxy.conditional_distribution([('X', 1)], inplace=False)
print(py_on_x1)
|
+------+--------+
| Y | P(Y) |
+======+========+
| Y(0) | 0.8750 |
+------+--------+
| Y(1) | 0.1250 |
+------+--------+
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
- Compute the conditional probabilities $P(A|B), P(A^\complement|B)$ from the joint probability
|
px_on_y0 = pxy.conditional_distribution([('Y', 0)], inplace=False)
print(px_on_y0)
px_on_y1 = pxy.conditional_distribution([('Y', 1)], inplace=False)
print(px_on_y1)
|
+------+--------+
| X | P(X) |
+======+========+
| X(0) | 0.9000 |
+------+--------+
| X(1) | 0.1000 |
+------+--------+
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
check_independence() - checks whether two random variables are independent. * Independence: $P(A,B) = P(A)P(B)$ - In the independent case there is no need to compute the full joint probability table.
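For intuition, the same test can be done by hand: a minimal numpy sketch (not a pgmpy API, just an illustration) that compares the joint table with the product of its marginals, using the same values as `pxy` and `pxy2`:

```python
# Hand-rolled independence check: X and Y are independent iff P(X,Y) = P(X)P(Y)
# holds for every cell of the joint table (values below mirror pxy and pxy2).
import numpy as np

def is_independent(joint, atol=1e-12):
    """joint: 2D array of joint probabilities P(X=i, Y=j)."""
    px = joint.sum(axis=1, keepdims=True)   # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)   # marginal P(Y)
    return np.allclose(joint, px * py, atol=atol)

print(is_independent(np.array([[3, 9], [7, 1]]) / 20))   # False (pxy is not independent)
print(is_independent(np.array([[6, 6], [4, 4]]) / 20))   # True  (pxy2 factorizes)
```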
|
pxy.check_independence(['X'], ['Y'])
|
_____no_output_____
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
- Multiplying JointProbabilityDistribution objects together computes the joint probability under the assumption that the two distributions are independent
|
# X and Y are NOT independent in pxy: the product of the marginals px*py differs from pxy
print(px*py)
print(pxy)
# pxy2, in contrast, factorizes into its marginals, so this check returns True
pxy2.check_independence(['X'], ['Y'])
|
_____no_output_____
|
MIT
|
MATH/17_joint_probability_conditional_probability.ipynb
|
CATERINA-SEUL/Data-Science-School
|
This notebook preprocesses subject 8 for Question 1: Can we predict if the subject will select Gamble or Safebet *before* the button press time? Behavior data
|
## Explore behavior data using pandas
import pandas as pd
beh_dir = '../data/decision-making/data/data_behav'
# os.listdir(beh_dir)
# S08
beh8_df = pd.read_csv(os.path.join(beh_dir,'gamble.data.s08.csv'))
|
_____no_output_____
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
Choice.class will be our outcome variable
|
beh8_df.groupby('choice.class').nunique()
|
_____no_output_____
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
Great, we have 100 trials per choice: Gamble vs Safebet.
|
# This will be the outcome variable: beh8_df['choice.class']
y8 = beh8_df['choice.class'].values
|
_____no_output_____
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
Save y-data
|
mkdir ../data/decision-making/data/data_preproc
np.save('../data/decision-making/data/data_preproc/y8',y8)
ls ../data/decision-making/data/data_preproc
|
y8.npy
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
Neural data
|
sfreq = 1000
neur_dir = '../data/decision-making/data/data_ephys'
# os.listdir(neur_dir)
from scipy.io import loadmat
neur8 = loadmat(os.path.join(neur_dir, 's08_ofc_hg_events.mat'))
neur8['buttonpress_events_hg'].shape
%matplotlib inline
import matplotlib.pyplot as plt
# first electrode
plt.plot(neur8['buttonpress_events_hg'][:,:,0].T)
plt.axvline(1000, color='k')
pass
%matplotlib inline
import matplotlib.pyplot as plt
# second electrode
plt.plot(neur8['buttonpress_events_hg'][:,:,1].T)
plt.axvline(1000, color='k')
pass
%matplotlib inline
import matplotlib.pyplot as plt
# 10th electrode
plt.plot(neur8['buttonpress_events_hg'][:,:,-1].T)
plt.axvline(1000, color='k')
pass
|
_____no_output_____
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
Convert format of data to work for "decoding over time" For decoding over time, the data X is the epochs data of shape n_epochs x n_channels x n_times. Since the last dimension of X is time, an estimator will be fit at every time instant (see the sketch below).
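As a rough sketch of what this "decoding over time" looks like downstream (an illustration only: the classifier choice and cross-validation scheme are assumptions, and MNE's `SlidingEstimator` provides the same pattern ready-made), one estimator can be fit per time sample with scikit-learn:

```python
# Minimal sketch of decoding over time: fit one classifier per time sample
# and score it with cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_over_time(X, y, cv=5):
    """X: n_epochs x n_channels x n_times array, y: n_epochs labels."""
    n_epochs, n_channels, n_times = X.shape
    scores = np.empty(n_times)
    for t in range(n_times):
        # At each time instant the features are the channel values at that sample
        clf = LogisticRegression(max_iter=1000)
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores

# Example usage, once X8 (built two cells below) and y8 (from the behavior section) exist:
# accuracy_over_time = decode_over_time(X8, y8)
```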
|
neur8['buttonpress_events_hg'].shape
|
_____no_output_____
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
Notice that the current shape is n_epochs (200) x n_times (3000) x n_channels (10)
|
X8 = np.swapaxes(neur8['buttonpress_events_hg'],1,2)
X8.shape
|
_____no_output_____
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
Hooray, now it's n_epochs x n_channels x n_times. Save out X8
|
np.save('../data/decision-making/data/data_preproc/X8',X8)
|
_____no_output_____
|
MIT
|
Misc/.ipynb_checkpoints/preprocess-checkpoint.ipynb
|
dattasiddhartha/DataX-NeuralDecisionMaking
|
EOPF S2 MSI L1 A/B Products Data Structure Proposal
|
from IPython.display import IFrame
from utils import display
from EOProductDataStructure import EOProductBuilder, EOVariableBuilder, EOGroupBuilder
import yaml
|
_____no_output_____
|
Apache-2.0
|
eopf-notebooks/eopf_product_data_structure/EOPF_S2_MSI_L1AB.ipynb
|
CSC-DPR/notebooks
|
Helper function to populate group variables from the YAML description
|
def set_variables(yaml_group, eopf_group):
    variables = yaml_group['variables']
    for var in variables:
        variable = EOVariableBuilder(var, default_attrs=True)
        try:
            # Use the part of the YAML value after '->' as the dtype, when present
            variable.dtype = variables[var].split('->')[1]
        except IndexError:
            # No explicit dtype in the YAML entry: fall back to string
            variable.dtype = "string"
        # Dimension names are listed inside F(...) in the YAML value
        variable.dims = [d.split('->')[0] for d in variables[var].replace(' ','').replace('F(','').replace(')','').split(',')]
        eopf_group.variables.append(variable)
        for d in variable.dims:
            if d not in eopf_group.dims:
                eopf_group.dims.append(d)
|
_____no_output_____
|
Apache-2.0
|
eopf-notebooks/eopf_product_data_structure/EOPF_S2_MSI_L1AB.ipynb
|
CSC-DPR/notebooks
|
1. Read S2 MSI Product
|
path_product="data/s2_msi_l1ab.yaml"
product = None
with open(path_product, "r") as stream:
try:
product = yaml.safe_load(stream)['product']
except yaml.YAMLError as exc:
print(exc)
s2_msi = EOProductBuilder("S2_MSI_L1AB__", coords=EOGroupBuilder('coords'))
for key, values in product.items():
if key == "attributes":
s2_msi.attrs = values
else:
group = EOGroupBuilder(key)
if "groups" in values:
groups = values['groups']
for ygp in groups:
eopf_group = EOGroupBuilder(ygp)
set_variables(values['groups'][ygp], eopf_group)
group.groups.append(eopf_group)
else:
set_variables(values,group)
s2_msi.groups.append(group)
display(s2_msi.compute())
|
_____no_output_____
|
Apache-2.0
|
eopf-notebooks/eopf_product_data_structure/EOPF_S2_MSI_L1AB.ipynb
|
CSC-DPR/notebooks
|
Object Detection with SSD Here we demostrate detection on example images using SSD with PyTorch
|
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
import numpy as np
import cv2
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.FloatTensor')
from ssd import build_ssd
|
_____no_output_____
|
MIT
|
demo/demo.ipynb
|
joshnn/SSD.Pytorch
|
Build SSD300 in Test Phase 1. Build the architecture, specifying size of the input image (300), and number of object classes to score (21 for VOC dataset) 2. Next we load pretrained weights on the VOC0712 trainval dataset
|
net = build_ssd('test', 300, 21) # initialize SSD
net.load_weights('../weights/ssd300_VOC_28000.pth')
|
_____no_output_____
|
MIT
|
demo/demo.ipynb
|
joshnn/SSD.Pytorch
|
Load Image Here we just load a sample image from the VOC07 dataset
|
# image = cv2.imread('./data/example.jpg', cv2.IMREAD_COLOR) # uncomment if dataset not downloaded
%matplotlib inline
from matplotlib import pyplot as plt
from data import VOCDetection, VOC_ROOT, VOCAnnotationTransform
# here we specify year (07 or 12) and dataset ('test', 'val', 'train')
testset = VOCDetection(VOC_ROOT, [('2007', 'val')], None, VOCAnnotationTransform())
img_id = 60
image = testset.pull_image(img_id)
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# View the sampled input image before transform
plt.figure(figsize=(10,10))
plt.imshow(rgb_image)
plt.show()
|
_____no_output_____
|
MIT
|
demo/demo.ipynb
|
joshnn/SSD.Pytorch
|
Pre-process the input. Using the torchvision package, we can create a Compose of multiple built-in transform ops to apply. For SSD, at test time we use a custom BaseTransform callable to resize our image to 300x300, subtract the dataset's mean rgb values, and swap the color channels for input to SSD300.
|
x = cv2.resize(image, (300, 300)).astype(np.float32)
x -= (104.0, 117.0, 123.0)
x = x.astype(np.float32)
x = x[:, :, ::-1].copy()
plt.imshow(x)
x = torch.from_numpy(x).permute(2, 0, 1)
|
_____no_output_____
|
MIT
|
demo/demo.ipynb
|
joshnn/SSD.Pytorch
|
SSD Forward Pass Now just wrap the image in a Variable so it is recognized by PyTorch autograd
|
xx = Variable(x.unsqueeze(0)) # wrap tensor in Variable
if torch.cuda.is_available():
xx = xx.cuda()
y = net(xx)
|
_____no_output_____
|
MIT
|
demo/demo.ipynb
|
joshnn/SSD.Pytorch
|
Parse the Detections and View ResultsFilter outputs with confidence scores lower than a threshold Here we choose 60%
|
from data import VOC_CLASSES as labels
top_k=10
plt.figure(figsize=(10,10))
colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist()
plt.imshow(rgb_image) # plot the image for matplotlib
currentAxis = plt.gca()
detections = y.data
# scale each detection back up to the image
scale = torch.Tensor(rgb_image.shape[1::-1]).repeat(2)
for i in range(detections.size(1)):
j = 0
while detections[0,i,j,0] >= 0.6:
score = detections[0,i,j,0]
label_name = labels[i-1]
display_txt = '%s: %.2f'%(label_name, score)
pt = (detections[0,i,j,1:]*scale).cpu().numpy()
coords = (pt[0], pt[1]), pt[2]-pt[0]+1, pt[3]-pt[1]+1
color = colors[i]
currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor=color, linewidth=2))
currentAxis.text(pt[0], pt[1], display_txt, bbox={'facecolor':color, 'alpha':0.5})
j+=1
|
_____no_output_____
|
MIT
|
demo/demo.ipynb
|
joshnn/SSD.Pytorch
|
Data Attribute Recommendation - TechED 2020 INT260Getting started with the Python SDK for the Data Attribute Recommendation service. Business ScenarioWe will consider a business scenario involving product master data. The creation and maintenance of this product master data requires the careful manual selection of the correct categories for a given product from a pre-defined hierarchy of product categories.In this workshop, we will explore how to automate this tedious manual task with the Data Attribute Recommendation service. This workshop will cover: * Data Upload* Model Training and Deployment* Inference Requests We will work through a basic example of how to achieve these tasks using the [Python SDK for Data Attribute Recommendation](https://github.com/SAP/data-attribute-recommendation-python-sdk). *Note: if you are doing several runs of this notebook on a trial account, you may see errors stating 'The resource can no longer be used. Usage limit has been reached'. It can be beneficial to [clean up the service instance](Cleaning-up-a-service-instance) to free up limited trial resources acquired by an earlier run of the notebook. [Some limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) cannot be reset this way.* Table of Contents* [Exercise 01.1](Exercise-01.1) - Installing the SDK and preparing the service key * [Creating a service instance and key on BTP Trial](Creating-a-service-instance-and-key) * [Installing the SDK](Installing-the-SDK) * [Loading the service key into your Jupyter Notebook](Loading-the-service-key-into-your-Jupyter-Notebook)* [Exercise 01.2](Exercise-01.2) - Uploading the data* [Exercise 01.3](Exercise-01.3) - Training the model* [Exercise 01.4](Exercise-01.4) - Deploying the Model and predicting labels* [Resources](Resources) - Additional reading* [Cleaning up a service instance](Cleaning-up-a-service-instance) - Clean up all resources on the service instance* [Optional Exercises](Optional-Exercises) - Optional exercises RequirementsSee the [README in the Github repository for this workshop](https://github.com/SAP-samples/teched2020-INT260/blob/master/exercises/ex1-DAR/README.md). Exercise 01.1*Back to [table of contents](Table-of-Contents)*In exercise 01.1, we will install the SDK and prepare the service key. Creating a service instance and key on BTP Trial Please log in to your trial account: https://cockpit.eu10.hana.ondemand.com/trial/In the your global account screen, go to the "Boosters" tab:*Boosters are only available on the Trial landscape. If you are using a production environment, please follow this tutorial to manually [create a service instance and a service key](https://developers.sap.com/tutorials/cp-aibus-dar-service-instance.html)*. In the Boosters tab, enter "Data Attribute Recommendation" into the search box. Then, select theservice tile from the search results:  The resulting screen shows details of the booster pack. Here, click the "Start" button and wait a few seconds. Once the booster is finished, click the "go to Service Key" link to obtain your service key. Finally, download the key and save it to disk. Installing the SDK The Data Attribute Recommendation SDK is available from the Python package repository. It can be installed with the standard `pip` tool:
|
! pip install data-attribute-recommendation-sdk
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
*Note: If you are not using a Jupyter notebook, but instead a regular Python development environment, we recommend using a Python virtual environment to set up your development environment. Please see [the dedicated tutorial to learn how to install the SDK inside a Python virtual environment](https://developers.sap.com/tutorials/cp-aibus-dar-sdk-setup.html).* Loading the service key into your Jupyter Notebook Once you downloaded the service key from the Cockpit, upload it to your notebook environment. The service key must be uploaded to same directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` is stored.We first navigate to the file browser in Jupyter. On the top of your Jupyter notebook, right-click on the Jupyter logo and open in a new tab. **In the file browser, navigate to the directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` notebook file is stored. The service key must reside next to this file.**In the Jupyter file browser, click the **Upload** button (1). In the file selection dialog that opens, select the `defaultKey_*.json` file you downloaded previously from the SAP Cloud Platform Cockpit. Rename the file to `key.json`. Confirm the upload by clicking on the second **Upload** button (2). The service key contains your credentials to access the service. Please treat this as carefully as you would treat any password. We keep the service key as a separate file outside this notebook to avoid leaking the secret credentials.The service key is a JSON file. We will load this file once and use the credentials throughout this workshop.
|
# First, set up logging so we can see the actions performed by the SDK behind the scenes
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
from pprint import pprint # for nicer output formatting
import json
import os
if not os.path.exists("key.json"):
msg = "key.json is not found. Please follow instructions above to create a service key of"
msg += " Data Attribute Recommendation. Then, upload it into the same directory where"
msg += " this notebook is saved."
print(msg)
raise ValueError(msg)
with open("key.json") as file_handle:
key = file_handle.read()
SERVICE_KEY = json.loads(key)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
Summary Exercise 01.1In exercise 01.1, we have covered the following topics:* How to install the Python SDK for Data Attribute Recommendation* How to obtain a service key for the Data Attribute Recommendation service Exercise 01.2*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.2, we will upload our demo dataset to the service. The Dataset Obtaining the Data The dataset we use in this workshop is a CSV file containing product master data. The original data was released by BestBuy, a retail company, under an [open license](https://github.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sampledata-and-license). This makes it ideal for first experiments with the Data Attribute Recommendation service. The dataset can be downloaded directly from Github using the following command:
|
! wget -O bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv"
# If you receive a "command not found" error (i.e. on Windows), try curl instead of wget:
# ! curl -o bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv"
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
Let's inspect the data:
|
# if you are experiencing an import error here, run the following in a new cell:
# ! pip install pandas
import pandas as pd
df = pd.read_csv("bestBuy.csv")
df.head(5)
print()
print(f"Data has {df.shape[0]} rows and {df.shape[1]} columns.")
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
The CSV contains the several products. For each product, the description, the manufacturer and the price are given. Additionally, three levels of the products hierarchy are given.The first product, a set of AAA batteries, is located in the following place in the product hierarchy:```level1_category: Connected Home & Housewares |level2_category: Housewares |level3_category: Household Batteries``` We will use the Data Attribute Recommendation service to predict the categories for a given product based on its **description**, **manufacturer** and **price**. Creating the DatasetSchema We first have to describe the shape of our data by creating a DatasetSchema. This schema informs the service about the individual column types found in the CSV. We also describe which are the target columns used for training. These columns will be later predicted. In our case, these are the three category columns.The service currently supports three column types: **text**, **category** and **number**. For prediction, only **category** is currently supported.A DatasetSchema for the BestBuy dataset looks as follows:```json{ "features": [ {"label": "manufacturer", "type": "CATEGORY"}, {"label": "description", "type": "TEXT"}, {"label": "price", "type": "NUMBER"} ], "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ], "name": "bestbuy-category-prediction",}```We will now upload this DatasetSchema to the Data Attribute Recommendation service. The SDK provides the[`DataManagerClient.create_dataset_schema()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset_schema) method for this purpose.
|
from sap.aibus.dar.client.data_manager_client import DataManagerClient
dataset_schema = {
"features": [
{"label": "manufacturer", "type": "CATEGORY"},
{"label": "description", "type": "TEXT"},
{"label": "price", "type": "NUMBER"}
],
"labels": [
{"label": "level1_category", "type": "CATEGORY"},
{"label": "level2_category", "type": "CATEGORY"},
{"label": "level3_category", "type": "CATEGORY"}
],
"name": "bestbuy-category-prediction",
}
data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY)
response = data_manager.create_dataset_schema(dataset_schema)
dataset_schema_id = response["id"]
print()
print("DatasetSchema created:")
pprint(response)
print()
print(f"DatasetSchema ID: {dataset_schema_id}")
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
The API responds with the newly created DatasetSchema resource. The service assigned an ID to the schema. We save this ID in a variable, as we will need it when we upload the data. Uploading the Data to the service The [`DataManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient) class is also responsible for uploading data to the service. This data must fit to an existing DatasetSchema. After uploading the data, the service will validate the Dataset against the DataSetSchema in a background process. The data must be a CSV file which can optionally be `gzip` compressed.We will now upload our `bestBuy.csv` file, using the DatasetSchema which we created earlier.Data upload is a two-step process. We first create the Dataset using [`DataManagerClient.create_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset). Then we can upload data to the Dataset using the [`DataManagerClient.upload_data_to_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.upload_data_to_dataset) method.
|
dataset_resource = data_manager.create_dataset("my-bestbuy-dataset", dataset_schema_id)
dataset_id = dataset_resource["id"]
print()
print("Dataset created:")
pprint(dataset_resource)
print()
print(f"Dataset ID: {dataset_id}")
# Compress file first for a faster upload
! gzip -9 -c bestBuy.csv > bestBuy.csv.gz
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
Note that the data upload can take a few minutes. Please do not restart the process while the cell is still running.
|
# Open in binary mode.
with open('bestBuy.csv.gz', 'rb') as file_handle:
dataset_resource = data_manager.upload_data_to_dataset(dataset_id, file_handle)
print()
print("Dataset after data upload:")
print()
pprint(dataset_resource)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
Note that the Dataset status changed from `NO_DATA` to `VALIDATING`.Dataset validation is a background process. The status will eventually change from `VALIDATING` to `SUCCEEDED`.The SDK provides the [`DataManagerClient.wait_for_dataset_validation()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.wait_for_dataset_validation) method to poll for the Dataset validation.
|
dataset_resource = data_manager.wait_for_dataset_validation(dataset_id)
print()
print("Dataset after validation has finished:")
print()
pprint(dataset_resource)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
If the status is `FAILED` instead of `SUCCEEDED`, then the `validationMessage` will contain details about the validation failure. To better understand the Dataset lifecycle, refer to the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/a9b7429687a04e769dbc7955c6c44265.html). Summary Exercise 01.2In exercise 01.2, we have covered the following topics:* How to create a DatasetSchema* How to upload a Dataset to the serviceYou can find optional exercises related to exercise 01.2 [below](Optional-Exercises-for-01.2). Exercise 01.3*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.3, we will train the model. Training the Model The Dataset is now uploaded and has been validated successfully by the service.To train a machine learning model, we first need to select the correct model template. Selecting the right ModelTemplateThe Data Attribute Recommendation service currently supports two different ModelTemplates:| ID | Name | Description ||--------------------------------------|---------------------------|---------------------------------------------------------------------------|| d7810207-ca31-4d4d-9b5a-841a644fd81f | **Hierarchical template** | Recommended for the prediction of multiple classes that form a hierarchy. || 223abe0f-3b52-446f-9273-f3ca39619d2c | **Generic template** | Generic neural network for multi-label, multi-class classification. || 188df8b2-795a-48c1-8297-37f37b25ea00 | **AutoML template** | Finds the [best traditional machine learning model out of several traditional algorithms](https://blogs.sap.com/2021/04/28/how-does-automl-works-in-data-attribute-recommendation/). Single label only. |We are building a model to predict product hierarchies. The **Hierarchical Template** is correct for this scenario. In this template, the first label in the DatasetSchema is considered the top-level category. Each subsequent label is considered to be further down in the hierarchy. Coming back to our example DatasetSchema:```json{ "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ]}```The first defined label is `level1_category`, which is given more weight during training than `level3_category`.Refer to the [official documentation on ModelTemplates](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e76e8c636974a06967552c05d40e066.html) to learn more. Additional model templates may be added over time, so check back regularly. Starting the training When working with models, we use the [`ModelManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient) class.To start the training, we need the IDs of the dataset and the desired model template. We also have to provide a name for the model.The [`ModelManagerClient.create_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.create_job) method launches the training Job.*Only one model of a given name can exist. If you receive a message stating 'The model name specified is already in use', you either have to remove the job and its associated model first or you have to change the `model_name` variable name below. 
You can also [clean up the entire service instance](Cleaning-up-a-service-instance).*
|
from sap.aibus.dar.client.model_manager_client import ModelManagerClient
from sap.aibus.dar.client.exceptions import DARHTTPException
model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY)
model_template_id = "d7810207-ca31-4d4d-9b5a-841a644fd81f" # hierarchical template
model_name = "bestbuy-hierarchy-model"
job_resource = model_manager.create_job(model_name, dataset_id, model_template_id)
job_id = job_resource['id']
print()
print("Job resource:")
print()
pprint(job_resource)
print()
print(f"ID of submitted Job: {job_id}")
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
The job is now running in the background. Similar to the DatasetValidation, we have to poll the job until it succeeds.The SDK provides the [`ModelManagerClient.wait_for_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_job) method:
|
job_resource = model_manager.wait_for_job(job_id)
print()
print("Job resource after training is finished:")
pprint(job_resource)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
To better understand the Training Job lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/0fc40aa077ce4c708c1e5bfc875aa3be.html). IntermissionThe model training will take between 5 and 10 minutes.In the meantime, we can explore the available [resources](Resources) for both the service and the SDK. Inspecting the ModelOnce the training job is finished successfully, we can inspect the model using [`ModelManagerClient.read_model_by_name()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.read_model_by_name).
|
model_resource = model_manager.read_model_by_name(model_name)
print()
pprint(model_resource)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
In the model resource, the `validationResult` key provides information about model performance. You can also use these metrics to compare performance of different [ModelTemplates](Selecting-the-right-ModelTemplate) or different datasets. Summary Exercise 01.3In exercise 01.3, we have covered the following topics:* How to select the appropriate ModelTemplate* How to train a Model from a previously uploaded DatasetYou can find optional exercises related to exercise 01.3 [below](Optional-Exercises-for-01.3). Exercise 01.4*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.4, we will deploy the model and predict labels for some unlabeled data. Deploying the Model The training job has finished and the model is ready to be deployed. By deploying the model, we create a server process in the background on the Data Attribute Recommendation service which will serve inference requests.In the SDK, the [`ModelManagerClient.create_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlmodule-sap.aibus.dar.client.model_manager_client) method lets us create a Deployment.
|
deployment_resource = model_manager.create_deployment(model_name)
deployment_id = deployment_resource["id"]
print()
print("Deployment resource:")
print()
pprint(deployment_resource)
print(f"Deployment ID: {deployment_id}")
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
*Note: if you are using a trial account and you see errors such as 'The resource can no longer be used. Usage limit has been reached', consider [cleaning up the service instance](Cleaning-up-a-service-instance) to free up limited trial resources.* Similar to the data upload and the training job, model deployment is an asynchronous process. We have to poll the API until the Deployment is in status `SUCCEEDED`. The SDK provides the [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) for this purposes.
|
deployment_resource = model_manager.wait_for_deployment(deployment_id)
print()
print("Finished deployment resource:")
print()
pprint(deployment_resource)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
Once the Deployment is in status `SUCCEEDED`, we can run inference requests. To better understand the Deployment lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/f473b5b19a3b469e94c40eb27623b4f0.html). *For trial users: the deployment will be stopped after 8 hours. You can restart it by deleting the deployment and creating a new one for your model. The [`ModelManagerClient.ensure_deployment_exists()`](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) method will delete and re-create automatically. Then, you need to poll until the deployment is succeeded using [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) as above.* Executing Inference requests With a single inference request, we can send up to 50 objects to the service to predict the labels. The data send to the service must match the `features` section of the DatasetSchema created earlier. The `labels` defined inside of the DatasetSchema will be predicted for each object and returned as a response to the request.In the SDK, the [`InferenceClient.create_inference_request()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.inference_client.InferenceClient.create_inference_request) method handles submission of inference requests.
|
from sap.aibus.dar.client.inference_client import InferenceClient
inference = InferenceClient.construct_from_service_key(SERVICE_KEY)
objects_to_be_classified = [
{
"features": [
{"name": "manufacturer", "value": "Energizer"},
{"name": "description", "value": "Alkaline batteries; 1.5V"},
{"name": "price", "value": "5.99"},
],
},
]
inference_response = inference.create_inference_request(model_name, objects_to_be_classified)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
*Note: For trial accounts, you only have a limited number of objects which you can classify.* You can also try to come up with your own example:
|
my_own_items = [
{
"features": [
{"name": "manufacturer", "value": "EDIT THIS"},
{"name": "description", "value": "EDIT THIS"},
{"name": "price", "value": "0.00"},
],
},
]
inference_response = inference.create_inference_request(model_name, my_own_items)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
You can also classify multiple objects at once. For each object, the `top_n` parameter determines how many predictions are returned.
|
objects_to_be_classified = [
{
"objectId": "optional-identifier-1",
"features": [
{"name": "manufacturer", "value": "Energizer"},
{"name": "description", "value": "Alkaline batteries; 1.5V"},
{"name": "price", "value": "5.99"},
],
},
{
"objectId": "optional-identifier-2",
"features": [
{"name": "manufacturer", "value": "Eidos"},
{"name": "description", "value": "Unravel a grim conspiracy at the brink of Revolution"},
{"name": "price", "value": "19.99"},
],
},
{
"objectId": "optional-identifier-3",
"features": [
{"name": "manufacturer", "value": "Cadac"},
{"name": "description", "value": "CADAC Grill Plate for Safari Chef Grills: 12\""
+ "cooking surface; designed for use with Safari Chef grills;"
+ "105 sq. in. cooking surface; PTFE nonstick coating;"
+ " 2 grill surfaces"
},
{"name": "price", "value": "39.99"},
],
}
]
inference_response = inference.create_inference_request(model_name, objects_to_be_classified, top_n=3)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
We can see that the service now returns the `n-best` predictions for each label as indicated by the `top_n` parameter.In some cases, the predicted category has the special value `nan`. In the `bestBuy.csv` data set, not all records have the full set of three categories. Some records only have a top-level category. The model learns this fact from the data and will occasionally suggest that a record should not have a category.
|
# Inspect all video games with just a top-level category entry
video_games = df[df['level1_category'] == 'Video Games']
video_games.loc[df['level2_category'].isna() & df['level3_category'].isna()].head(5)
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
To learn how to execute inference calls without the SDK just using the underlying RESTful API, see [Inference without the SDK](Inference-without-the-SDK). Summary Exercise 01.4In exercise 01.4, we have covered the following topics:* How to deploy a previously trained model* How to execute inference requests against a deployed modelYou can find optional exercises related to exercise 01.4 [below](Optional-Exercises-for-01.4). Wrapping upIn this workshop, we looked into the following topics:* Installation of the Python SDK for Data Attribute Recommendation* Modelling data with a DatasetSchema* Uploading data into a Dataset* Training a model* Predicting labels for unlabelled dataUsing these tools, we are able to solve the problem of missing Master Data attributes starting from just a CSV file containing training data.Feel free to revisit the workshop materials at any time. The [resources](Resources) section below contains additional reading.If you would like to explore the additional capabilities of the SDK, visit the [optional exercises](Optional-Exercises) below. Cleanup During the course of the workshop, we have created several resources on the Data Attribute Recommendation Service:* DatasetSchema* Dataset* Job* Model* DeploymentThe SDK provides several methods to delete these resources. Note that there are dependencies between objects: you cannot delete a Dataset without deleting the Model beforehand.You will need to set `CLEANUP_SESSION = True` below to execute the cleanup.
|
# Clean up all resources created earlier
CLEANUP_SESSION = False
def cleanup_session():
model_manager.delete_deployment_by_id(deployment_id) # this can take a few seconds
model_manager.delete_model_by_name(model_name)
model_manager.delete_job_by_id(job_id)
data_manager.delete_dataset_by_id(dataset_id)
data_manager.delete_dataset_schema_by_id(dataset_schema_id)
print("DONE cleaning up!")
if CLEANUP_SESSION:
print("Cleaning up resources generated in this session.")
cleanup_session()
else:
print("Not cleaning up. Set 'CLEANUP_SESSION = True' above and run again!")
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
Resources*Back to [table of contents](Table-of-Contents)* SDK Resources* [SDK source code on Github](https://github.com/SAP/data-attribute-recommendation-python-sdk)* [SDK documentation](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/)* [How to obtain support](https://github.com/SAP/data-attribute-recommendation-python-sdk/blob/master/README.mdhow-to-obtain-support)* [Tutorials: Classify Data Records with the SDK for Data Attribute Recommendation](https://developers.sap.com/group.cp-aibus-data-attribute-sdk.html) Data Attribute Recommendation* [SAP Help Portal](https://help.sap.com/viewer/product/Data_Attribute_Recommendation/SHIP/en-US)* [API Reference](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.html)* [Tutorials using Postman - interact with the service RESTful API directly](https://developers.sap.com/mission.cp-aibus-data-attribute.html)* [Trial Account Limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html)* [Metering and Pricing](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e093326a2764c298759fcb92c5b0500.html) Addendum Inference without the SDK*Back to [table of contents](Table-of-Contents)* The Data Attribute Service exposes a RESTful API. The SDK we use in this workshop uses this API to interact with the DAR service.For custom integration, you can implement your own client for the API. The tutorial "[Use Machine Learning to Classify Data Records]" is a great way to explore the Data Attribute Recommendation API with the Postman REST client. Beyond the tutorial, the [API Reference] is a comprehensive documentation of the RESTful interface.[Use Machine Learning to Classify Data Records]: https://developers.sap.com/mission.cp-aibus-data-attribute.html[API Reference]: https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.htmlTo demonstrate the underlying API, the next example uses the `curl` command line tool to perform an inference request against the Inference API.The example uses the `jq` command to extract the credentials from the service. The authentication token is retrieved from the `uaa_url` and then used for the inference request.
|
# If the following example gives you errors that the jq or curl commands cannot be found,
# you may be able to install them from conda by uncommenting one of the lines below:
#%conda install -q jq
#%conda install -q curl
%%bash -s "$model_name" # Pass the python model_name variable as the first argument to shell script
model_name=$1
echo "Model: $model_name"
key=$(cat key.json)
url=$(echo $key | jq -r .url)
uaa_url=$(echo $key | jq -r .uaa.url)
clientid=$(echo $key | jq -r .uaa.clientid)
clientsecret=$(echo $key | jq -r .uaa.clientsecret)
echo "Service URL: $url"
token_url=${uaa_url}/oauth/token?grant_type=client_credentials
echo "Obtaining token with clientid $clientid from $token_url"
bearer_token=$(curl \
--silent --show-error \
--user $clientid:$clientsecret \
$token_url \
| jq -r .access_token
)
inference_url=${url}/inference/api/v3/models/${model_name}/versions/1
echo "Running inference request against endpoint $inference_url"
echo ""
# We pass the token in the Authorization header.
# The payload for the inference request is passed as
# the body of the POST request below.
# The output of the curl command is piped through `jq`
# for pretty-printing
curl \
--silent --show-error \
--header "Authorization: Bearer ${bearer_token}" \
--header "Content-Type: application/json" \
-XPOST \
${inference_url} \
-d '{
"objects": [
{
"features": [
{
"name": "manufacturer",
"value": "Energizer"
},
{
"name": "description",
"value": "Alkaline batteries; 1.5V"
},
{
"name": "price",
"value": "5.99"
}
]
}
]
}' | jq
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|