**Memory Stats**
import GPUtil

def memory_stats():
    for gpu in GPUtil.getGPUs():
        print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
            gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal))

memory_stats()
GPU RAM Free: 10721MB | Used: 267MB | Util 2% | Total 10988MB
GPU RAM Free: 10988MB | Used: 1MB | Util 0% | Total 10989MB
GPU RAM Free: 10988MB | Used: 1MB | Util 0% | Total 10989MB
GPU RAM Free: 10988MB | Used: 1MB | Util 0% | Total 10989MB
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
**Deterministic Measurements** These statements help make the experiments reproducible by fixing the random seeds. Even with fixed seeds, experiments are usually not reproducible across different PyTorch releases, commits, platforms, or between CPU and GPU executions. Please find more details in the PyTorch documentation: https://pytorch.org/docs/stable/notes/randomness.html
SEED = 0
t.manual_seed(SEED)
t.cuda.manual_seed(SEED)
t.backends.cudnn.deterministic = True
t.backends.cudnn.benchmark = False
np.random.seed(SEED)
random.seed(SEED)
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
**Loading Data** The dataset is structured in multiple small folders of 7 images each. This generator iterates through the folders and returns the category and 7 paths, one for each image in the folder. The paths are ordered, and the order is important since each folder contains 3 types of images: the first 5 are taken with acetic acid solution, and the last two are taken through a green lens and with iodine solution (a solution of a dark red color).
def sortByLastDigits(elem):
    chars = [c for c in elem if c.isdigit()]
    return 0 if len(chars) == 0 else int(''.join(chars))

def getImagesPaths(root_path):
    for class_folder in [root_path + f for f in listdir(root_path)]:
        category = int(class_folder[-1])
        for case_folder in listdir(class_folder):
            case_folder_path = class_folder + '/' + case_folder + '/'
            img_files = [case_folder_path + file_name for file_name in listdir(case_folder_path)]
            yield category, sorted(img_files, key = sortByLastDigits)
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
We define 3 datasets, which load 3 kinds of images: natural images, images taken through a green lens, and images where the doctor applied iodine solution (which gives a dark red color). Each dataset has dynamic and static transformations that can be applied to the data. The static transformations are applied when the dataset is initialized, while the dynamic ones are applied when loading each batch of data.
class SimpleImagesDataset(t.utils.data.Dataset):
    def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
        self.dataset = []
        self.transforms_x = transforms_x_dynamic
        self.transforms_y = transforms_y_dynamic
        for category, img_files in getImagesPaths(root_path):
            for i in range(5):
                img = pil.Image.open(img_files[i])
                if transforms_x_static != None:
                    img = transforms_x_static(img)
                if transforms_y_static != None:
                    category = transforms_y_static(category)
                self.dataset.append((img, category))

    def __getitem__(self, i):
        x, y = self.dataset[i]
        if self.transforms_x != None:
            x = self.transforms_x(x)
        if self.transforms_y != None:
            y = self.transforms_y(y)
        return x, y

    def __len__(self):
        return len(self.dataset)

class GreenLensImagesDataset(SimpleImagesDataset):
    def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
        self.dataset = []
        self.transforms_x = transforms_x_dynamic
        self.transforms_y = transforms_y_dynamic
        for category, img_files in getImagesPaths(root_path):
            # Only the green lens image
            img = pil.Image.open(img_files[-2])
            if transforms_x_static != None:
                img = transforms_x_static(img)
            if transforms_y_static != None:
                category = transforms_y_static(category)
            self.dataset.append((img, category))

class RedImagesDataset(SimpleImagesDataset):
    def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
        self.dataset = []
        self.transforms_x = transforms_x_dynamic
        self.transforms_y = transforms_y_dynamic
        for category, img_files in getImagesPaths(root_path):
            # Only the iodine solution image
            img = pil.Image.open(img_files[-1])
            if transforms_x_static != None:
                img = transforms_x_static(img)
            if transforms_y_static != None:
                category = transforms_y_static(category)
            self.dataset.append((img, category))
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
**Preprocess Data** Convert a PyTorch tensor to a NumPy array.
def to_numpy(x): return x.cpu().detach().numpy()
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
Data transformations for the test and training sets.
norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]

transforms_train = tv.transforms.Compose([
    tv.transforms.RandomAffine(degrees = 45, translate = None, scale = (1., 2.), shear = 30),
    # tv.transforms.CenterCrop(CROP_SIZE),
    tv.transforms.Resize(IMAGE_SIZE),
    tv.transforms.RandomHorizontalFlip(),
    tv.transforms.ToTensor(),
    tv.transforms.Lambda(lambda t: t.cuda()),
    tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])

transforms_test = tv.transforms.Compose([
    # tv.transforms.CenterCrop(CROP_SIZE),
    tv.transforms.Resize(IMAGE_SIZE),
    tv.transforms.ToTensor(),
    tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])

y_transform = tv.transforms.Lambda(lambda y: t.tensor(y, dtype=t.long, device = 'cuda:0'))
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
Initialize pytorch datasets and loaders for training and test.
def create_loaders(dataset_class):
    dataset_train = dataset_class(TRAIN_PATH,
        transforms_x_dynamic = transforms_train,
        transforms_y_dynamic = y_transform)
    dataset_test = dataset_class(TEST_PATH,
        transforms_x_static = transforms_test,
        transforms_x_dynamic = tv.transforms.Lambda(lambda t: t.cuda()),
        transforms_y_dynamic = y_transform)
    loader_train = DataLoader(dataset_train, BATCH_SIZE, shuffle = True, num_workers = 0)
    loader_test = DataLoader(dataset_test, BATCH_SIZE, shuffle = False, num_workers = 0)
    return loader_train, loader_test, len(dataset_train), len(dataset_test)

loader_train_simple_img, loader_test_simple_img, len_train, len_test = create_loaders(SimpleImagesDataset)
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
**Visualize Data** Load a few images so that we can see the effects of the data augmentation on the training set.
def plot_one_prediction(x, label, pred):
    x, label, pred = to_numpy(x), to_numpy(label), to_numpy(pred)
    x = np.transpose(x, [1, 2, 0])
    if x.shape[-1] == 1:
        x = x.squeeze()
    x = x * np.array(norm_std) + np.array(norm_mean)
    plt.title(label, color = 'green' if label == pred else 'red')
    plt.imshow(x)

def plot_predictions(imgs, labels, preds):
    fig = plt.figure(figsize = (20, 5))
    for i in range(20):
        fig.add_subplot(2, 10, i + 1, xticks = [], yticks = [])
        plot_one_prediction(imgs[i], labels[i], preds[i])

# x, y = next(iter(loader_train_simple_img))
# plot_predictions(x, y, y)
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
**Model** Define a few models to experiment with.
def get_mobilenet_v2():
    model = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
    model.classifier[1] = Linear(in_features=1280, out_features=4, bias=True)
    model = model.cuda()
    return model

def get_vgg_19():
    model = tv.models.vgg19(pretrained = True)
    model = model.cuda()
    model.classifier[6].out_features = 4
    return model

def get_res_next_101():
    model = t.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
    model.fc.out_features = 4
    model = model.cuda()
    return model

def get_resnet_18():
    model = tv.models.resnet18(pretrained = True)
    model.fc.out_features = 4
    model = model.cuda()
    return model

def get_dense_net():
    model = tv.models.densenet121(pretrained = True)
    model.classifier.out_features = 4
    model = model.cuda()
    return model

class MobileNetV2_FullConv(t.nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = get_mobilenet_v2().features
        self.cnn[18] = t.nn.Sequential(
            tv.models.mobilenet.ConvBNReLU(320, 32, kernel_size=1),
            t.nn.Dropout2d(p = .7)
        )
        self.fc = t.nn.Linear(32, 4)

    def forward(self, x):
        x = self.cnn(x)
        x = x.mean([2, 3])
        x = self.fc(x)
        return x

model_simple = t.nn.DataParallel(get_mobilenet_v2())
Using cache found in /root/.cache/torch/hub/pytorch_vision_master
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
**Train & Evaluate** Timer utility function. This is used to measure the execution speed.
time_start = 0

def timer_start():
    global time_start
    time_start = time.time()

def timer_end():
    return time.time() - time_start
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
This function trains the network and evaluates it at the same time. It outputs the metrics recorded during training for both the train and test sets. We measure accuracy and loss. The function also saves a checkpoint of the model every time the test accuracy improves, so in the end we will have a checkpoint of the model that gave the best accuracy.
def train_eval(optimizer, model, loader_train, loader_test, chekpoint_name, epochs):
    metrics = {
        'losses_train': [], 'losses_test': [],
        'acc_train': [], 'acc_test': [],
        'prec_train': [], 'prec_test': [],
        'rec_train': [], 'rec_test': [],
        'f_score_train': [], 'f_score_test': []
    }
    best_acc = 0
    loss_fn = t.nn.CrossEntropyLoss()
    try:
        for epoch in range(epochs):
            timer_start()
            train_epoch_loss, train_epoch_acc, train_epoch_precision, train_epoch_recall, train_epoch_f_score = 0, 0, 0, 0, 0
            test_epoch_loss, test_epoch_acc, test_epoch_precision, test_epoch_recall, test_epoch_f_score = 0, 0, 0, 0, 0
            # Train
            model.train()
            for x, y in loader_train:
                y_pred = model.forward(x)
                loss = loss_fn(y_pred, y)
                loss.backward()
                optimizer.step()
                # memory_stats()
                optimizer.zero_grad()
                y_pred, y = to_numpy(y_pred), to_numpy(y)
                pred = y_pred.argmax(axis = 1)
                ratio = len(y) / len_train
                train_epoch_loss += (loss.item() * ratio)
                train_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio)
                precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
                train_epoch_precision += (precision * ratio)
                train_epoch_recall += (recall * ratio)
                train_epoch_f_score += (f_score * ratio)
            metrics['losses_train'].append(train_epoch_loss)
            metrics['acc_train'].append(train_epoch_acc)
            metrics['prec_train'].append(train_epoch_precision)
            metrics['rec_train'].append(train_epoch_recall)
            metrics['f_score_train'].append(train_epoch_f_score)
            # Evaluate
            model.eval()
            with t.no_grad():
                for x, y in loader_test:
                    y_pred = model.forward(x)
                    loss = loss_fn(y_pred, y)
                    y_pred, y = to_numpy(y_pred), to_numpy(y)
                    pred = y_pred.argmax(axis = 1)
                    ratio = len(y) / len_test
                    test_epoch_loss += (loss * ratio)
                    test_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio)
                    precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
                    test_epoch_precision += (precision * ratio)
                    test_epoch_recall += (recall * ratio)
                    test_epoch_f_score += (f_score * ratio)
            metrics['losses_test'].append(test_epoch_loss)
            metrics['acc_test'].append(test_epoch_acc)
            metrics['prec_test'].append(test_epoch_precision)
            metrics['rec_test'].append(test_epoch_recall)
            metrics['f_score_test'].append(test_epoch_f_score)
            if metrics['acc_test'][-1] > best_acc:
                best_acc = metrics['acc_test'][-1]
                t.save({'model': model.state_dict()}, 'checkpint {}.tar'.format(chekpoint_name))
            print('Epoch {} acc {} prec {} rec {} f {} minutes {}'.format(
                epoch + 1, metrics['acc_test'][-1], metrics['prec_test'][-1], metrics['rec_test'][-1],
                metrics['f_score_test'][-1], timer_end() / 60))
    except KeyboardInterrupt as e:
        print(e)
        print('Ended training')
    return metrics
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
Plot a metric for both train and test.
def plot_train_test(train, test, title, y_title):
    plt.plot(range(len(train)), train, label = 'train')
    plt.plot(range(len(test)), test, label = 'test')
    plt.xlabel('Epochs')
    plt.ylabel(y_title)
    plt.title(title)
    plt.legend()
    plt.show()
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
Plot precision - recall curve
def plot_precision_recall(metrics):
    plt.scatter(metrics['prec_train'], metrics['rec_train'], label = 'train')
    plt.scatter(metrics['prec_test'], metrics['rec_test'], label = 'test')
    plt.legend()
    plt.title('Precision-Recall')
    plt.xlabel('Precision')
    plt.ylabel('Recall')
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
Train a model for several epochs. The steps_learning parameter is a list of tuples; each tuple specifies the number of epochs and the learning rate to use for them.
def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning):
    for steps, learn_rate in steps_learning:
        metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0),
                             model, loader_train, loader_test, checkpoint_name, steps)
        print('Best test accuracy :', max(metrics['acc_test']))
        plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate))
        plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate))
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
Perform actual training.
def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning):
    t.cuda.empty_cache()
    for steps, learn_rate in steps_learning:
        metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0),
                             model, loader_train, loader_test, checkpoint_name, steps)
        index_max = np.array(metrics['acc_test']).argmax()
        print('Best test accuracy :', metrics['acc_test'][index_max])
        print('Corresponding precision :', metrics['prec_test'][index_max])
        print('Corresponding recall :', metrics['rec_test'][index_max])
        print('Corresponding f1 score :', metrics['f_score_test'][index_max])
        plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate), 'Loss')
        plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate), 'Accuracy')
        plot_train_test(metrics['prec_train'], metrics['prec_test'], 'Precision (lr = {})'.format(learn_rate), 'Precision')
        plot_train_test(metrics['rec_train'], metrics['rec_test'], 'Recall (lr = {})'.format(learn_rate), 'Recall')
        plot_train_test(metrics['f_score_train'], metrics['f_score_test'], 'F1 Score (lr = {})'.format(learn_rate), 'F1 Score')
        plot_precision_recall(metrics)

do_train(model_simple, loader_train_simple_img, loader_test_simple_img, 'simple_1', [(50, 1e-4)])

# checkpoint = t.load('/content/checkpint simple_1.tar')
# model_simple.load_state_dict(checkpoint['model'])
_____no_output_____
MIT
Mobilenetv2 Tuning/MobileNetV2 Baseline.ipynb
vlad-danaila/Mobilenetv2_Ensemble_for_Cervical_Precancerous_Lesions_Classification
**graphblas.matrix_multiply** This example will go over how to use the `--graphblas-lower` pass from `graphblas-opt` to lower the `graphblas.matrix_multiply` op. Let's first import some necessary modules and generate an instance of our JIT engine.
import mlir_graphblas
import mlir_graphblas.sparse_utils
import numpy as np

engine = mlir_graphblas.MlirJitEngine()
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
Here are the passes we'll use.
passes = [
    "--graphblas-lower",
    "--sparsification",
    "--sparse-tensor-conversion",
    "--linalg-bufferize",
    "--func-bufferize",
    "--tensor-bufferize",
    "--tensor-constant-bufferize",
    "--finalizing-bufferize",
    "--convert-linalg-to-loops",
    "--convert-scf-to-std",
    "--convert-std-to-llvm",
]
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
Similar to our examples using the GraphBLAS dialect, we'll need some helper functions to convert sparse tensors to dense tensors. We'll also need some helpers to convert our sparse matrices to CSC format.
mlir_text = """ #trait_densify_csr = { indexing_maps = [ affine_map<(i,j) -> (i,j)>, affine_map<(i,j) -> (i,j)> ], iterator_types = ["parallel", "parallel"] } #CSR64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (i,j)>, pointerBitWidth = 64, indexBitWidth = 64 }> func @csr_densify4x4(%argA: tensor<4x4xf64, #CSR64>) -> tensor<4x4xf64> { %output_storage = constant dense<0.0> : tensor<4x4xf64> %0 = linalg.generic #trait_densify_csr ins(%argA: tensor<4x4xf64, #CSR64>) outs(%output_storage: tensor<4x4xf64>) { ^bb(%A: f64, %x: f64): linalg.yield %A : f64 } -> tensor<4x4xf64> return %0 : tensor<4x4xf64> } #trait_densify_csc = { indexing_maps = [ affine_map<(i,j) -> (j,i)>, affine_map<(i,j) -> (i,j)> ], iterator_types = ["parallel", "parallel"] } #CSC64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (j,i)>, pointerBitWidth = 64, indexBitWidth = 64 }> func @csc_densify4x4(%argA: tensor<4x4xf64, #CSC64>) -> tensor<4x4xf64> { %output_storage = constant dense<0.0> : tensor<4x4xf64> %0 = linalg.generic #trait_densify_csc ins(%argA: tensor<4x4xf64, #CSC64>) outs(%output_storage: tensor<4x4xf64>) { ^bb(%A: f64, %x: f64): linalg.yield %A : f64 } -> tensor<4x4xf64> return %0 : tensor<4x4xf64> } func @convert_csr_to_csc(%sparse_tensor: tensor<?x?xf64, #CSR64>) -> tensor<?x?xf64, #CSC64> { %answer = graphblas.convert_layout %sparse_tensor : tensor<?x?xf64, #CSR64> to tensor<?x?xf64, #CSC64> return %answer : tensor<?x?xf64, #CSC64> } """
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
Let's compile our MLIR code.
engine.add(mlir_text, passes)
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
**Overview of graphblas.matrix_multiply**

Here, we'll show how to use the `graphblas.matrix_multiply` op. `graphblas.matrix_multiply` takes a sparse matrix operand in CSR format, a sparse matrix operand in CSC format, and a `semiring` attribute. The single `semiring` attribute indicates an element-wise operator and an aggregation operator. For example, the plus-times semiring indicates an element-wise operator of multiplication and an aggregation operator of addition/summation. For more details about semirings, see [here](https://en.wikipedia.org/wiki/GraphBLAS).

`graphblas.matrix_multiply` applies the semiring's element-wise operator and aggregation operator in matrix-multiply order over the two given sparse matrices. For example, using `graphblas.matrix_multiply` with the plus-times semiring will get a matrix that is the result of a conventional matrix multiply.

Here's an example use of the `graphblas.matrix_multiply` op:

```
%answer = graphblas.matrix_multiply %argA, %argB { semiring = "plus_times" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>) to tensor<?x?xf64, #CSR64>
```

The supported options for the `semiring` attribute are "plus_pair", "plus_plus", and "plus_times".

`graphblas.matrix_multiply` can also take an optional mask operand (a CSR matrix) as shown in this example:

```
%answer = graphblas.matrix_multiply %argA, %argB, %mask { semiring = "plus_times" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>, tensor<?x?xf64, #CSR64>) to tensor<?x?xf64, #CSR64>
```

The mask operand must have the same shape as the output matrix. The mask operand acts as a boolean mask (though it doesn't necessarily have to have a boolean element type) for the result, which increases performance since the mask indicates which values in the output do not have to be calculated.

`graphblas.matrix_multiply` can also take an optional [region](https://mlir.llvm.org/docs/LangRef/regions) as shown in this example:

```
%cf4 = constant 4.0 : f64
%answer = graphblas.matrix_multiply %argA, %argB { semiring = "plus_times" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>) to tensor<?x?xf64, #CSR64> {
    ^bb0(%value: f64):
        %result = std.addf %value, %cf4 : f64
        graphblas.yield %result : f64
}
```

The NumPy equivalent of this code would be `answer = (argA @ argB) + 4.0`.

The region specifies element-wise post-processing done on values that survived the masking (it applies to all elements if there is no mask). We'll go into deeper detail later on how to write a region using `graphblas.yield`. Let's create some example input matrices.
indices = np.array(
    [
        [0, 3],
        [1, 3],
        [2, 0],
        [3, 0],
        [3, 1],
    ],
    dtype=np.uint64,
)
values = np.array([1, 2, 3, 4, 5], dtype=np.float64)
sizes = np.array([4, 4], dtype=np.uint64)
sparsity = np.array([False, True], dtype=np.bool8)

A = mlir_graphblas.sparse_utils.MLIRSparseTensor(indices, values, sizes, sparsity)

indices = np.array(
    [
        [0, 1],
        [0, 3],
        [1, 1],
        [1, 3],
        [2, 0],
        [2, 2],
        [3, 0],
        [3, 2],
    ],
    dtype=np.uint64,
)
values = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.float64)
sizes = np.array([4, 4], dtype=np.uint64)
sparsity = np.array([False, True], dtype=np.bool8)

B_csr = mlir_graphblas.sparse_utils.MLIRSparseTensor(indices, values, sizes, sparsity)
B = engine.convert_csr_to_csc(B_csr)

indices = np.array(
    [
        [0, 1],
        [0, 2],
        [1, 1],
        [1, 2],
        [2, 1],
        [2, 2],
        [3, 1],
        [3, 2],
    ],
    dtype=np.uint64,
)
values = np.array([1, 1, 1, 1, 1, 1, 1, 1], dtype=np.float64)
sizes = np.array([4, 4], dtype=np.uint64)
sparsity = np.array([False, True], dtype=np.bool8)

mask = mlir_graphblas.sparse_utils.MLIRSparseTensor(indices, values, sizes, sparsity)

A_dense = engine.csr_densify4x4(A)
A_dense
B_dense = engine.csc_densify4x4(B)
B_dense
mask_dense = engine.csr_densify4x4(mask)
mask_dense
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
**graphblas.matrix_multiply (Plus-Times Semiring)** Here, we'll simply perform a conventional matrix multiply by using `graphblas.matrix_multiply` with the plus-times semiring.
mlir_text = """ #CSR64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (i,j)>, pointerBitWidth = 64, indexBitWidth = 64 }> #CSC64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (j,i)>, pointerBitWidth = 64, indexBitWidth = 64 }> module { func @matrix_multiply_plus_times(%a: tensor<?x?xf64, #CSR64>, %b: tensor<?x?xf64, #CSC64>) -> tensor<?x?xf64, #CSR64> { %answer = graphblas.matrix_multiply %a, %b { semiring = "plus_times" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>) to tensor<?x?xf64, #CSR64> return %answer : tensor<?x?xf64, #CSR64> } } """ engine.add(mlir_text, passes) sparse_matmul_result = engine.matrix_multiply_plus_times(A, B) engine.csr_densify4x4(sparse_matmul_result)
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
The result looks sane. Let's verify that it has the same behavior as NumPy.
np.all(A_dense @ B_dense == engine.csr_densify4x4(sparse_matmul_result))
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
**graphblas.matrix_multiply (Plus-Plus Semiring with Mask)** Here, we'll perform a matrix multiply with the plus-plus semiring. We'll show the result with and without a mask to demonstrate how the masking works.
mlir_text = """ #CSR64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (i,j)>, pointerBitWidth = 64, indexBitWidth = 64 }> #CSC64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (j,i)>, pointerBitWidth = 64, indexBitWidth = 64 }> module { func @matrix_multiply_plus_plus_no_mask(%a: tensor<?x?xf64, #CSR64>, %b: tensor<?x?xf64, #CSC64>) -> tensor<?x?xf64, #CSR64> { %answer = graphblas.matrix_multiply %a, %b { semiring = "plus_plus" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>) to tensor<?x?xf64, #CSR64> return %answer : tensor<?x?xf64, #CSR64> } func @matrix_multiply_plus_plus(%a: tensor<?x?xf64, #CSR64>, %b: tensor<?x?xf64, #CSC64>, %m: tensor<?x?xf64, #CSR64>) -> tensor<?x?xf64, #CSR64> { %answer = graphblas.matrix_multiply %a, %b, %m { semiring = "plus_plus" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>, tensor<?x?xf64, #CSR64>) to tensor<?x?xf64, #CSR64> return %answer : tensor<?x?xf64, #CSR64> } } """ engine.add(mlir_text, passes) no_mask_result = engine.matrix_multiply_plus_plus_no_mask(A, B) with_mask_result = engine.matrix_multiply_plus_plus(A, B, mask) engine.csr_densify4x4(no_mask_result) engine.csr_densify4x4(with_mask_result)
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
Note how the masked output only has elements present in the positions where the mask had elements present. Since we can't verify the results via NumPy (it doesn't support semirings in its matrix multiply implementation), we'll leave verifying the results as an exercise for the reader. Note that if we're applying the element-wise operation to the values at two positions (one from each sparse tensor) and one position has a value but the other does not, then the element-wise operation for these two positions contributes no value to be aggregated.

**graphblas.matrix_multiply (Plus-Pair Semiring with Region)**

Here, we'll perform a matrix multiply with the plus-pair semiring. We'll show the result both without and with a region. The element-wise operation of the plus-pair semiring is defined as `pair(x, y) = 1`.
mlir_text = """ #CSR64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (i,j)>, pointerBitWidth = 64, indexBitWidth = 64 }> #CSC64 = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ], dimOrdering = affine_map<(i,j) -> (j,i)>, pointerBitWidth = 64, indexBitWidth = 64 }> module { func @matrix_multiply_plus_pair_no_region(%a: tensor<?x?xf64, #CSR64>, %b: tensor<?x?xf64, #CSC64>) -> tensor<?x?xf64, #CSR64> { %answer = graphblas.matrix_multiply %a, %b { semiring = "plus_pair" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>) to tensor<?x?xf64, #CSR64> return %answer : tensor<?x?xf64, #CSR64> } func @matrix_multiply_plus_pair_and_square(%a: tensor<?x?xf64, #CSR64>, %b: tensor<?x?xf64, #CSC64>) -> tensor<?x?xf64, #CSR64> { %answer = graphblas.matrix_multiply %a, %b { semiring = "plus_pair" } : (tensor<?x?xf64, #CSR64>, tensor<?x?xf64, #CSC64>) to tensor<?x?xf64, #CSR64> { ^bb0(%value: f64): %result = std.mulf %value, %value: f64 graphblas.yield %result : f64 } return %answer : tensor<?x?xf64, #CSR64> } } """ engine.add(mlir_text, passes)
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
The code in the region of `matrix_multiply_plus_pair_and_square` simply squares each individual element's value. `graphblas.yield` is used here to indicate the result of each element-wise squaring. Let's first get our results without the region. `matrix_multiply_plus_pair_no_region` simply does a matrix multiply with the plus-pair semiring.
no_region_result = engine.matrix_multiply_plus_pair_no_region(A, B)
engine.csr_densify4x4(no_region_result)
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
Let's now get the results from `matrix_multiply_plus_pair_and_square`.
with_region_result = engine.matrix_multiply_plus_pair_and_square(A, B)
engine.csr_densify4x4(with_region_result)
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
Let's verify that our results are sane.
np.all(engine.csr_densify4x4(with_region_result) == engine.csr_densify4x4(no_region_result)**2)
_____no_output_____
Apache-2.0
docs/dialect/graphblas_dialect_tutorials/graphblas_lower/graphblas_matrix_multiply.ipynb
chelini/mlir-graphblas-1
**Classify 10 Different Objects with a Convolutional Neural Network**

Dataset

The dataset we will use is built into TensorFlow and called the [**CIFAR Image Dataset**](https://www.cs.toronto.edu/~kriz/cifar.html). It contains 60,000 32x32 color images with 6,000 images of each class. The labels in this dataset are the following:
- Airplane
- Automobile
- Bird
- Cat
- Deer
- Dog
- Frog
- Horse
- Ship
- Truck

*This tutorial is based on the guide from the TensorFlow documentation: https://www.tensorflow.org/tutorials/images/cnn*

Load Python libraries
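The code cell for this step is not included in this record; a minimal sketch of the imports it likely uses, assuming the standard TensorFlow/Keras and matplotlib setup from the tutorial this notebook is based on:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt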
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Load the image data and split into "train" and "test" data
Normalize the pixel values to be between 0 and 1
Define class names
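The code cell is missing here; a sketch of what it likely does, following the referenced TensorFlow CNN tutorial (the CIFAR-10 download in the output below confirms the `load_data` call; the variable names `train_images`/`test_images` are assumptions):

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']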
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 6s 0us/step 170508288/170498071 [==============================] - 6s 0us/step
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Show an example image
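A possible sketch for displaying one training image with its label, using the variable names assumed above (the index is hypothetical):

IMG_INDEX = 1  # hypothetical index; any index into the training set works
plt.imshow(train_images[IMG_INDEX])
plt.xlabel(class_names[train_labels[IMG_INDEX][0]])
plt.show()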
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**CNN Architecture**

A common architecture for a CNN is a stack of Conv2D and MaxPooling2D layers followed by a few densely connected layers. The stack of convolutional and max pooling layers extracts the features from the image. These features are then flattened and fed to densely connected layers that determine the class of an image based on the presence of features.

Add Convolutional Layers

**Layer 1**

The input shape of our data will be 32, 32, 3 and we will process 32 filters of size 3x3 over our input data. We will also apply the activation function relu to the output of each convolution operation.

**Layer 2**

This layer will perform the max pooling operation using 2x2 samples and a stride of 2.

**Other Layers**

The next set of layers do very similar things but take as input the feature map from the previous layer. They also increase the number of filters from 32 to 64. We can do this as our data shrinks in spatial dimensions as it passes through the layers, meaning we can afford (computationally) to add more depth.
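A minimal sketch of the convolutional base described above; the layer shapes and parameter counts (896, 18496, 36928) match the model summary shown below, but the variable name `model` is an assumption:

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))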
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Show model summary. Total trainable parameters: 56,320. The depth of the feature map increases but the spatial dimensions reduce.
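The summary below was presumably produced by a call like:

model.summary()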
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 30, 30, 32) 896 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 13, 13, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 4, 4, 64) 36928 ================================================================= Total params: 56,320 Trainable params: 56,320 Non-trainable params: 0 _________________________________________________________________
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Feature Maps**

The term *feature map* refers to a 3D tensor with two spatial axes (width and height) and one depth axis. Our convolutional layers take feature maps as their input and return a new feature map that represents the presence of specific filters from the previous feature map. These are what we call *response maps*.

**Add Dense Layers**

So far, we have just completed the **convolutional base**. Now we need to take these extracted features and add a way to classify them.

Add a fully connected layer with 64 output nodes
Add a fully connected layer with 10 (final) output nodes
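A sketch of the classifier head described above, continuing the assumed `model` object; a Flatten layer is implied by the jump in parameter count from 56,320 to 122,570 reported in the next summary:

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))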
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Show model summary. Total trainable parameters: 122,570.
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Train the Model**

Compile and train the model using the recommended hyperparameters from TensorFlow.
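A sketch of the compile-and-fit step, assuming the standard settings from the TensorFlow CNN tutorial; the 20 epochs and the use of the test set as validation data match the training log below, while the exact call is an assumption:

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=20,
                    validation_data=(test_images, test_labels))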
Epoch 1/20 1563/1563 [==============================] - 73s 46ms/step - loss: 1.5072 - accuracy: 0.4521 - val_loss: 1.2174 - val_accuracy: 0.5625 Epoch 2/20 1563/1563 [==============================] - 73s 47ms/step - loss: 1.1273 - accuracy: 0.6023 - val_loss: 1.0827 - val_accuracy: 0.6188 Epoch 3/20 1563/1563 [==============================] - 73s 47ms/step - loss: 0.9765 - accuracy: 0.6570 - val_loss: 1.0449 - val_accuracy: 0.6384 Epoch 4/20 1563/1563 [==============================] - 72s 46ms/step - loss: 0.8798 - accuracy: 0.6915 - val_loss: 0.8968 - val_accuracy: 0.6851 Epoch 5/20 1563/1563 [==============================] - 72s 46ms/step - loss: 0.8064 - accuracy: 0.7164 - val_loss: 0.8853 - val_accuracy: 0.6925 Epoch 6/20 1563/1563 [==============================] - 72s 46ms/step - loss: 0.7510 - accuracy: 0.7363 - val_loss: 0.8763 - val_accuracy: 0.6962 Epoch 7/20 1563/1563 [==============================] - 71s 46ms/step - loss: 0.6987 - accuracy: 0.7545 - val_loss: 0.8531 - val_accuracy: 0.7117 Epoch 8/20 1563/1563 [==============================] - 71s 46ms/step - loss: 0.6545 - accuracy: 0.7691 - val_loss: 0.8763 - val_accuracy: 0.7056 Epoch 9/20 1563/1563 [==============================] - 71s 45ms/step - loss: 0.6118 - accuracy: 0.7854 - val_loss: 0.8917 - val_accuracy: 0.6949 Epoch 10/20 1563/1563 [==============================] - 71s 46ms/step - loss: 0.5772 - accuracy: 0.7950 - val_loss: 0.8671 - val_accuracy: 0.7168 Epoch 11/20 1563/1563 [==============================] - 71s 46ms/step - loss: 0.5333 - accuracy: 0.8107 - val_loss: 0.9228 - val_accuracy: 0.7059 Epoch 12/20 1563/1563 [==============================] - 71s 46ms/step - loss: 0.5090 - accuracy: 0.8201 - val_loss: 0.9081 - val_accuracy: 0.7130 Epoch 13/20 1563/1563 [==============================] - 71s 45ms/step - loss: 0.4747 - accuracy: 0.8336 - val_loss: 0.9726 - val_accuracy: 0.7093 Epoch 14/20 1563/1563 [==============================] - 72s 46ms/step - loss: 0.4438 - accuracy: 0.8427 - val_loss: 0.9797 - val_accuracy: 0.7149 Epoch 15/20 1563/1563 [==============================] - 73s 47ms/step - loss: 0.4202 - accuracy: 0.8507 - val_loss: 1.0394 - val_accuracy: 0.7029 Epoch 16/20 1563/1563 [==============================] - 74s 47ms/step - loss: 0.3906 - accuracy: 0.8599 - val_loss: 1.0597 - val_accuracy: 0.7083 Epoch 17/20 1563/1563 [==============================] - 73s 47ms/step - loss: 0.3653 - accuracy: 0.8686 - val_loss: 1.0889 - val_accuracy: 0.7080 Epoch 18/20 1563/1563 [==============================] - 73s 47ms/step - loss: 0.3456 - accuracy: 0.8767 - val_loss: 1.1832 - val_accuracy: 0.6971 Epoch 19/20 1563/1563 [==============================] - 73s 47ms/step - loss: 0.3234 - accuracy: 0.8840 - val_loss: 1.2505 - val_accuracy: 0.6977 Epoch 20/20 1563/1563 [==============================] - 73s 47ms/step - loss: 0.3007 - accuracy: 0.8932 - val_loss: 1.2848 - val_accuracy: 0.6964
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Evaluate the Model**

We evaluate how well the model performs by looking at its performance on the test data set. You should get an accuracy of about 70%. This isn't bad for a simple model like this, but later we'll dive into some better approaches for computer vision.
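A sketch of the evaluation call; the single-line `verbose=2` log format and the printed accuracy in the output below are consistent with this, though the exact cell is an assumption:

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)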
313/313 - 3s - loss: 1.2848 - accuracy: 0.6964
0.696399986743927
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Working with Small Datasets**

When you don't have millions of images, it is difficult to train a CNN from scratch that performs very well. This is why we will learn about a few techniques we can use to train CNNs on small datasets of just a few thousand images.

**Data Augmentation**

To avoid overfitting and create a larger dataset from a smaller one, we can use a technique called data augmentation. This means performing random transformations on our images so that our model can generalize better. These transformations can be things like compressions, rotations, stretches, and even color changes. Fortunately, Keras can help us do this. Look at the code below for an example of data augmentation.
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator

# creates a data generator object that transforms images
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# pick an image to transform
test_img = train_images[20]
img = image.img_to_array(test_img)  # convert image to numpy array
img = img.reshape((1,) + img.shape)  # reshape image

i = 0

# this loop runs forever until we break, saving images to the current directory with the specified prefix
for batch in datagen.flow(img, save_prefix='test', save_format='jpeg'):
    plt.figure(i)
    plot = plt.imshow(image.img_to_array(batch[0]))
    i += 1
    if i > 4:  # show 4 images
        break

plt.show()
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Use a Pretrained Model**

In this section we will combine the techniques we learned above and use a pretrained model and fine tuning to classify images of dogs and cats using a small dataset.

**Pretrained Models**

In this section we will use a pretrained CNN as part of our own custom network to improve the accuracy of our model. We know that CNNs alone (with no dense layers) don't do anything other than map the presence of features from our input. This means we can use a pretrained CNN, one trained on millions of images, as the start of our model. This will give us a very good convolutional base before we add our own densely connected classifier at the end. In fact, by using this technique we can train a very good classifier on a relatively small dataset (< 10,000 images). This is because the ConvNet already has a very good idea of what features to look for in an image and can find them very effectively. So, if we can determine the presence of features, all the rest of the model needs to do is determine which combination of features makes a specific image.

**Fine Tuning**

When we employ the technique described above, we will often want to tweak the final layers of our convolutional base to work better for our specific problem. This involves not touching or retraining the earlier layers of the convolutional base and only adjusting the final few. We do this because the first layers in our base are very good at extracting low-level features like lines and edges, things that are similar for any kind of image, whereas the later layers are better at picking up very specific features like shapes or even eyes. If we adjust only the final layers, then we can look for just the features relevant to our very specific problem.

*This tutorial is based on the following guide from the TensorFlow documentation: https://www.tensorflow.org/tutorials/images/transfer_learning*
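The freeze-then-fine-tune idea above can be sketched as follows. This is only an illustration of the pattern, not the notebook's own code; it assumes a `base_model` like the MobileNet V2 base loaded later in this section, and the cutoff layer index is hypothetical:

# feature extraction: freeze the whole pretrained convolutional base
base_model.trainable = False

# fine tuning: unfreeze the base but keep all layers before a cutoff frozen
base_model.trainable = True
fine_tune_at = 100  # hypothetical cutoff layer index
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False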
# Imports
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

keras = tf.keras
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Load the Dataset**

We will load the *cats_vs_dogs* dataset from the tensorflow_datasets module. This dataset contains (image, label) pairs where the images have different dimensions and 3 color channels.
import tensorflow_datasets as tfds

tfds.disable_progress_bar()

# split the data manually into 80% training, 10% testing, 10% validation
(raw_train, raw_validation, raw_test), metadata = tfds.load(
    'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True,
    as_supervised=True,
)
Downloading and preparing dataset cats_vs_dogs/4.0.0 (download: 786.68 MiB, generated: Unknown size, total: 786.68 MiB) to /root/tensorflow_datasets/cats_vs_dogs/4.0.0...
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Display images from the dataset
get_label_name = metadata.features['label'].int2str  # creates a function object that we can use to get labels

for image, label in raw_train.take(5):
    plt.figure()
    plt.imshow(image)
    plt.title(get_label_name(label))
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Data Preprocessing**

Since the sizes of our images are all different, we need to convert them all to the same size. We can create a function that will do that for us below.
IMG_SIZE = 160  # All images will be resized to 160x160

def format_example(image, label):
    """
    returns an image that is reshaped to IMG_SIZE
    """
    image = tf.cast(image, tf.float32)
    image = (image / 127.5) - 1
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Now we can apply this function to all our images using ```.map()```.
train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Let's have a look at our images now.
for image, label in train.take(2):
    plt.figure()
    plt.imshow(image)
    plt.title(get_label_name(label))
WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Finally we will shuffle and batch the images.
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000

train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Now if we look at the shape of an original image vs the new image we will see it has been changed.
for img, label in raw_train.take(2):
    print("Original shape:", img.shape)

for img, label in train.take(2):
    print("New shape:", img.shape)
Original shape: (262, 350, 3)
Original shape: (409, 336, 3)
New shape: (160, 160, 3)
New shape: (160, 160, 3)
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
**Pick a Pretrained Model**

The model we are going to use as the convolutional base for our model is **MobileNet V2**, developed at Google. This model is trained on 1.4 million images and has 1000 different classes. We want to use only its convolutional base, so when we load the model, we'll specify that we don't want to load the top (classification) layer. We'll tell the model what input shape to expect and to use the pretrained weights from *imagenet* (Google's dataset).
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)

# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.summary()
Model: "mobilenetv2_1.00_160" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 160, 160, 3) 0 __________________________________________________________________________________________________ Conv1 (Conv2D) (None, 80, 80, 32) 864 input_1[0][0] __________________________________________________________________________________________________ bn_Conv1 (BatchNormalization) (None, 80, 80, 32) 128 Conv1[0][0] __________________________________________________________________________________________________ Conv1_relu (ReLU) (None, 80, 80, 32) 0 bn_Conv1[0][0] __________________________________________________________________________________________________ expanded_conv_depthwise (Depthw (None, 80, 80, 32) 288 Conv1_relu[0][0] __________________________________________________________________________________________________ expanded_conv_depthwise_BN (Bat (None, 80, 80, 32) 128 expanded_conv_depthwise[0][0] __________________________________________________________________________________________________ expanded_conv_depthwise_relu (R (None, 80, 80, 32) 0 expanded_conv_depthwise_BN[0][0] __________________________________________________________________________________________________ expanded_conv_project (Conv2D) (None, 80, 80, 16) 512 expanded_conv_depthwise_relu[0][0 __________________________________________________________________________________________________ expanded_conv_project_BN (Batch (None, 80, 80, 16) 64 expanded_conv_project[0][0] __________________________________________________________________________________________________ block_1_expand (Conv2D) (None, 80, 80, 96) 1536 expanded_conv_project_BN[0][0] __________________________________________________________________________________________________ block_1_expand_BN (BatchNormali (None, 80, 80, 96) 384 block_1_expand[0][0] __________________________________________________________________________________________________ block_1_expand_relu (ReLU) (None, 80, 80, 96) 0 block_1_expand_BN[0][0] __________________________________________________________________________________________________ block_1_pad (ZeroPadding2D) (None, 81, 81, 96) 0 block_1_expand_relu[0][0] __________________________________________________________________________________________________ block_1_depthwise (DepthwiseCon (None, 40, 40, 96) 864 block_1_pad[0][0] __________________________________________________________________________________________________ block_1_depthwise_BN (BatchNorm (None, 40, 40, 96) 384 block_1_depthwise[0][0] __________________________________________________________________________________________________ block_1_depthwise_relu (ReLU) (None, 40, 40, 96) 0 block_1_depthwise_BN[0][0] __________________________________________________________________________________________________ block_1_project (Conv2D) (None, 40, 40, 24) 2304 block_1_depthwise_relu[0][0] __________________________________________________________________________________________________ block_1_project_BN (BatchNormal (None, 40, 40, 24) 96 block_1_project[0][0] __________________________________________________________________________________________________ block_2_expand (Conv2D) (None, 40, 40, 144) 3456 block_1_project_BN[0][0] __________________________________________________________________________________________________ 
block_2_expand_BN (BatchNormali (None, 40, 40, 144) 576 block_2_expand[0][0] __________________________________________________________________________________________________ block_2_expand_relu (ReLU) (None, 40, 40, 144) 0 block_2_expand_BN[0][0] __________________________________________________________________________________________________ block_2_depthwise (DepthwiseCon (None, 40, 40, 144) 1296 block_2_expand_relu[0][0] __________________________________________________________________________________________________ block_2_depthwise_BN (BatchNorm (None, 40, 40, 144) 576 block_2_depthwise[0][0] __________________________________________________________________________________________________ block_2_depthwise_relu (ReLU) (None, 40, 40, 144) 0 block_2_depthwise_BN[0][0] __________________________________________________________________________________________________ block_2_project (Conv2D) (None, 40, 40, 24) 3456 block_2_depthwise_relu[0][0] __________________________________________________________________________________________________ block_2_project_BN (BatchNormal (None, 40, 40, 24) 96 block_2_project[0][0] __________________________________________________________________________________________________ block_2_add (Add) (None, 40, 40, 24) 0 block_1_project_BN[0][0] block_2_project_BN[0][0] __________________________________________________________________________________________________ block_3_expand (Conv2D) (None, 40, 40, 144) 3456 block_2_add[0][0] __________________________________________________________________________________________________ block_3_expand_BN (BatchNormali (None, 40, 40, 144) 576 block_3_expand[0][0] __________________________________________________________________________________________________ block_3_expand_relu (ReLU) (None, 40, 40, 144) 0 block_3_expand_BN[0][0] __________________________________________________________________________________________________ block_3_pad (ZeroPadding2D) (None, 41, 41, 144) 0 block_3_expand_relu[0][0] __________________________________________________________________________________________________ block_3_depthwise (DepthwiseCon (None, 20, 20, 144) 1296 block_3_pad[0][0] __________________________________________________________________________________________________ block_3_depthwise_BN (BatchNorm (None, 20, 20, 144) 576 block_3_depthwise[0][0] __________________________________________________________________________________________________ block_3_depthwise_relu (ReLU) (None, 20, 20, 144) 0 block_3_depthwise_BN[0][0] __________________________________________________________________________________________________ block_3_project (Conv2D) (None, 20, 20, 32) 4608 block_3_depthwise_relu[0][0] __________________________________________________________________________________________________ block_3_project_BN (BatchNormal (None, 20, 20, 32) 128 block_3_project[0][0] __________________________________________________________________________________________________ block_4_expand (Conv2D) (None, 20, 20, 192) 6144 block_3_project_BN[0][0] __________________________________________________________________________________________________ block_4_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_4_expand[0][0] __________________________________________________________________________________________________ block_4_expand_relu (ReLU) (None, 20, 20, 192) 0 block_4_expand_BN[0][0] __________________________________________________________________________________________________ block_4_depthwise (DepthwiseCon 
(None, 20, 20, 192) 1728 block_4_expand_relu[0][0] __________________________________________________________________________________________________ block_4_depthwise_BN (BatchNorm (None, 20, 20, 192) 768 block_4_depthwise[0][0] __________________________________________________________________________________________________ block_4_depthwise_relu (ReLU) (None, 20, 20, 192) 0 block_4_depthwise_BN[0][0] __________________________________________________________________________________________________ block_4_project (Conv2D) (None, 20, 20, 32) 6144 block_4_depthwise_relu[0][0] __________________________________________________________________________________________________ block_4_project_BN (BatchNormal (None, 20, 20, 32) 128 block_4_project[0][0] __________________________________________________________________________________________________ block_4_add (Add) (None, 20, 20, 32) 0 block_3_project_BN[0][0] block_4_project_BN[0][0] __________________________________________________________________________________________________ block_5_expand (Conv2D) (None, 20, 20, 192) 6144 block_4_add[0][0] __________________________________________________________________________________________________ block_5_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_5_expand[0][0] __________________________________________________________________________________________________ block_5_expand_relu (ReLU) (None, 20, 20, 192) 0 block_5_expand_BN[0][0] __________________________________________________________________________________________________ block_5_depthwise (DepthwiseCon (None, 20, 20, 192) 1728 block_5_expand_relu[0][0] __________________________________________________________________________________________________ block_5_depthwise_BN (BatchNorm (None, 20, 20, 192) 768 block_5_depthwise[0][0] __________________________________________________________________________________________________ block_5_depthwise_relu (ReLU) (None, 20, 20, 192) 0 block_5_depthwise_BN[0][0] __________________________________________________________________________________________________ block_5_project (Conv2D) (None, 20, 20, 32) 6144 block_5_depthwise_relu[0][0] __________________________________________________________________________________________________ block_5_project_BN (BatchNormal (None, 20, 20, 32) 128 block_5_project[0][0] __________________________________________________________________________________________________ block_5_add (Add) (None, 20, 20, 32) 0 block_4_add[0][0] block_5_project_BN[0][0] __________________________________________________________________________________________________ block_6_expand (Conv2D) (None, 20, 20, 192) 6144 block_5_add[0][0] __________________________________________________________________________________________________ block_6_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_6_expand[0][0] __________________________________________________________________________________________________ block_6_expand_relu (ReLU) (None, 20, 20, 192) 0 block_6_expand_BN[0][0] __________________________________________________________________________________________________ block_6_pad (ZeroPadding2D) (None, 21, 21, 192) 0 block_6_expand_relu[0][0] __________________________________________________________________________________________________ block_6_depthwise (DepthwiseCon (None, 10, 10, 192) 1728 block_6_pad[0][0] __________________________________________________________________________________________________ block_6_depthwise_BN (BatchNorm (None, 10, 10, 192) 768 
block_6_depthwise[0][0] __________________________________________________________________________________________________ block_6_depthwise_relu (ReLU) (None, 10, 10, 192) 0 block_6_depthwise_BN[0][0] __________________________________________________________________________________________________ block_6_project (Conv2D) (None, 10, 10, 64) 12288 block_6_depthwise_relu[0][0] __________________________________________________________________________________________________ block_6_project_BN (BatchNormal (None, 10, 10, 64) 256 block_6_project[0][0] __________________________________________________________________________________________________ block_7_expand (Conv2D) (None, 10, 10, 384) 24576 block_6_project_BN[0][0] __________________________________________________________________________________________________ block_7_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_7_expand[0][0] __________________________________________________________________________________________________ block_7_expand_relu (ReLU) (None, 10, 10, 384) 0 block_7_expand_BN[0][0] __________________________________________________________________________________________________ block_7_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_7_expand_relu[0][0] __________________________________________________________________________________________________ block_7_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_7_depthwise[0][0] __________________________________________________________________________________________________ block_7_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_7_depthwise_BN[0][0] __________________________________________________________________________________________________ block_7_project (Conv2D) (None, 10, 10, 64) 24576 block_7_depthwise_relu[0][0] __________________________________________________________________________________________________ block_7_project_BN (BatchNormal (None, 10, 10, 64) 256 block_7_project[0][0] __________________________________________________________________________________________________ block_7_add (Add) (None, 10, 10, 64) 0 block_6_project_BN[0][0] block_7_project_BN[0][0] __________________________________________________________________________________________________ block_8_expand (Conv2D) (None, 10, 10, 384) 24576 block_7_add[0][0] __________________________________________________________________________________________________ block_8_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_8_expand[0][0] __________________________________________________________________________________________________ block_8_expand_relu (ReLU) (None, 10, 10, 384) 0 block_8_expand_BN[0][0] __________________________________________________________________________________________________ block_8_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_8_expand_relu[0][0] __________________________________________________________________________________________________ block_8_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_8_depthwise[0][0] __________________________________________________________________________________________________ block_8_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_8_depthwise_BN[0][0] __________________________________________________________________________________________________ block_8_project (Conv2D) (None, 10, 10, 64) 24576 block_8_depthwise_relu[0][0] __________________________________________________________________________________________________ block_8_project_BN (BatchNormal (None, 10, 10, 64) 256 
block_8_project[0][0] __________________________________________________________________________________________________ block_8_add (Add) (None, 10, 10, 64) 0 block_7_add[0][0] block_8_project_BN[0][0] __________________________________________________________________________________________________ block_9_expand (Conv2D) (None, 10, 10, 384) 24576 block_8_add[0][0] __________________________________________________________________________________________________ block_9_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_9_expand[0][0] __________________________________________________________________________________________________ block_9_expand_relu (ReLU) (None, 10, 10, 384) 0 block_9_expand_BN[0][0] __________________________________________________________________________________________________ block_9_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_9_expand_relu[0][0] __________________________________________________________________________________________________ block_9_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_9_depthwise[0][0] __________________________________________________________________________________________________ block_9_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_9_depthwise_BN[0][0] __________________________________________________________________________________________________ block_9_project (Conv2D) (None, 10, 10, 64) 24576 block_9_depthwise_relu[0][0] __________________________________________________________________________________________________ block_9_project_BN (BatchNormal (None, 10, 10, 64) 256 block_9_project[0][0] __________________________________________________________________________________________________ block_9_add (Add) (None, 10, 10, 64) 0 block_8_add[0][0] block_9_project_BN[0][0] __________________________________________________________________________________________________ block_10_expand (Conv2D) (None, 10, 10, 384) 24576 block_9_add[0][0] __________________________________________________________________________________________________ block_10_expand_BN (BatchNormal (None, 10, 10, 384) 1536 block_10_expand[0][0] __________________________________________________________________________________________________ block_10_expand_relu (ReLU) (None, 10, 10, 384) 0 block_10_expand_BN[0][0] __________________________________________________________________________________________________ block_10_depthwise (DepthwiseCo (None, 10, 10, 384) 3456 block_10_expand_relu[0][0] __________________________________________________________________________________________________ block_10_depthwise_BN (BatchNor (None, 10, 10, 384) 1536 block_10_depthwise[0][0] __________________________________________________________________________________________________ block_10_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_10_depthwise_BN[0][0] __________________________________________________________________________________________________ block_10_project (Conv2D) (None, 10, 10, 96) 36864 block_10_depthwise_relu[0][0] __________________________________________________________________________________________________ block_10_project_BN (BatchNorma (None, 10, 10, 96) 384 block_10_project[0][0] __________________________________________________________________________________________________ block_11_expand (Conv2D) (None, 10, 10, 576) 55296 block_10_project_BN[0][0] __________________________________________________________________________________________________ block_11_expand_BN (BatchNormal (None, 10, 10, 576) 2304 
block_11_expand[0][0] __________________________________________________________________________________________________ block_11_expand_relu (ReLU) (None, 10, 10, 576) 0 block_11_expand_BN[0][0] __________________________________________________________________________________________________ block_11_depthwise (DepthwiseCo (None, 10, 10, 576) 5184 block_11_expand_relu[0][0] __________________________________________________________________________________________________ block_11_depthwise_BN (BatchNor (None, 10, 10, 576) 2304 block_11_depthwise[0][0] __________________________________________________________________________________________________ block_11_depthwise_relu (ReLU) (None, 10, 10, 576) 0 block_11_depthwise_BN[0][0] __________________________________________________________________________________________________ block_11_project (Conv2D) (None, 10, 10, 96) 55296 block_11_depthwise_relu[0][0] __________________________________________________________________________________________________ block_11_project_BN (BatchNorma (None, 10, 10, 96) 384 block_11_project[0][0] __________________________________________________________________________________________________ block_11_add (Add) (None, 10, 10, 96) 0 block_10_project_BN[0][0] block_11_project_BN[0][0] __________________________________________________________________________________________________ block_12_expand (Conv2D) (None, 10, 10, 576) 55296 block_11_add[0][0] __________________________________________________________________________________________________ block_12_expand_BN (BatchNormal (None, 10, 10, 576) 2304 block_12_expand[0][0] __________________________________________________________________________________________________ block_12_expand_relu (ReLU) (None, 10, 10, 576) 0 block_12_expand_BN[0][0] __________________________________________________________________________________________________ block_12_depthwise (DepthwiseCo (None, 10, 10, 576) 5184 block_12_expand_relu[0][0] __________________________________________________________________________________________________ block_12_depthwise_BN (BatchNor (None, 10, 10, 576) 2304 block_12_depthwise[0][0] __________________________________________________________________________________________________ block_12_depthwise_relu (ReLU) (None, 10, 10, 576) 0 block_12_depthwise_BN[0][0] __________________________________________________________________________________________________ block_12_project (Conv2D) (None, 10, 10, 96) 55296 block_12_depthwise_relu[0][0] __________________________________________________________________________________________________ block_12_project_BN (BatchNorma (None, 10, 10, 96) 384 block_12_project[0][0] __________________________________________________________________________________________________ block_12_add (Add) (None, 10, 10, 96) 0 block_11_add[0][0] block_12_project_BN[0][0] __________________________________________________________________________________________________ block_13_expand (Conv2D) (None, 10, 10, 576) 55296 block_12_add[0][0] __________________________________________________________________________________________________ block_13_expand_BN (BatchNormal (None, 10, 10, 576) 2304 block_13_expand[0][0] __________________________________________________________________________________________________ block_13_expand_relu (ReLU) (None, 10, 10, 576) 0 block_13_expand_BN[0][0] __________________________________________________________________________________________________ block_13_pad (ZeroPadding2D) (None, 11, 
11, 576) 0 block_13_expand_relu[0][0] __________________________________________________________________________________________________ block_13_depthwise (DepthwiseCo (None, 5, 5, 576) 5184 block_13_pad[0][0] __________________________________________________________________________________________________ block_13_depthwise_BN (BatchNor (None, 5, 5, 576) 2304 block_13_depthwise[0][0] __________________________________________________________________________________________________ block_13_depthwise_relu (ReLU) (None, 5, 5, 576) 0 block_13_depthwise_BN[0][0] __________________________________________________________________________________________________ block_13_project (Conv2D) (None, 5, 5, 160) 92160 block_13_depthwise_relu[0][0] __________________________________________________________________________________________________ block_13_project_BN (BatchNorma (None, 5, 5, 160) 640 block_13_project[0][0] __________________________________________________________________________________________________ block_14_expand (Conv2D) (None, 5, 5, 960) 153600 block_13_project_BN[0][0] __________________________________________________________________________________________________ block_14_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_14_expand[0][0] __________________________________________________________________________________________________ block_14_expand_relu (ReLU) (None, 5, 5, 960) 0 block_14_expand_BN[0][0] __________________________________________________________________________________________________ block_14_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_14_expand_relu[0][0] __________________________________________________________________________________________________ block_14_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_14_depthwise[0][0] __________________________________________________________________________________________________ block_14_depthwise_relu (ReLU) (None, 5, 5, 960) 0 block_14_depthwise_BN[0][0] __________________________________________________________________________________________________ block_14_project (Conv2D) (None, 5, 5, 160) 153600 block_14_depthwise_relu[0][0] __________________________________________________________________________________________________ block_14_project_BN (BatchNorma (None, 5, 5, 160) 640 block_14_project[0][0] __________________________________________________________________________________________________ block_14_add (Add) (None, 5, 5, 160) 0 block_13_project_BN[0][0] block_14_project_BN[0][0] __________________________________________________________________________________________________ block_15_expand (Conv2D) (None, 5, 5, 960) 153600 block_14_add[0][0] __________________________________________________________________________________________________ block_15_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_15_expand[0][0] __________________________________________________________________________________________________ block_15_expand_relu (ReLU) (None, 5, 5, 960) 0 block_15_expand_BN[0][0] __________________________________________________________________________________________________ block_15_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_15_expand_relu[0][0] __________________________________________________________________________________________________ block_15_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_15_depthwise[0][0] __________________________________________________________________________________________________ block_15_depthwise_relu (ReLU) (None, 5, 5, 960) 
0 block_15_depthwise_BN[0][0] __________________________________________________________________________________________________ block_15_project (Conv2D) (None, 5, 5, 160) 153600 block_15_depthwise_relu[0][0] __________________________________________________________________________________________________ block_15_project_BN (BatchNorma (None, 5, 5, 160) 640 block_15_project[0][0] __________________________________________________________________________________________________ block_15_add (Add) (None, 5, 5, 160) 0 block_14_add[0][0] block_15_project_BN[0][0] __________________________________________________________________________________________________ block_16_expand (Conv2D) (None, 5, 5, 960) 153600 block_15_add[0][0] __________________________________________________________________________________________________ block_16_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_16_expand[0][0] __________________________________________________________________________________________________ block_16_expand_relu (ReLU) (None, 5, 5, 960) 0 block_16_expand_BN[0][0] __________________________________________________________________________________________________ block_16_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_16_expand_relu[0][0] __________________________________________________________________________________________________ block_16_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_16_depthwise[0][0] __________________________________________________________________________________________________ block_16_depthwise_relu (ReLU) (None, 5, 5, 960) 0 block_16_depthwise_BN[0][0] __________________________________________________________________________________________________ block_16_project (Conv2D) (None, 5, 5, 320) 307200 block_16_depthwise_relu[0][0] __________________________________________________________________________________________________ block_16_project_BN (BatchNorma (None, 5, 5, 320) 1280 block_16_project[0][0] __________________________________________________________________________________________________ Conv_1 (Conv2D) (None, 5, 5, 1280) 409600 block_16_project_BN[0][0] __________________________________________________________________________________________________ Conv_1_bn (BatchNormalization) (None, 5, 5, 1280) 5120 Conv_1[0][0] __________________________________________________________________________________________________ out_relu (ReLU) (None, 5, 5, 1280) 0 Conv_1_bn[0][0] ================================================================================================== Total params: 2,257,984 Trainable params: 2,223,872 Non-trainable params: 34,112 __________________________________________________________________________________________________
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
At this point the *base_model* will simply output a tensor of shape (32, 5, 5, 1280), a feature extraction of the original (160, 160, 3) images. The 32 is the batch size; each image is reduced to a 5x5 spatial grid with 1280 different filter/feature channels.
for image, _ in train_batches.take(1): pass feature_batch = base_model(image) print(feature_batch.shape)
(32, 5, 5, 1280)
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Freeze the Base

The term **freezing** refers to disabling the training property of a layer. It simply means we won't make any changes to the weights of any frozen layers during training. This is important as we don't want to change the convolutional base that already has learned weights.
base_model.trainable = False base_model.summary()
Model: "mobilenetv2_1.00_160" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 160, 160, 3) 0 __________________________________________________________________________________________________ Conv1 (Conv2D) (None, 80, 80, 32) 864 input_1[0][0] __________________________________________________________________________________________________ bn_Conv1 (BatchNormalization) (None, 80, 80, 32) 128 Conv1[0][0] __________________________________________________________________________________________________ Conv1_relu (ReLU) (None, 80, 80, 32) 0 bn_Conv1[0][0] __________________________________________________________________________________________________ expanded_conv_depthwise (Depthw (None, 80, 80, 32) 288 Conv1_relu[0][0] __________________________________________________________________________________________________ expanded_conv_depthwise_BN (Bat (None, 80, 80, 32) 128 expanded_conv_depthwise[0][0] __________________________________________________________________________________________________ expanded_conv_depthwise_relu (R (None, 80, 80, 32) 0 expanded_conv_depthwise_BN[0][0] __________________________________________________________________________________________________ expanded_conv_project (Conv2D) (None, 80, 80, 16) 512 expanded_conv_depthwise_relu[0][0 __________________________________________________________________________________________________ expanded_conv_project_BN (Batch (None, 80, 80, 16) 64 expanded_conv_project[0][0] __________________________________________________________________________________________________ block_1_expand (Conv2D) (None, 80, 80, 96) 1536 expanded_conv_project_BN[0][0] __________________________________________________________________________________________________ block_1_expand_BN (BatchNormali (None, 80, 80, 96) 384 block_1_expand[0][0] __________________________________________________________________________________________________ block_1_expand_relu (ReLU) (None, 80, 80, 96) 0 block_1_expand_BN[0][0] __________________________________________________________________________________________________ block_1_pad (ZeroPadding2D) (None, 81, 81, 96) 0 block_1_expand_relu[0][0] __________________________________________________________________________________________________ block_1_depthwise (DepthwiseCon (None, 40, 40, 96) 864 block_1_pad[0][0] __________________________________________________________________________________________________ block_1_depthwise_BN (BatchNorm (None, 40, 40, 96) 384 block_1_depthwise[0][0] __________________________________________________________________________________________________ block_1_depthwise_relu (ReLU) (None, 40, 40, 96) 0 block_1_depthwise_BN[0][0] __________________________________________________________________________________________________ block_1_project (Conv2D) (None, 40, 40, 24) 2304 block_1_depthwise_relu[0][0] __________________________________________________________________________________________________ block_1_project_BN (BatchNormal (None, 40, 40, 24) 96 block_1_project[0][0] __________________________________________________________________________________________________ block_2_expand (Conv2D) (None, 40, 40, 144) 3456 block_1_project_BN[0][0] __________________________________________________________________________________________________ 
block_2_expand_BN (BatchNormali (None, 40, 40, 144) 576 block_2_expand[0][0] __________________________________________________________________________________________________ block_2_expand_relu (ReLU) (None, 40, 40, 144) 0 block_2_expand_BN[0][0] __________________________________________________________________________________________________ block_2_depthwise (DepthwiseCon (None, 40, 40, 144) 1296 block_2_expand_relu[0][0] __________________________________________________________________________________________________ block_2_depthwise_BN (BatchNorm (None, 40, 40, 144) 576 block_2_depthwise[0][0] __________________________________________________________________________________________________ block_2_depthwise_relu (ReLU) (None, 40, 40, 144) 0 block_2_depthwise_BN[0][0] __________________________________________________________________________________________________ block_2_project (Conv2D) (None, 40, 40, 24) 3456 block_2_depthwise_relu[0][0] __________________________________________________________________________________________________ block_2_project_BN (BatchNormal (None, 40, 40, 24) 96 block_2_project[0][0] __________________________________________________________________________________________________ block_2_add (Add) (None, 40, 40, 24) 0 block_1_project_BN[0][0] block_2_project_BN[0][0] __________________________________________________________________________________________________ block_3_expand (Conv2D) (None, 40, 40, 144) 3456 block_2_add[0][0] __________________________________________________________________________________________________ block_3_expand_BN (BatchNormali (None, 40, 40, 144) 576 block_3_expand[0][0] __________________________________________________________________________________________________ block_3_expand_relu (ReLU) (None, 40, 40, 144) 0 block_3_expand_BN[0][0] __________________________________________________________________________________________________ block_3_pad (ZeroPadding2D) (None, 41, 41, 144) 0 block_3_expand_relu[0][0] __________________________________________________________________________________________________ block_3_depthwise (DepthwiseCon (None, 20, 20, 144) 1296 block_3_pad[0][0] __________________________________________________________________________________________________ block_3_depthwise_BN (BatchNorm (None, 20, 20, 144) 576 block_3_depthwise[0][0] __________________________________________________________________________________________________ block_3_depthwise_relu (ReLU) (None, 20, 20, 144) 0 block_3_depthwise_BN[0][0] __________________________________________________________________________________________________ block_3_project (Conv2D) (None, 20, 20, 32) 4608 block_3_depthwise_relu[0][0] __________________________________________________________________________________________________ block_3_project_BN (BatchNormal (None, 20, 20, 32) 128 block_3_project[0][0] __________________________________________________________________________________________________ block_4_expand (Conv2D) (None, 20, 20, 192) 6144 block_3_project_BN[0][0] __________________________________________________________________________________________________ block_4_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_4_expand[0][0] __________________________________________________________________________________________________ block_4_expand_relu (ReLU) (None, 20, 20, 192) 0 block_4_expand_BN[0][0] __________________________________________________________________________________________________ block_4_depthwise (DepthwiseCon 
(None, 20, 20, 192) 1728 block_4_expand_relu[0][0] __________________________________________________________________________________________________ block_4_depthwise_BN (BatchNorm (None, 20, 20, 192) 768 block_4_depthwise[0][0] __________________________________________________________________________________________________ block_4_depthwise_relu (ReLU) (None, 20, 20, 192) 0 block_4_depthwise_BN[0][0] __________________________________________________________________________________________________ block_4_project (Conv2D) (None, 20, 20, 32) 6144 block_4_depthwise_relu[0][0] __________________________________________________________________________________________________ block_4_project_BN (BatchNormal (None, 20, 20, 32) 128 block_4_project[0][0] __________________________________________________________________________________________________ block_4_add (Add) (None, 20, 20, 32) 0 block_3_project_BN[0][0] block_4_project_BN[0][0] __________________________________________________________________________________________________ block_5_expand (Conv2D) (None, 20, 20, 192) 6144 block_4_add[0][0] __________________________________________________________________________________________________ block_5_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_5_expand[0][0] __________________________________________________________________________________________________ block_5_expand_relu (ReLU) (None, 20, 20, 192) 0 block_5_expand_BN[0][0] __________________________________________________________________________________________________ block_5_depthwise (DepthwiseCon (None, 20, 20, 192) 1728 block_5_expand_relu[0][0] __________________________________________________________________________________________________ block_5_depthwise_BN (BatchNorm (None, 20, 20, 192) 768 block_5_depthwise[0][0] __________________________________________________________________________________________________ block_5_depthwise_relu (ReLU) (None, 20, 20, 192) 0 block_5_depthwise_BN[0][0] __________________________________________________________________________________________________ block_5_project (Conv2D) (None, 20, 20, 32) 6144 block_5_depthwise_relu[0][0] __________________________________________________________________________________________________ block_5_project_BN (BatchNormal (None, 20, 20, 32) 128 block_5_project[0][0] __________________________________________________________________________________________________ block_5_add (Add) (None, 20, 20, 32) 0 block_4_add[0][0] block_5_project_BN[0][0] __________________________________________________________________________________________________ block_6_expand (Conv2D) (None, 20, 20, 192) 6144 block_5_add[0][0] __________________________________________________________________________________________________ block_6_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_6_expand[0][0] __________________________________________________________________________________________________ block_6_expand_relu (ReLU) (None, 20, 20, 192) 0 block_6_expand_BN[0][0] __________________________________________________________________________________________________ block_6_pad (ZeroPadding2D) (None, 21, 21, 192) 0 block_6_expand_relu[0][0] __________________________________________________________________________________________________ block_6_depthwise (DepthwiseCon (None, 10, 10, 192) 1728 block_6_pad[0][0] __________________________________________________________________________________________________ block_6_depthwise_BN (BatchNorm (None, 10, 10, 192) 768 
block_6_depthwise[0][0] __________________________________________________________________________________________________ block_6_depthwise_relu (ReLU) (None, 10, 10, 192) 0 block_6_depthwise_BN[0][0] __________________________________________________________________________________________________ block_6_project (Conv2D) (None, 10, 10, 64) 12288 block_6_depthwise_relu[0][0] __________________________________________________________________________________________________ block_6_project_BN (BatchNormal (None, 10, 10, 64) 256 block_6_project[0][0] __________________________________________________________________________________________________ block_7_expand (Conv2D) (None, 10, 10, 384) 24576 block_6_project_BN[0][0] __________________________________________________________________________________________________ block_7_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_7_expand[0][0] __________________________________________________________________________________________________ block_7_expand_relu (ReLU) (None, 10, 10, 384) 0 block_7_expand_BN[0][0] __________________________________________________________________________________________________ block_7_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_7_expand_relu[0][0] __________________________________________________________________________________________________ block_7_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_7_depthwise[0][0] __________________________________________________________________________________________________ block_7_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_7_depthwise_BN[0][0] __________________________________________________________________________________________________ block_7_project (Conv2D) (None, 10, 10, 64) 24576 block_7_depthwise_relu[0][0] __________________________________________________________________________________________________ block_7_project_BN (BatchNormal (None, 10, 10, 64) 256 block_7_project[0][0] __________________________________________________________________________________________________ block_7_add (Add) (None, 10, 10, 64) 0 block_6_project_BN[0][0] block_7_project_BN[0][0] __________________________________________________________________________________________________ block_8_expand (Conv2D) (None, 10, 10, 384) 24576 block_7_add[0][0] __________________________________________________________________________________________________ block_8_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_8_expand[0][0] __________________________________________________________________________________________________ block_8_expand_relu (ReLU) (None, 10, 10, 384) 0 block_8_expand_BN[0][0] __________________________________________________________________________________________________ block_8_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_8_expand_relu[0][0] __________________________________________________________________________________________________ block_8_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_8_depthwise[0][0] __________________________________________________________________________________________________ block_8_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_8_depthwise_BN[0][0] __________________________________________________________________________________________________ block_8_project (Conv2D) (None, 10, 10, 64) 24576 block_8_depthwise_relu[0][0] __________________________________________________________________________________________________ block_8_project_BN (BatchNormal (None, 10, 10, 64) 256 
block_8_project[0][0] __________________________________________________________________________________________________ block_8_add (Add) (None, 10, 10, 64) 0 block_7_add[0][0] block_8_project_BN[0][0] __________________________________________________________________________________________________ block_9_expand (Conv2D) (None, 10, 10, 384) 24576 block_8_add[0][0] __________________________________________________________________________________________________ block_9_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_9_expand[0][0] __________________________________________________________________________________________________ block_9_expand_relu (ReLU) (None, 10, 10, 384) 0 block_9_expand_BN[0][0] __________________________________________________________________________________________________ block_9_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_9_expand_relu[0][0] __________________________________________________________________________________________________ block_9_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_9_depthwise[0][0] __________________________________________________________________________________________________ block_9_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_9_depthwise_BN[0][0] __________________________________________________________________________________________________ block_9_project (Conv2D) (None, 10, 10, 64) 24576 block_9_depthwise_relu[0][0] __________________________________________________________________________________________________ block_9_project_BN (BatchNormal (None, 10, 10, 64) 256 block_9_project[0][0] __________________________________________________________________________________________________ block_9_add (Add) (None, 10, 10, 64) 0 block_8_add[0][0] block_9_project_BN[0][0] __________________________________________________________________________________________________ block_10_expand (Conv2D) (None, 10, 10, 384) 24576 block_9_add[0][0] __________________________________________________________________________________________________ block_10_expand_BN (BatchNormal (None, 10, 10, 384) 1536 block_10_expand[0][0] __________________________________________________________________________________________________ block_10_expand_relu (ReLU) (None, 10, 10, 384) 0 block_10_expand_BN[0][0] __________________________________________________________________________________________________ block_10_depthwise (DepthwiseCo (None, 10, 10, 384) 3456 block_10_expand_relu[0][0] __________________________________________________________________________________________________ block_10_depthwise_BN (BatchNor (None, 10, 10, 384) 1536 block_10_depthwise[0][0] __________________________________________________________________________________________________ block_10_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_10_depthwise_BN[0][0] __________________________________________________________________________________________________ block_10_project (Conv2D) (None, 10, 10, 96) 36864 block_10_depthwise_relu[0][0] __________________________________________________________________________________________________ block_10_project_BN (BatchNorma (None, 10, 10, 96) 384 block_10_project[0][0] __________________________________________________________________________________________________ block_11_expand (Conv2D) (None, 10, 10, 576) 55296 block_10_project_BN[0][0] __________________________________________________________________________________________________ block_11_expand_BN (BatchNormal (None, 10, 10, 576) 2304 
block_11_expand[0][0] __________________________________________________________________________________________________ block_11_expand_relu (ReLU) (None, 10, 10, 576) 0 block_11_expand_BN[0][0] __________________________________________________________________________________________________ block_11_depthwise (DepthwiseCo (None, 10, 10, 576) 5184 block_11_expand_relu[0][0] __________________________________________________________________________________________________ block_11_depthwise_BN (BatchNor (None, 10, 10, 576) 2304 block_11_depthwise[0][0] __________________________________________________________________________________________________ block_11_depthwise_relu (ReLU) (None, 10, 10, 576) 0 block_11_depthwise_BN[0][0] __________________________________________________________________________________________________ block_11_project (Conv2D) (None, 10, 10, 96) 55296 block_11_depthwise_relu[0][0] __________________________________________________________________________________________________ block_11_project_BN (BatchNorma (None, 10, 10, 96) 384 block_11_project[0][0] __________________________________________________________________________________________________ block_11_add (Add) (None, 10, 10, 96) 0 block_10_project_BN[0][0] block_11_project_BN[0][0] __________________________________________________________________________________________________ block_12_expand (Conv2D) (None, 10, 10, 576) 55296 block_11_add[0][0] __________________________________________________________________________________________________ block_12_expand_BN (BatchNormal (None, 10, 10, 576) 2304 block_12_expand[0][0] __________________________________________________________________________________________________ block_12_expand_relu (ReLU) (None, 10, 10, 576) 0 block_12_expand_BN[0][0] __________________________________________________________________________________________________ block_12_depthwise (DepthwiseCo (None, 10, 10, 576) 5184 block_12_expand_relu[0][0] __________________________________________________________________________________________________ block_12_depthwise_BN (BatchNor (None, 10, 10, 576) 2304 block_12_depthwise[0][0] __________________________________________________________________________________________________ block_12_depthwise_relu (ReLU) (None, 10, 10, 576) 0 block_12_depthwise_BN[0][0] __________________________________________________________________________________________________ block_12_project (Conv2D) (None, 10, 10, 96) 55296 block_12_depthwise_relu[0][0] __________________________________________________________________________________________________ block_12_project_BN (BatchNorma (None, 10, 10, 96) 384 block_12_project[0][0] __________________________________________________________________________________________________ block_12_add (Add) (None, 10, 10, 96) 0 block_11_add[0][0] block_12_project_BN[0][0] __________________________________________________________________________________________________ block_13_expand (Conv2D) (None, 10, 10, 576) 55296 block_12_add[0][0] __________________________________________________________________________________________________ block_13_expand_BN (BatchNormal (None, 10, 10, 576) 2304 block_13_expand[0][0] __________________________________________________________________________________________________ block_13_expand_relu (ReLU) (None, 10, 10, 576) 0 block_13_expand_BN[0][0] __________________________________________________________________________________________________ block_13_pad (ZeroPadding2D) (None, 11, 
11, 576) 0 block_13_expand_relu[0][0] __________________________________________________________________________________________________ block_13_depthwise (DepthwiseCo (None, 5, 5, 576) 5184 block_13_pad[0][0] __________________________________________________________________________________________________ block_13_depthwise_BN (BatchNor (None, 5, 5, 576) 2304 block_13_depthwise[0][0] __________________________________________________________________________________________________ block_13_depthwise_relu (ReLU) (None, 5, 5, 576) 0 block_13_depthwise_BN[0][0] __________________________________________________________________________________________________ block_13_project (Conv2D) (None, 5, 5, 160) 92160 block_13_depthwise_relu[0][0] __________________________________________________________________________________________________ block_13_project_BN (BatchNorma (None, 5, 5, 160) 640 block_13_project[0][0] __________________________________________________________________________________________________ block_14_expand (Conv2D) (None, 5, 5, 960) 153600 block_13_project_BN[0][0] __________________________________________________________________________________________________ block_14_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_14_expand[0][0] __________________________________________________________________________________________________ block_14_expand_relu (ReLU) (None, 5, 5, 960) 0 block_14_expand_BN[0][0] __________________________________________________________________________________________________ block_14_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_14_expand_relu[0][0] __________________________________________________________________________________________________ block_14_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_14_depthwise[0][0] __________________________________________________________________________________________________ block_14_depthwise_relu (ReLU) (None, 5, 5, 960) 0 block_14_depthwise_BN[0][0] __________________________________________________________________________________________________ block_14_project (Conv2D) (None, 5, 5, 160) 153600 block_14_depthwise_relu[0][0] __________________________________________________________________________________________________ block_14_project_BN (BatchNorma (None, 5, 5, 160) 640 block_14_project[0][0] __________________________________________________________________________________________________ block_14_add (Add) (None, 5, 5, 160) 0 block_13_project_BN[0][0] block_14_project_BN[0][0] __________________________________________________________________________________________________ block_15_expand (Conv2D) (None, 5, 5, 960) 153600 block_14_add[0][0] __________________________________________________________________________________________________ block_15_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_15_expand[0][0] __________________________________________________________________________________________________ block_15_expand_relu (ReLU) (None, 5, 5, 960) 0 block_15_expand_BN[0][0] __________________________________________________________________________________________________ block_15_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_15_expand_relu[0][0] __________________________________________________________________________________________________ block_15_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_15_depthwise[0][0] __________________________________________________________________________________________________ block_15_depthwise_relu (ReLU) (None, 5, 5, 960) 
0 block_15_depthwise_BN[0][0] __________________________________________________________________________________________________ block_15_project (Conv2D) (None, 5, 5, 160) 153600 block_15_depthwise_relu[0][0] __________________________________________________________________________________________________ block_15_project_BN (BatchNorma (None, 5, 5, 160) 640 block_15_project[0][0] __________________________________________________________________________________________________ block_15_add (Add) (None, 5, 5, 160) 0 block_14_add[0][0] block_15_project_BN[0][0] __________________________________________________________________________________________________ block_16_expand (Conv2D) (None, 5, 5, 960) 153600 block_15_add[0][0] __________________________________________________________________________________________________ block_16_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_16_expand[0][0] __________________________________________________________________________________________________ block_16_expand_relu (ReLU) (None, 5, 5, 960) 0 block_16_expand_BN[0][0] __________________________________________________________________________________________________ block_16_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_16_expand_relu[0][0] __________________________________________________________________________________________________ block_16_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_16_depthwise[0][0] __________________________________________________________________________________________________ block_16_depthwise_relu (ReLU) (None, 5, 5, 960) 0 block_16_depthwise_BN[0][0] __________________________________________________________________________________________________ block_16_project (Conv2D) (None, 5, 5, 320) 307200 block_16_depthwise_relu[0][0] __________________________________________________________________________________________________ block_16_project_BN (BatchNorma (None, 5, 5, 320) 1280 block_16_project[0][0] __________________________________________________________________________________________________ Conv_1 (Conv2D) (None, 5, 5, 1280) 409600 block_16_project_BN[0][0] __________________________________________________________________________________________________ Conv_1_bn (BatchNormalization) (None, 5, 5, 1280) 5120 Conv_1[0][0] __________________________________________________________________________________________________ out_relu (ReLU) (None, 5, 5, 1280) 0 Conv_1_bn[0][0] ================================================================================================== Total params: 2,257,984 Trainable params: 0 Non-trainable params: 2,257,984 __________________________________________________________________________________________________
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Adding our Classifier

Now that we have our base layer set up, we can add the classifier. Instead of flattening the feature map of the base layer, we will use a global average pooling layer that averages the entire 5x5 area of each 2D feature map and returns a single 1280-element vector per image (one averaged value per feature map).
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
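As a quick sanity check (a minimal sketch, assuming `feature_batch` from the earlier cell is still in scope; the variable name `feature_batch_average` is introduced here for illustration), applying the pooling layer should collapse the 5x5 spatial grid into a single 1280-element vector per image:

```python
# Pool the (32, 5, 5, 1280) feature maps down to (32, 1280) vectors.
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)  # expected: (32, 1280)
```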
Finally, we will add the prediction layer, which will be a single dense neuron. We can do this because we only have two classes to predict; the single output acts as a logit for the positive class.
prediction_layer = keras.layers.Dense(1)
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
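To see what this head produces, here is a small illustrative sketch (not from the original notebook; it assumes `feature_batch_average` from the note above and `prediction_layer` as just defined): the single neuron emits one raw logit per image, and a sigmoid maps it to a probability for the positive class.

```python
# One raw logit per image; sigmoid turns it into a probability for the positive class.
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)               # expected: (32, 1)
print(tf.nn.sigmoid(prediction_batch[:3]))  # probabilities for the first three images
```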
Now we will combine these layers together in a model.
model = tf.keras.Sequential([ base_model, global_average_layer, prediction_layer ]) model.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= mobilenetv2_1.00_160 (Functi (None, 5, 5, 1280) 2257984 _________________________________________________________________ global_average_pooling2d (Gl (None, 1280) 0 _________________________________________________________________ dense_2 (Dense) (None, 1) 1281 ================================================================= Total params: 2,259,265 Trainable params: 1,281 Non-trainable params: 2,257,984 _________________________________________________________________
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
Train the Model

We will compile and train the model, using a very small learning rate to ensure that the model does not undergo any major changes.
base_learning_rate = 0.0001 model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) # We can evaluate the model right now to see how it does before training it on our new images initial_epochs = 3 validation_steps=20 loss0,accuracy0 = model.evaluate(validation_batches, steps = validation_steps) # Now we can train it on our images history = model.fit(train_batches, epochs=initial_epochs, validation_data=validation_batches) acc = history.history['accuracy'] print(acc) model.save("dogs_vs_cats.h5") # we can save the model and reload it at anytime in the future new_model = tf.keras.models.load_model('dogs_vs_cats.h5')
_____no_output_____
MIT
CNN_Examples/ComputerVision0.ipynb
cathyXie08/Deep-Learning-Course-Examples
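If you want to visualize how the classifier head converges, a minimal sketch (assuming `history` from the training cell above; the plotting code itself is not part of the original notebook) is to plot the per-epoch metrics Keras stores in `history.history`:

```python
import matplotlib.pyplot as plt

# Keras records per-epoch metrics under these keys when validation_data is provided.
acc, val_acc = history.history['accuracy'], history.history['val_accuracy']
loss, val_loss = history.history['loss'], history.history['val_loss']

plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.plot(acc, label='train acc')
plt.plot(val_acc, label='val acc')
plt.legend(); plt.title('Accuracy')
plt.subplot(1, 2, 2)
plt.plot(loss, label='train loss')
plt.plot(val_loss, label='val loss')
plt.legend(); plt.title('Loss')
plt.show()
```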
Library Imports
from time import time notebook_start_time = time() import os import re import gc import pickle import random as r import numpy as np import pandas as pd import matplotlib.pyplot as plt import torch from torch import nn, optim from torch.utils.data import Dataset from torch.utils.data import DataLoader as DL from torch.nn.utils import weight_norm as WN from torchvision import models, transforms from time import time from sklearn.model_selection import KFold from sklearn.metrics import mean_squared_error from sklearn.preprocessing import StandardScaler import warnings warnings.filterwarnings("ignore")
_____no_output_____
MIT
PF-2/Notebooks/Train/D169 Last Block (Rand Init) (SGD0.9) (10CV).ipynb
pchandrasekaran1595/PetFinder.my---Pawpularity-Contest
Constants and Utilities
SEED = 49 DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") NUM_FEATURES = 1664 TRANSFORM = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), ]) PATH = "../input/petfinder-pawpularity-score" IMAGE_PATH = "../input/petfinder-pretrained-images-nocrop" verbose = True DEBUG = False sc_y = StandardScaler() def breaker(num=50, char="*") -> None: print("\n" + num*char + "\n") def get_targets(path: str) -> np.ndarray: df = pd.read_csv(os.path.join(path, "train.csv"), engine="python") targets = df["Pawpularity"].copy().values return targets.reshape(-1, 1) def show_graphs(L: list, title=None) -> None: TL, VL = [], [] for i in range(len(L)): TL.append(L[i]["train"]) VL.append(L[i]["valid"]) x_Axis = np.arange(1, len(L) + 1) plt.figure() plt.plot(x_Axis, TL, "r", label="train") plt.plot(x_Axis, VL, "b", label="valid") plt.grid() plt.legend() if title: plt.title("{} Loss".format(title)) else: plt.title("Loss") plt.show()
_____no_output_____
MIT
PF-2/Notebooks/Train/D169 Last Block (Rand Init) (SGD0.9) (10CV).ipynb
pchandrasekaran1595/PetFinder.my---Pawpularity-Contest
Dataset Template and Build Dataloader
class DS(Dataset): def __init__(self, images=None, targets=None, transform=None): self.images = images self.targets = targets self.transform = transform def __len__(self): return self.images.shape[0] def __getitem__(self, idx): return self.transform(self.images[idx]), torch.FloatTensor(self.targets[idx]) def build_dataloaders(tr_images: np.ndarray, va_images: np.ndarray, tr_targets: np.ndarray, va_targets: np.ndarray, batch_size: int, seed: int, transform: transforms.transforms.Compose): if verbose: breaker() print("Building Train and Validation DataLoaders ...") tr_data_setup = DS(images=tr_images, targets=tr_targets, transform=transform) va_data_setup = DS(images=va_images, targets=va_targets, transform=transform) dataloaders = { "train" : DL(tr_data_setup, batch_size=batch_size, shuffle=True, generator=torch.manual_seed(seed)), "valid" : DL(va_data_setup, batch_size=batch_size, shuffle=False) } return dataloaders
_____no_output_____
MIT
PF-2/Notebooks/Train/D169 Last Block (Rand Init) (SGD0.9) (10CV).ipynb
pchandrasekaran1595/PetFinder.my---Pawpularity-Contest
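Before loading the real 224x224 image array, the templates above can be smoke-tested with random data. This is purely illustrative and not part of the original notebook; the fake arrays below are assumptions made only to confirm batch shapes.

```python
import numpy as np

# Random uint8 images in HxWxC layout, matching what ToTensor expects.
fake_images = np.random.randint(0, 256, size=(16, 224, 224, 3), dtype=np.uint8)
fake_targets = np.random.rand(16, 1)

dls = build_dataloaders(fake_images[:12], fake_images[12:],
                        fake_targets[:12], fake_targets[12:],
                        batch_size=4, seed=SEED, transform=TRANSFORM)
X, y = next(iter(dls["train"]))
print(X.shape, y.shape)  # expected: torch.Size([4, 3, 224, 224]) torch.Size([4, 1])
```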
Build Model
def build_model(IL: int, seed: int): class Model(nn.Module): def __init__(self, IL=None): super(Model, self).__init__() self.features = models.densenet169(pretrained=True, progress=False) self.features = nn.Sequential(*[*self.features.children()][:-1]) self.freeze() self.features.add_module("Adaptive Average Pool", nn.AdaptiveAvgPool2d(output_size=(1, 1))) self.features.add_module("Flatten", nn.Flatten()) self.predictor = nn.Sequential() self.predictor.add_module("BN", nn.BatchNorm1d(num_features=IL, eps=1e-5)) self.predictor.add_module("FC", WN(nn.Linear(in_features=IL, out_features=1))) def freeze(self): for params in self.parameters(): params.requires_grad = False for names, params in self.named_parameters(): if re.match(r"features.0.denseblock4", names, re.IGNORECASE): params.requires_grad = True if re.match(r"features.0.norm5", names, re.IGNORECASE): params.requires_grad = True def get_optimizer(self, lr=1e-3, wd=0.0): params = [p for p in self.parameters() if p.requires_grad] return optim.SGD(params, lr=lr, momentum=0.9, weight_decay=wd) def forward(self, x1, x2=None): if x2 is not None: x1 = self.features(x1) x2 = self.features(x2) return self.predictor(x1), self.predictor(x2) else: x1 = self.features(x1) return self.predictor(x1) if verbose: breaker() print("Building Model ...") torch.manual_seed(seed) model = Model(IL=IL) return model
_____no_output_____
MIT
PF-2/Notebooks/Train/D169 Last Block (Rand Init) (SGD0.9) (10CV).ipynb
pchandrasekaran1595/PetFinder.my---Pawpularity-Contest
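A small check (illustrative only; the probe below is an addition, not part of the original notebook) to confirm that the freeze logic leaves only `denseblock4`, `norm5`, and the predictor head trainable is to count parameters by their `requires_grad` flag:

```python
# Count trainable vs. frozen parameters to verify the selective freeze.
probe = build_model(IL=NUM_FEATURES, seed=SEED)
trainable = sum(p.numel() for p in probe.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in probe.parameters() if not p.requires_grad)
print("Trainable: {:,} | Frozen: {:,}".format(trainable, frozen))

# Every trainable feature parameter should belong to denseblock4 or norm5.
for name, p in probe.named_parameters():
    if p.requires_grad and not name.startswith("predictor"):
        assert "denseblock4" in name or "norm5" in name, name
```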
Fit and Predict
def fit(model=None, optimizer=None, scheduler=None, epochs=None, early_stopping_patience=None, dataloaders=None, fold=None, verbose=False) -> tuple: name = "./Fold_{}_state.pt".format(fold) if verbose: breaker() print("Training Fold {}...".format(fold)) breaker() else: print("Training Fold {}...".format(fold)) Losses = [] bestLoss = {"train" : np.inf, "valid" : np.inf} start_time = time() for e in range(epochs): e_st = time() epochLoss = {"train" : np.inf, "valid" : np.inf} for phase in ["train", "valid"]: if phase == "train": model.train() else: model.eval() lossPerPass = [] for X, y in dataloaders[phase]: X, y = X.to(DEVICE), y.to(DEVICE) optimizer.zero_grad() with torch.set_grad_enabled(phase == "train"): output = model(X) loss = torch.nn.MSELoss()(output, y) if phase == "train": loss.backward() optimizer.step() lossPerPass.append(loss.item()) epochLoss[phase] = np.mean(np.array(lossPerPass)) Losses.append(epochLoss) if early_stopping_patience: if epochLoss["valid"] < bestLoss["valid"]: bestLoss = epochLoss BLE = e + 1 torch.save({"model_state_dict": model.state_dict(), "optim_state_dict": optimizer.state_dict()}, name) early_stopping_step = 0 else: early_stopping_step += 1 if early_stopping_step > early_stopping_patience: if verbose: print("\nEarly Stopping at Epoch {}".format(e)) break if epochLoss["valid"] < bestLoss["valid"]: bestLoss = epochLoss BLE = e + 1 torch.save({"model_state_dict": model.state_dict(), "optim_state_dict": optimizer.state_dict()}, name) if scheduler: scheduler.step(epochLoss["valid"]) if verbose: print("Epoch: {} | Train Loss: {:.5f} | Valid Loss: {:.5f} | Time: {:.2f} seconds".format(e+1, epochLoss["train"], epochLoss["valid"], time()-e_st)) if verbose: breaker() print("Best Validation Loss at Epoch {}".format(BLE)) breaker() print("Time Taken [{} Epochs] : {:.2f} minutes".format(len(Losses), (time()-start_time)/60)) breaker() print("Training Completed") breaker() return Losses, BLE, name ##################################################################################################### def predict_batch(model=None, dataloader=None, mode="test", path=None) -> np.ndarray: model.load_state_dict(torch.load(path, map_location=DEVICE)["model_state_dict"]) model.to(DEVICE) model.eval() y_pred = torch.zeros(1, 1).to(DEVICE) if re.match(r"valid", mode, re.IGNORECASE): for X, _ in dataloader: X = X.to(DEVICE) with torch.no_grad(): output = model(X) y_pred = torch.cat((y_pred, output.view(-1, 1)), dim=0) elif re.match(r"test", mode, re.IGNORECASE): for X in dataloader: X = X.to(DEVICE) with torch.no_grad(): output = model(X) y_pred = torch.cat((y_pred, output.view(-1, 1)), dim=0) return y_pred[1:].detach().cpu().numpy()
_____no_output_____
MIT
PF-2/Notebooks/Train/D169 Last Block (Rand Init) (SGD0.9) (10CV).ipynb
pchandrasekaran1595/PetFinder.my---Pawpularity-Contest
Train
def train(images: np.ndarray, targets: np.ndarray, n_splits: int, batch_size: int, lr: float, wd: float, epochs: int, early_stopping: int, patience=None, eps=None) -> list: metrics = [] KFold_start_time = time() breaker() print("Performing {} Fold CV ...".format(n_splits)) if verbose: pass else: breaker() fold = 1 for tr_idx, va_idx in KFold(n_splits=n_splits, shuffle=True, random_state=SEED).split(images): tr_images, va_images = images[tr_idx], images[va_idx] tr_targets, va_targets = targets[tr_idx], targets[va_idx] tr_targets = sc_y.fit_transform(tr_targets) va_targets = sc_y.transform(va_targets) dataloaders = build_dataloaders(tr_images, va_images, tr_targets, va_targets, batch_size, SEED, TRANSFORM) model = build_model(IL=NUM_FEATURES, seed=SEED).to(DEVICE) optimizer = model.get_optimizer(lr=lr, wd=wd) scheduler = None if isinstance(patience, int) and isinstance(eps, float): scheduler = model.get_plateau_scheduler(optimizer, patience, eps) L, _, name = fit(model=model, optimizer=optimizer, scheduler=scheduler, epochs=epochs, early_stopping_patience=early_stopping, dataloaders=dataloaders, fold=fold, verbose=verbose) y_pred = predict_batch(model=model, dataloader=dataloaders["valid"], mode="valid", path=name) RMSE = np.sqrt(mean_squared_error(sc_y.inverse_transform(y_pred), sc_y.inverse_transform(va_targets))) if verbose: print("Validation RMSE [Fold {}]: {:.5f}".format(fold, RMSE)) breaker() show_graphs(L) metrics_dict = {"Fold" : fold, "RMSE" : RMSE} metrics.append(metrics_dict) fold += 1 breaker() print("Total Time to {} Fold CV : {:.2f} minutes".format(n_splits, (time() - KFold_start_time)/60)) return metrics, (time() - KFold_start_time)/60 def main(): breaker() print("Clean Memory, {} Objects Collected ...".format(gc.collect())) ########### Params ########### if DEBUG: n_splits = 3 patience, eps = 5, 1e-8 epochs, early_stopping = 5, 5 batch_size = 64 lr = 1e-5 wd = 1e-3 else: n_splits = 10 patience, eps = 5, 1e-8 epochs, early_stopping = 25, 5 batch_size = 64 lr = 1e-5 wd = 1e-3 ############################## if verbose: breaker() print("Loading Data ...") feature_start_time = time() images = np.load(os.path.join(IMAGE_PATH, "Images_224x224.npy")) targets = get_targets(PATH) # Without Scheduler metrics, _ = train(images, targets, n_splits, batch_size, lr, wd, epochs, early_stopping, patience=None, eps=None) # # With Plateau Scheduler # metrics, _ = train(images, targets, n_splits, batch_size, lr, wd, epochs, early_stopping, patience=patience, eps=eps) rmse = [] breaker() for i in range(len(metrics)): print("Fold {}, RMSE: {:.5f}".format(metrics[i]["Fold"], metrics[i]["RMSE"])) rmse.append(metrics[i]["RMSE"]) best_index = rmse.index(min(rmse)) breaker() print("Best RMSE : {:.5f}".format(metrics[best_index]["RMSE"])) print("Avg RMSE : {:.5f}".format(sum(rmse) / len(rmse))) breaker() with open("metrics.pkl", "wb") as fp: pickle.dump(metrics, fp) main()
************************************************** Clean Memory, 63 Objects Collected ... ************************************************** Loading Data ... ************************************************** Performing 10 Fold CV ... ************************************************** Building Train and Validation DataLoaders ... ************************************************** Building Model ...
MIT
PF-2/Notebooks/Train/D169 Last Block (Rand Init) (SGD0.9) (10CV).ipynb
pchandrasekaran1595/PetFinder.my---Pawpularity-Contest
End
breaker() print("Notebook Runtime : {:.2f} minutes".format((time() - notebook_start_time)/60)) breaker()
************************************************** Notebook Runtime : 149.44 minutes **************************************************
MIT
PF-2/Notebooks/Train/D169 Last Block (Rand Init) (SGD0.9) (10CV).ipynb
pchandrasekaran1595/PetFinder.my---Pawpularity-Contest
Linear regression: direct closed-form (analytical) solution
df['x4'] = 1 X = df.iloc[:,(0,1,2,4)].values y = df.y.values
_____no_output_____
MIT
Linear_regression/Linear_regression.ipynb
xpgeng/exercises_of_machine_learning
$y = Xw$

$w = (X^T X)^{-1} X^T y$
inv_XX_T = inv(X.T.dot(X)) w = inv_XX_T.dot(X.T).dot(df.y.values) w
_____no_output_____
MIT
Linear_regression/Linear_regression.ipynb
xpgeng/exercises_of_machine_learning
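Explicitly inverting $X^T X$ works here, but it can be numerically fragile when columns are close to collinear. A hedged alternative sketch (using the same `X` and `y`; not part of the original notebook) is to solve the least-squares problem directly with `np.linalg.lstsq` or the pseudo-inverse:

```python
import numpy as np

# Least-squares solution without forming the explicit inverse of X^T X.
w_lstsq, residuals, rank, sv = np.linalg.lstsq(X, y)
print(w_lstsq)

# Equivalent route via the Moore-Penrose pseudo-inverse.
w_pinv = np.linalg.pinv(X).dot(y)
print(w_pinv)
```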
Results

w1 = 2.97396653, w2 = -0.54139002, w3 = 0.97132913, b = 2.03076198
qr(inv_XX_T) X.shape #solve(X,y) ## solve() only works for square coefficient matrices
_____no_output_____
MIT
Linear_regression/Linear_regression.ipynb
xpgeng/exercises_of_machine_learning
Gradient descent solution
- Choose the objective function sensibly; multiplying it by a suitable constant factor (here 1/(2*1000)) keeps the gradients well scaled.
- Verify that the gradient computation is correct...
def f(w,X,y): return ((X.dot(w)-y)**2/(2*1000)).sum() def grad_f(w,X,y): return (X.dot(w) - y).dot(X)/1000 w0 = np.array([100.0,100.0,100.0,100.0]) epsilon = 1e-10 alpha = 0.1 check_condition = 1 while check_condition > epsilon: w0 += -alpha*grad_f(w0,X,y) check_condition = abs(grad_f(w0,X,y)).sum() print w0
[ 2.97396671 -0.5414066 0.97132728 2.03076759]
MIT
Linear_regression/Linear_regression.ipynb
xpgeng/exercises_of_machine_learning
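Since the note above stresses verifying the gradient, here is a minimal finite-difference gradient check, assuming the `f`, `grad_f`, `X`, and `y` defined above (the test point `w_check` is arbitrary):

# Numerically check grad_f against central finite differences of f
def numerical_grad(w, X, y, h=1e-6):
    g = np.zeros_like(w)
    for i in range(len(w)):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += h
        w_minus[i] -= h
        g[i] = (f(w_plus, X, y) - f(w_minus, X, y)) / (2 * h)
    return g

w_check = np.array([1.0, 2.0, -1.0, 0.5])  # arbitrary test point
print abs(grad_f(w_check, X, y) - numerical_grad(w_check, X, y)).max()  # should be close to zero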
Solving with stochastic gradient descent- Stochastic gradient descent (SGD)- A fixed step size was used at first - starting at 0.1, the required precision could never be reached- so a condition on the objective was added to shrink the step size.
def cost_function(w,X,y):
    return (X.dot(w)-y)**2/2

def grad_cost_f(w,X,y):
    return (np.dot(X, w) - y)*X

w0 = np.array([1.0, 1.0, 1.0, 1.0])
epsilon = 1e-3
alpha = 0.01

# Generate a shuffled index array, used to sample the data points in random order.
random_index = np.arange(1000)
np.random.shuffle(random_index)

cost_value = np.inf  # Initialize the objective function value

while abs(grad_f(w0,X,y)).sum() > epsilon:
    for i in range(1000):
        w0 += -alpha*grad_cost_f(w0,X[random_index[i]],y[random_index[i]])
    # Track the trend of the objective; if the improvement falls below the threshold,
    # shrink the step size and keep iterating.
    difference = cost_value - f(w0, X, y)
    if difference < 1e-10:
        alpha *= 0.9
    cost_value = f(w0, X, y)

print w0
[ 2.97376767 -0.54075842 0.97217986 2.03067711]
MIT
Linear_regression/Linear_regression.ipynb
xpgeng/exercises_of_machine_learning
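An alternative to the threshold-based shrinking rule above is a standard decaying learning-rate schedule; a minimal sketch, assuming `X`, `y`, `f`, `grad_f`, and `grad_cost_f` as defined above (the schedule form `alpha0 / (1 + decay * t)` and its constants are illustrative choices, not part of the original notebook):

# SGD with a 1/t learning-rate decay instead of the manual shrinking rule
w0 = np.array([1.0, 1.0, 1.0, 1.0])
alpha0, decay = 0.05, 0.01
epsilon = 1e-3
random_index = np.arange(1000)

t = 0
while abs(grad_f(w0, X, y)).sum() > epsilon:
    np.random.shuffle(random_index)         # reshuffle every pass over the data
    for i in range(1000):
        alpha = alpha0 / (1.0 + decay * t)  # decaying step size
        w0 += -alpha * grad_cost_f(w0, X[random_index[i]], y[random_index[i]])
        t += 1
print w0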
--- Merge Datasets
combo = pd.merge(train_values, train_labels, on = 'building_id') combo.head(5) plt.figure (figsize=(5,10)); sns.heatmap(df.corr()[['damage_grade']].sort_values(by='damage_grade', ascending = False),annot=True ) sns.scatterplot(data=combo, x='age', y='damage_grade', hue='age');
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
---
#Baseline train_labels['damage_grade'].value_counts(normalize=True) le = LabelEncoder() train_enc = train_values.apply(le.fit_transform) train_enc #From Chris #X = train_enc #y = trainlabel['damage_grade'] #X_train, X_test, y_train, y_test = train_test_split(X,y,stratify=y, random_state=123) #pipe_forest = make_pipeline(StandardScaler(), DecisionTreeClassifier()) #params = {'decisiontreeclassifier__max_depth' : [2, 3, 4, 5]} #grid_forest = GridSearchCV(pipe_forest, param_grid = params) #grid_forest.fit(X_train,y_train) #grid_forest.score(X_test,y_test) # I got 0.646815042210284 #grid_forest.best_estimator_ #TTS X = train_enc y = train_labels['damage_grade'] #X_train, X_test, y_train, y_test = train_test_split(train_enc,train_labels, random_state=123 ) X_train, X_test, y_train, y_test = train_test_split(X,y,stratify=y, random_state=123)
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
Model
#from Hackathon2 #Cvect and logreg #pipe = make_pipeline(CountVectorizer(stop_words = 'english'), LogisticRegression(n_jobs=-1)) # #params = {'countvectorizer__max_features':[500, 1000, 15000, 2000, 2500]} # #grid=GridSearchCV(pipe, param_grid=params, n_jobs= -1) #grid.fit(X_train, y_train) #
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
logreg
#Cvect and logreg
pipe = make_pipeline(StandardScaler(), LogisticRegression(n_jobs=-1))
#
#params = {'countvectorizer__max_features':[500, 1000, 15000, 2000, 2500]}
#
#grid=GridSearchCV(pipe, n_jobs= -1)
pipe.fit(X_train, y_train)
pipe.score(X_train, y_train)
# Score the held-out test set with the model fit on the training data (do not refit on the test set)
pipe.score(X_test, y_test)
pipe.get_params().keys()

#LogisticRegression
pipe_lgr = make_pipeline(StandardScaler(), LogisticRegression(n_jobs = -1, max_iter = 1000))

params = {'logisticregression__C' : [0.1, 0.75, 1, 10],
          'logisticregression__solver' : ['newton-cg', 'lbfgs', 'liblinear']}

grid_lgr = GridSearchCV(pipe_lgr, param_grid = params)
grid_lgr.fit(X_train, y_train)

print(f'Train Score: {grid_lgr.score(X_train, y_train)}')
print(f'Test Score: {grid_lgr.score(X_test, y_test)}')

grid_lgr.best_params_
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
Modeling KNN
# define models and parameters # model = KNeighborsClassifier() # n_neighbors = range(1, 21, 2) # weights = ['uniform', 'distance'] # #metric = ['euclidean', 'manhattan', 'minkowski'] # metric = ['euclidean'] # # define grid search # grid = dict(n_neighbors=n_neighbors,weights=weights,metric=metric) # #cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) # #grid_search = GridSearchCV(estimator=model, param_grid=grid, n_jobs=-1, cv=cv, scoring='accuracy',error_score=0) # grid_search = GridSearchCV(estimator=model, param_grid=grid, n_jobs=-1, scoring='accuracy',error_score=0) # grid_result = grid_search.fit(X, y) # # summarize results # print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) # means = grid_result.cv_results_['mean_test_score'] # stds = grid_result.cv_results_['std_test_score'] # params = grid_result.cv_results_['params'] # for mean, stdev, param in zip(means, stds, params): # print("%f (%f) with: %r" % (mean, stdev, param)) #Basic KNN X = train_enc y = train_labels['damage_grade'] X_train, X_test, y_train, y_test = train_test_split(X,y,stratify=y, random_state=123) pipe_knn = make_pipeline(StandardScaler(),KNeighborsClassifier(n_jobs=-1)) pipe_knn.fit(X_train, y_train) pipe_knn.score(X_train, y_train) print(f'Train Score: {pipe_knn.score(X_train, y_train)}') print(f'Test Score: {pipe_knn.score(X_test, y_test)}')
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
Trying Veronica's code - KNN testing
pipe_knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_jobs = -1)) # n_neighbors must be odd to avoid an even split #Note: tried leaf size and p, but it didn't give us any value params = {'kneighborsclassifier__n_neighbors' : [5, 7, 9, 11]} grid_knn = GridSearchCV(pipe_knn, param_grid = params) grid_knn.fit(X_train, y_train) print(f'Train Score: {grid_knn.score(X_train, y_train)}') print(f'Test Score: {grid_knn.score(X_test, y_test)}') grid_knn.best_params_
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
5/13 per Jacob, use OHE instead of LabelEncoding LOG with OHE
X = train_values y = train_labels['damage_grade'] X_train, X_test, y_train, y_test = train_test_split(X,y,stratify=y, random_state=123)
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
---
#Cvect and logreg #define X and y X = train_values y = train_labels['damage_grade'] X_train, X_test, y_train, y_test = train_test_split(X,y,stratify=y, random_state=123) #Create pipeline pipe = make_pipeline(OneHotEncoder(),StandardScaler(with_mean=False), LogisticRegression(n_jobs=-1)) params = {'logisticregression__C' : [0.1, 0.75, 1, 10], #'logisticregression__solver' : ['newton-cg', 'lbfgs', 'liblinear'] } grid_lgr = GridSearchCV(pipe, param_grid = params) grid_lgr.fit(X_train, y_train) grid_lgr.score(X_test, y_test) print(f'Train Score: {grid_lgr.score(X_train, y_train)}') print(f'Test Score: {grid_lgr.score(X_test, y_test)}')
Train Score: 0.5925300588385777 Test Score: 0.585111281657713
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
KNN with OHE
#train_values = train_values.head(int(len(train_values) * 0.1)) #train_labels = train_labels.head(int(len(train_labels) * 0.1)) X = train_values y = train_labels['damage_grade'] X_train, X_test, y_train, y_test = train_test_split(X,y,stratify=y, random_state=123) pipe_knn = make_pipeline(OneHotEncoder(),StandardScaler(), KNeighborsClassifier(n_jobs = -1)) # n_neighbors must be odd to avoid an even split params = {'kneighborsclassifier__n_neighbors' : [5, 7, 9, 11]} #'kneighborsclassifier__leaf_size': [1,5,10,30]} #define parameters for hypertuning #params = { # 'n_neighbors': [5, 7, 9, 11], # 'leaf_size': (1,30), # 'p': (1,2) grid_knn = GridSearchCV(pipe_knn, param_grid = params) grid_knn.fit(X_train, y_train) print(f'Train Score: {grid_knn.score(X_train, y_train)}') print(f'Test Score: {grid_knn.score(X_test, y_test)}') grid_knn.best_params_
Train Score: 0.6720900486057815 Test Score: 0.575287797390637
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
---
#https://medium.datadriveninvestor.com/k-nearest-neighbors-in-python-hyperparameters-tuning-716734bc557f
#List Hyperparameters that we want to tune.
leaf_size = list(range(1,50))
n_neighbors = list(range(1,30))
p = [1,2]
#Convert to dictionary
hyperparameters = dict(leaf_size=leaf_size, n_neighbors=n_neighbors, p=p)
#Create new KNN object
knn_2 = KNeighborsClassifier()
#Use GridSearch
clf = GridSearchCV(knn_2, hyperparameters, cv=10)

#Alternative grid with an explicit estimator (dict and call closed so the cell runs)
estimator_KNN = KNeighborsClassifier()
parameters_KNN = {
    'n_neighbors': (1, 10, 1),
    'leaf_size': (20, 40, 1),
    'p': (1, 2),
    'weights': ('uniform', 'distance'),
    'metric': ('minkowski', 'chebyshev')
}
# with GridSearch
grid_search_KNN = GridSearchCV(
    estimator=estimator_KNN,
    param_grid=parameters_KNN,
    scoring='accuracy',
    n_jobs=-1,
    cv=5
)
_____no_output_____
CC0-1.0
Workspace/.ipynb_checkpoints/EricCheng-checkpoint.ipynb
vleong1/modeling_earthquake_damage
LDA TrainingThe LDA training algorithm from Parameter estimation for text analysis
import random import numpy as np from collections import defaultdict, OrderedDict from types import SimpleNamespace from tqdm.notebook import tqdm from visualize import visualize_topic_word # === corpus loading === class NeurIPSCorpus: def __init__(self, data_path, num_topics, mode, start_doc_idx=0, max_num_docs=100, max_num_words=10000, max_doc_length=1000, train_corpus=None): self.docs = [] self.word2id = OrderedDict() self.max_doc_length = max_doc_length self.mode = mode # only keep the most frequent words if self.mode == "train": word2cnt = defaultdict(int) with open(data_path) as fin: for i, line in enumerate(list(fin)[::-1]): # use more recent papers if i >= max_num_docs: break for word in line.strip().split(): word2cnt[word] += 1 word2cnt = sorted(list(word2cnt.items()), key=lambda x: x[1], reverse=True) if len(word2cnt) > max_num_words: word2cnt = word2cnt[:max_num_words] word2cnt = dict(word2cnt) # read in the doc and convert words to integers with open(data_path) as fin: for i, line in enumerate(list(fin)[::-1]): # use more recent papers if i < start_doc_idx: continue if i - start_doc_idx >= max_num_docs: break doc = [] for word in line.strip().split(): if len(doc) >= self.max_doc_length: break if self.mode == "train": if word not in word2cnt: continue if word not in self.word2id: self.word2id[word] = len(self.word2id) doc.append(self.word2id[word]) else: if word not in train_corpus.word2id: continue doc.append(train_corpus.word2id[word]) self.docs.append(doc) self.num_docs = len(self.docs) self.num_topics = num_topics self.num_words = len(self.word2id) self.id2word = {v: k for k, v in self.word2id.items()} print( "num_docs:", self.num_docs, "num_topics:", self.num_topics, "num_words:", self.num_words ) corpus = NeurIPSCorpus( data_path="data/papers.txt", mode="train", num_topics=10, start_doc_idx=0, max_num_docs=1000, max_num_words=10000, max_doc_length=200, ) hparams = SimpleNamespace( alpha=np.ones([corpus.num_topics], dtype=float) / corpus.num_topics, beta = np.ones([corpus.num_words], dtype=float) / corpus.num_topics, gibbs_sampling_max_iters=500, ) # === initialization === print("Initializing...", flush=True) n_doc_topic = np.zeros([corpus.num_docs, corpus.num_topics], dtype=float) # n_m^(k) n_topic_word = np.zeros([corpus.num_topics, corpus.num_words], dtype=float) # n_k^(t) z_doc_word = np.zeros([corpus.num_docs, corpus.max_doc_length], dtype=int) for doc_i in range(corpus.num_docs): for j, word_j in enumerate(corpus.docs[doc_i]): topic_ij = random.randint(0, corpus.num_topics - 1) n_doc_topic[doc_i, topic_ij] += 1 n_topic_word[topic_ij, word_j] += 1 z_doc_word[doc_i, j] = topic_ij # === Gibbs sampling === print("Gibbs sampling...", flush=True) for iteration in tqdm(range(hparams.gibbs_sampling_max_iters)): for doc_i in range(corpus.num_docs): for j, word_j in enumerate(corpus.docs[doc_i]): # remove the old assignment topic_ij = z_doc_word[doc_i, j] n_doc_topic[doc_i, topic_ij] -= 1 n_topic_word[topic_ij, word_j] -= 1 # compute the new assignment p_doc_topic = (n_doc_topic[doc_i, :] + hparams.alpha) \ / np.sum(n_doc_topic[doc_i] + hparams.alpha) p_topic_word = (n_topic_word[:, word_j] + hparams.beta[word_j]) \ / np.sum(n_topic_word + hparams.beta, axis=1) p_topic = p_doc_topic * p_topic_word p_topic /= np.sum(p_topic) # record the new assignment new_topic_ij = np.random.choice(np.arange(corpus.num_topics), p=p_topic) n_doc_topic[doc_i, new_topic_ij] += 1 n_topic_word[new_topic_ij, word_j] += 1 z_doc_word[doc_i, j] = new_topic_ij if iteration % 50 == 0: print(f"Iter 
[{iteration}]===") # === Check convergence and read out parameters === theta = (n_doc_topic + hparams.alpha) / np.sum(n_doc_topic + hparams.alpha, axis=1, keepdims=True) phi = (n_topic_word + hparams.beta) / np.sum(n_topic_word + hparams.beta, axis=1, keepdims=True) all_top_words = [] all_top_probs = [] for topic in range(corpus.num_topics): top_words = np.argsort(phi[topic])[::-1][:10] top_probs = phi[topic, top_words] top_words = [corpus.id2word[word] for word in top_words] all_top_words.append(top_words) all_top_probs.append(top_probs) print(f"Topic {topic}:", top_words) visualize_topic_word(all_top_words, all_top_probs)
Initializing... Gibbs sampling...
MIT
Understanding_the_LDA_Algorithm_20xx.ipynb
mistylight/Understanding_the_LDA_Algorithm
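For reference, the per-token update that the sampling loop above implements is the standard collapsed Gibbs sampling formula (written here for document $m$, word $t$, topic $k$, with the current assignment of token $i$ removed from the counts $n$):

$$ p(z_i = k \mid \mathbf{z}_{\neg i}, \mathbf{w}) \;\propto\; \frac{n_m^{(k)} + \alpha_k}{\sum_{k'} \left( n_m^{(k')} + \alpha_{k'} \right)} \cdot \frac{n_k^{(t)} + \beta_t}{\sum_{t'} \left( n_k^{(t')} + \beta_{t'} \right)} $$

The first factor corresponds to `p_doc_topic` and the second to `p_topic_word` in the code above.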
Inference on unseen documents
# === inference on unseen documents === test_corpus = NeurIPSCorpus( data_path="data/papers.txt", mode="test", num_topics=10, start_doc_idx=1000, max_num_docs=5, max_num_words=10000, max_doc_length=200, train_corpus=corpus, ) # === inference via Gibbs sampling === for i, doc in enumerate(test_corpus.docs): print(f"\nTest Doc [{i}] ===") doc_i = 0 # only infer 1 test doc at a time test_n_doc_topic = np.zeros([1, corpus.num_topics], dtype=float) test_n_topic_word = np.zeros([corpus.num_topics, corpus.num_words], dtype=float) test_z_doc_word = np.zeros([1, corpus.max_doc_length], dtype=int) print(" ".join([corpus.id2word[x] for x in doc])) for j, word_j in enumerate(doc): topic_ij = random.randint(0, corpus.num_topics - 1) test_n_doc_topic[doc_i, topic_ij] += 1 test_n_topic_word[topic_ij, word_j] += 1 test_z_doc_word[doc_i, j] = topic_ij for iteration in tqdm(range(100)): for j, word_j in enumerate(doc): # remove the old assignment topic_ij = test_z_doc_word[doc_i, j] test_n_doc_topic[doc_i, topic_ij] -= 1 test_n_topic_word[topic_ij, word_j] -= 1 # compute the new assignment (new sampling formula!) p_doc_topic = (test_n_doc_topic[doc_i, :] + hparams.alpha) \ / np.sum(test_n_doc_topic[doc_i] + hparams.alpha) p_topic_word = (test_n_topic_word[:, word_j] + n_topic_word[:, word_j] + hparams.beta[word_j]) \ / np.sum(test_n_topic_word + n_topic_word + hparams.beta, axis=1) p_topic = p_doc_topic * p_topic_word p_topic /= np.sum(p_topic) # record the new assignment new_topic_ij = np.random.choice(np.arange(corpus.num_topics), p=p_topic) test_n_doc_topic[doc_i, new_topic_ij] += 1 test_n_topic_word[new_topic_ij, word_j] += 1 test_z_doc_word[doc_i, j] = new_topic_ij # === Check convergence and read out parameters === test_theta = (test_n_doc_topic + hparams.alpha) / np.sum(test_n_doc_topic + hparams.alpha, axis=1, keepdims=True) test_phi = (test_n_topic_word + hparams.beta) / np.sum(test_n_topic_word + hparams.beta, axis=1, keepdims=True) print("Topic distribution:", [float(f"{x:.4f}") for x in test_theta[0]]) print("Top 3 topics:", np.argsort(test_theta[0])[::-1][:3])
num_docs: 5 num_topics: 10 num_words: 0 Test Doc [0] === inference graphical models semidefinite programming a microsoft research cs toronto edu mit microsoft research mit edu andrea montanari stanford university montanari stanford edu abstract maximum posteriori probability map inference graphical model amount solve graph structure combinatorial optimization problem popular inference algorithm belief propagation bp generalize belief propagation intimately related linear programming lp relaxation adams hierarchy despite popularity algorithm understand sum square hierarchy base semidefinite programming provide superior guarantee unfortunately relaxation graph n vertex require solve n d variable d degree hierarchy practice d approach scale ten variable paper propose binary relaxation map inference hierarchy innovation focus computational efficiency firstly analogy bp variant introduce decision variable correspond region graphical model secondly solve result non convex style method develop sequential procedure demonstrate result algorithm solve problem ten thousand variable minute outperform bp practical problem image denoising spin glass finally specific graph type establish sufficient condition tightness propose partial relaxation introduction graphical model provide powerful framework analyze system comprise large number interact variable inference graphical model crucial scientific methodology application variety field include causal inference computer vision statistical physics information theory genome research wj kf mm paper propose class inference algorithm pairwise graphical model model fully specify assign finite domain x variable ii
MIT
Understanding_the_LDA_Algorithm_20xx.ipynb
mistylight/Understanding_the_LDA_Algorithm
Set plot font size
FS = 18
_____no_output_____
BSD-3-Clause
190211_evolve_p3_p7_compute_best_conv_ts_from_err_and_p-val_info.ipynb
amwilson149/baby-andross
Get dictionary with information about errors and p-values during convergent time steps
fname = './data/p3_p7_evolve_results/190211_errs_per_conv_ts_pr_0.005_g_1.1_niter_100.json' with open(fname,'r') as f: c_err_results = json.loads(f.read()) # Inspect keys print(c_err_results.keys()) # Go through simulation iterations and compute the min, max, and best # (where errors are minimized and p-values are maximized) time step for each itercurr = [] min_c_ts = [] max_c_ts = [] mean_c_ts = [] best_c_ts = [] iters = list(set(c_err_results['iteration'])) for ic in iters: rowscurr = [i for i,q in enumerate(c_err_results['iteration']) if q == ic] encfscurr = [c_err_results['err_ncfs'][q] for q in rowscurr] enpcscurr = [c_err_results['err_npcs'][q] for q in rowscurr] pnsynscurr = [c_err_results['p_nsyns'][q] for q in rowscurr] pnsynspcfcurr = [c_err_results['p_nsynspcf'][q] for q in rowscurr] pnpcspcfcurr = [c_err_results['p_npcspcf'][q] for q in rowscurr] pncfsppccurr = [c_err_results['p_ncfsppc'][q] for q in rowscurr] tscurr = [c_err_results['time_step'][q] for q in rowscurr] itercurr.append(ic) min_c_ts.append(np.min(tscurr)) max_c_ts.append(np.max(tscurr)) mean_c_ts.append(np.mean(tscurr)) b_encfs = [i for i,q in enumerate(encfscurr) if q == np.min(encfscurr)] b_enpcs = [i for i,q in enumerate(enpcscurr) if q == np.min(enpcscurr)] b_pnsyns = [i for i,q in enumerate(pnsynscurr) if q == np.max(pnsynscurr)] b_pnsynspcf = [i for i,q in enumerate(pnsynspcfcurr) if q == np.max(pnsynspcfcurr)] b_pnpcspcf = [i for i,q in enumerate(pnpcspcfcurr) if q == np.max(pnpcspcfcurr)] b_pncfsppc = [i for i,q in enumerate(pncfsppccurr) if q == np.max(pncfsppccurr)] tben = [tscurr[q] for q in b_encfs] tbep = [tscurr[q] for q in b_enpcs] tpnsyns = [tscurr[q] for q in b_pnsyns] tpnspcf = [tscurr[q] for q in b_pnsynspcf] tpnpcpcf = [tscurr[q] for q in b_pnpcspcf] tpncfppc = [tscurr[q] for q in b_pncfsppc] # Find the time step where most of these conditions are true b_ts = st.mode(tben + tbep + tpnsyns + tpnspcf + tpnpcpcf + tpncfppc)[0][0] best_c_ts.append(b_ts) plt.figure(figsize=(10,10)) plt.hist(best_c_ts) plt.xlabel('time step of best convergence',fontsize=FS) plt.ylabel('number of occurrences',fontsize=FS) plt.title('Best convergence times for iterations of simulation with pr 0.005, g 1.1',fontsize=FS) plt.show() print('mean best convergence time = {0} +/- {1} time steps'.format(np.mean(best_c_ts),st.sem(best_c_ts))) plt.figure(figsize=(10,10)) plt.hist(mean_c_ts) plt.xlabel('mean time step of convergence',fontsize=FS) plt.ylabel('number of occurrences',fontsize=FS) plt.title('Mean convergence times for iterations of simulation with pr 0.005, g 1.1',fontsize=FS) plt.show() print('mean of mean convergent time steps = {0}'.format(np.mean(mean_c_ts))) np.max(iters)
_____no_output_____
BSD-3-Clause
190211_evolve_p3_p7_compute_best_conv_ts_from_err_and_p-val_info.ipynb
amwilson149/baby-andross
[View in Colaboratory](https://colab.research.google.com/github/schwaaweb/aimlds1_11-NLP/blob/master/M11_A_DJ_NLP_Assignment.ipynb) Assignment: Natural Language Processing In this assignment, you will work with a data set that contains restaurant reviews. You will use a Naive Bayes model to classify the reviews (positive or negative) based on the words in the review. The main objective of this assignment is to gauge the performance of a Naive Bayes model by using a confusion matrix; however, in order to ascertain the efficacy of the model, you will have to first train the Naive Bayes model with a portion (i.e. 70%) of the underlying data set and then test it against the remainder of the data set. Before you can train the model, you will have to go through a sequence of steps to get the data ready for training the model. Steps you may need to perform:
**1)** Read in the list of restaurant reviews
**2)** Convert the reviews into a list of tokens
**3)** You will most likely have to eliminate stop words
**4)** You may have to utilize stemming or lemmatization to determine the base form of the words
**5)** You will have to vectorize the data (i.e. construct a document term/word matrix) wherein select words from the reviews will constitute the columns of the matrix and the individual reviews will be part of the rows of the matrix
**6)** Create 'Train' and 'Test' data sets (i.e. 70% of the underlying data set will constitute the training set and 30% of the underlying data set will constitute the test set)
**7)** Train a Naive Bayes model on the Train data set and test it against the test data set
**8)** Construct a confusion matrix to gauge the performance of the model
**Dataset**: https://www.dropbox.com/s/yl5r7kx9nq15gmi/Restaurant_Reviews.tsv?raw=1 **1) **Read in the list of restaurant reviews
#%%time #!wget -c https://www.dropbox.com/s/yl5r7kx9nq15gmi/Restaurant_Reviews.tsv?raw=1 && mv Restaurant_Reviews.tsv?raw=1 Restaurant_Reviews.tsv !ls -lh *tsv %%time import numpy as np import pandas as pd import matplotlib.pyplot as plt import re import string import nltk nltk.download('all') df = pd.read_csv('Restaurant_Reviews.tsv', sep='\t') df.head() df.tail()
_____no_output_____
Unlicense
M11_A_DJ_NLP_Assignment.ipynb
schwaaweb/aimlds1_11-NLP
**2)** Convert the reviews into a list of tokens
review = df['Review'] # dropping the like here print(review) len(review)
0 Wow... Loved this place. 1 Crust is not good. 2 Not tasty and the texture was just nasty. 3 Stopped by during the late May bank holiday of... 4 The selection on the menu was great and so wer... 5 Now I am getting angry and I want my damn pho. 6 Honeslty it didn't taste THAT fresh.) 7 The potatoes were like rubber and you could te... 8 The fries were great too. 9 A great touch. 10 Service was very prompt. 11 Would not go back. 12 The cashier had no care what so ever on what I... 13 I tried the Cape Cod ravoli, chicken, with cra... 14 I was disgusted because I was pretty sure that... 15 I was shocked because no signs indicate cash o... 16 Highly recommended. 17 Waitress was a little slow in service. 18 This place is not worth your time, let alone V... 19 did not like at all. 20 The Burrittos Blah! 21 The food, amazing. 22 Service is also cute. 23 I could care less... The interior is just beau... 24 So they performed. 25 That's right....the red velvet cake.....ohhh t... 26 - They never brought a salad we asked for. 27 This hole in the wall has great Mexican street... 28 Took an hour to get our food only 4 tables in ... 29 The worst was the salmon sashimi. ... 970 I immediately said I wanted to talk to the man... 971 The ambiance isn't much better. 972 Unfortunately, it only set us up for disapppoi... 973 The food wasn't good. 974 Your servers suck, wait, correction, our serve... 975 What happened next was pretty....off putting. 976 too bad cause I know it's family owned, I real... 977 Overpriced for what you are getting. 978 I vomited in the bathroom mid lunch. 979 I kept looking at the time and it had soon bec... 980 I have been to very few places to eat that und... 981 We started with the tuna sashimi which was bro... 982 Food was below average. 983 It sure does beat the nachos at the movies but... 984 All in all, Ha Long Bay was a bit of a flop. 985 The problem I have is that they charge $11.99 ... 986 Shrimp- When I unwrapped it (I live only 1/2 a... 987 It lacked flavor, seemed undercooked, and dry. 988 It really is impressive that the place hasn't ... 989 I would avoid this place if you are staying in... 990 The refried beans that came with my meal were ... 991 Spend your money and time some place else. 992 A lady at the table next to us found a live gr... 993 the presentation of the food was awful. 994 I can't tell you how disappointed I was. 995 I think food should have flavor and texture an... 996 Appetite instantly gone. 997 Overall I was not impressed and would not go b... 998 The whole experience was underwhelming, and I ... 999 Then, as if I hadn't wasted enough of my life ... Name: Review, Length: 1000, dtype: object
Unlicense
M11_A_DJ_NLP_Assignment.ipynb
schwaaweb/aimlds1_11-NLP
**3) **You will most likely have to eliminate stop words **4)** You may have to utilize stemming or lemmatization to determine the base form of the words
stopwords = nltk.corpus.stopwords.words('english') ps = nltk.PorterStemmer() #Elmiminate punctations #Tokenize based on whitespace #Stem the text #Remove stopwords def process_text(txt): eliminate_punct = "".join([word.lower() for word in txt if word not in string.punctuation]) tokens = re.split('\W+', txt) txt = [ps.stem(word) for word in tokens if word not in stopwords] return txt df['clean_review'] = df['Review'].apply(lambda x: process_text(x)) df.head() import gensim # Use the Gensim document to create a dictionary - a dictionary maps every word to a number dictionary = gensim.corpora.Dictionary(df['clean_review']) # Examine the length of the dictionary num_of_words = len(dictionary) print("# of words in dictionary: {}".format(num_of_words)) #for index,word in dictionary.items(): # print(index,word) print(dictionary) #print(dictionary.token2id)
_____no_output_____
Unlicense
M11_A_DJ_NLP_Assignment.ipynb
schwaaweb/aimlds1_11-NLP
**5) **You will have to vectorize the data (i.e. construct a document term/word matix) wherein select words from the reviews will constitute the columns of the matrix and the individual reviews will be part of the rows of the matrix
from pprint import pprint %%time from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer def cv(data): count_vectorizer = CountVectorizer() emb = count_vectorizer.fit_transform(data) return emb, count_vectorizer list_corpus = df["clean_review"].tolist() list_labels = df["Liked"].tolist() X_train, X_test, y_train, y_test = train_test_split(list_corpus, list_labels, test_size=0.3, random_state=42) #X_train_counts, count_vectorizer = cv(X_train) #X_test_counts = count_vectorizer.transform(X_test) #pprint(X_train) #from sklearn.feature_extraction.text import CountVectorizer #count_vect = CountVectorizer(analyzer=process_text, max_features=1668) #W_counts = count_vect.fit_transform(df['clean_review']) #print(W_counts.shape) #print(count_vect.get_feature_names()) %%time corpus = [dictionary.doc2bow(text) for text in list_corpus] tfidf = gensim.models.TfidfModel(corpus) corpus_tfidf = tfidf[corpus] index = gensim.similarities.MatrixSimilarity(tfidf[corpus]) sims = index[corpus_tfidf] #for vector in corpus: # print(vector) print(sims.shape)
(1000, 1000)
Unlicense
M11_A_DJ_NLP_Assignment.ipynb
schwaaweb/aimlds1_11-NLP
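Steps 6-8 of the assignment (train/test split, Naive Bayes, confusion matrix) are not implemented above; a minimal sketch on a bag-of-words representation, assuming `df['clean_review']` and `df['Liked']` as built earlier (the `CountVectorizer` defaults and the 70/30 split here are illustrative choices):

# Steps 6-8: vectorize, split 70/30, train Multinomial Naive Bayes, report a confusion matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, accuracy_score

docs = df['clean_review'].apply(lambda tokens: " ".join(tokens))  # re-join the stemmed tokens
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)                                # document-term matrix
y = df['Liked']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

nb = MultinomialNB()
nb.fit(X_train, y_train)
y_pred = nb.predict(X_test)

print(confusion_matrix(y_test, y_pred))   # rows: actual, columns: predicted
print(accuracy_score(y_test, y_pred))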
Machine Learning Engineer Nanodegree Unsupervised Learning Project 3: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis — with focus instead on the six product categories recorded for customers.Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
# Import libraries necessary for this project import numpy as np import pandas as pd import renders as rs from IPython.display import display # Allows the use of display() for DataFrames # Show matplotlib plots inline (nicely formatted in the notebook) %matplotlib inline # Load the wholesale customers dataset try: data = pd.read_csv("customers.csv") data.drop(['Region', 'Channel'], axis = 1, inplace = True) print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape) except: print "Dataset could not be loaded. Is the dataset missing?"
Wholesale customers dataset has 440 samples with 6 features each.
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Data ExplorationIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.
# Display a description of the dataset display(data.describe())
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Implementation: Selecting SamplesTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
# TODO: Select three indices of your choice you wish to sample from the dataset indices = [0, 15, 45] # Create a DataFrame of the chosen samples samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True) print "Chosen samples of wholesale customers dataset:" display(samples)
Chosen samples of wholesale customers dataset:
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Question 1Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. *What kind of establishment (customer) could each of the three samples you've chosen represent?* **Hint:** Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant. **Answer:** Customer 0 - Seems to be a cafe or restaurant, given higher than average consumption of Milk and Grocery and lower consumption of the other products. Customer 15 - The only product near the mean for this customer is "Fresh". In general there is low consumption of almost all other products (around the 50th percentile). This customer is probably a small grocery shop. Customer 45 - Very high consumption of Milk, groceries and detergents, well above average, suggests this is probably a big restaurant or bakery. Implementation: Feature RelevanceOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.In the code block below, you will need to implement the following: - Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function. - Use `sklearn.cross_validation.train_test_split` to split the dataset into training and testing sets. - Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`. - Import a decision tree regressor, set a `random_state`, and fit the learner to the training data. - Report the prediction score of the testing set using the regressor's `score` function.
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature new_data = data.drop('Detergents_Paper',axis=1) # TODO: Split the data into training and testing sets using the given feature as the target from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(new_data, data['Detergents_Paper'], test_size = 0.25, random_state = 0) # TODO: Create a decision tree regressor and fit it to the training set from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor() # TODO: Report the score of the prediction using the testing set from sklearn.metrics import r2_score model = regressor.fit(X_train,y_train) score = r2_score(y_test, model.predict(X_test)) print (score)
0.671835598453
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
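The same relevance check can be repeated for every category; a minimal sketch, assuming `data` as loaded above (looping over all features and the `random_state` value are illustrative, not part of the original solution):

# Repeat the dropped-feature prediction test for each of the six categories
from sklearn.cross_validation import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

for feature in data.columns:
    X_other = data.drop(feature, axis = 1)
    X_tr, X_te, y_tr, y_te = train_test_split(X_other, data[feature], test_size = 0.25, random_state = 0)
    reg = DecisionTreeRegressor(random_state = 0).fit(X_tr, y_tr)
    print "{}: R^2 = {:.3f}".format(feature, r2_score(y_te, reg.predict(X_te)))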
Question 2*Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?* **Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data. **Answer:** I chose Detergents_Paper for prediction. The R^2 score achieved was 0.67, which shows that this feature is dependent on some (non-linear) combination of other features. Since this feature is not independent of the other features, it may not provide unique information about customers' spending habits. Visualize Feature DistributionsTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
# Produce a scatter matrix for each pair of features in the data pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Question 3*Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?* **Hint:** Is the data normally distributed? Where do most of the data points lie? **Answer:** The figure above shows a strong correlation between Detergents_Paper and Grocery. There are also weaker correlations between Milk and Detergents_Paper, and between Milk and Grocery. This in fact confirms my suspicion of Detergents_Paper being dependent on other features. The data is definitely not normally distributed, and most of it is clustered near the origin. Data PreprocessingIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature ScalingIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.In the code block below, you will need to implement the following: - Assign a copy of the data to `log_data` after applying a logarithm scaling. Use the `np.log` function for this. - Assign a copy of the sample data to `log_samples` after applying a logarithm scaling. Again, use `np.log`.
# TODO: Scale the data using the natural logarithm log_data = np.log(data) # TODO: Scale the sample data using the natural logarithm log_samples = np.log(samples) # Produce a scatter matrix for each pair of newly-transformed features pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
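The Box-Cox transform mentioned above can be applied per feature as an alternative to the natural logarithm; a minimal sketch, assuming `data` as loaded above (this is not part of the required implementation):

# Box-Cox power transform as an alternative to np.log (requires strictly positive values)
from scipy import stats

boxcox_data = data.copy()
for feature in data.columns:
    transformed, lmbda = stats.boxcox(data[feature])
    boxcox_data[feature] = transformed
    print "{}: best lambda = {:.3f}".format(feature, lmbda)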
ObservationAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
# Display the log-transformed sample data display(log_samples)
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Implementation: Outlier DetectionDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identfying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this. - Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`. - Assign the calculation of an outlier step for the given feature to `step`. - Optionally remove data points from the dataset by adding indices to the `outliers` list.**NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable `good_data`.
from sets import Set outliers_indices = {} # For each feature find the data points with extreme high or low values for feature in log_data.keys(): # TODO: Calculate Q1 (25th percentile of the data) for the given feature Q1 = np.percentile(log_data[feature], 25) # TODO: Calculate Q3 (75th percentile of the data) for the given feature Q3 = np.percentile(log_data[feature], 75) # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range) step = 1.5 * (Q3 - Q1) # Display the outliers print "Data points considered outliers for the feature '{}':".format(feature) outlier = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))] outliers_indices[feature] = Set(outlier.index) display(outlier) # Find outlier in all the Feature consistent_outliers = Set(outliers_indices['Fresh']) for feature in outliers_indices.keys(): consistent_outliers.intersection_update(feature) #print ("Outlier in all of the features: " + str(consistent_outliers)) # Create histogram for outliers => map of customer index to num of outlier features hist_outliers = {} for feature in outliers_indices.keys(): for idx in outliers_indices[feature]: hist_outliers[idx] = hist_outliers[idx] + 1 if idx in hist_outliers.keys() else 1 # Find out liers in more than one feature twice_outliers = [key for key,item in hist_outliers.iteritems() if item > 1] # print twice_outliers # OPTIONAL: Select the indices for data points you wish to remove outliers = twice_outliers # Remove the outliers, if any were specified good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Data points considered outliers for the feature 'Fresh':
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Question 4*Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.* **Answer:** Datapoints which were outliers in the more than one feature should be considered outliers. Such points have been assigned to outliers and hence have been removed from dataset. There is no datapoint which is outlier in all the features. Feature TransformationIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers. Implementation: PCANow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.In the code block below, you will need to implement the following: - Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`. - Apply a PCA transformation of the sample log-data `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features from sklearn.decomposition import PCA pca = PCA(n_components=6) pca.fit(good_data) # TODO: Transform the sample log-data using the PCA fit above pca_samples = pca.transform(log_samples) # Generate PCA results plot pca_results = rs.pca_results(good_data, pca) print pca.explained_variance_ratio_.cumsum()
[ 0.44302505 0.70681723 0.82988103 0.93109011 0.97959207 1. ]
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Question 5*How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.* **Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the individual feature weights. **Answer:** As shown above, approximately 44% of the variance is explained by the first principal component, and approximately 70% in total by the first two. The first four components together explain approximately 93% of the variance. Dim 1 - The prevalent components in this dimension are Milk, Grocery and Detergents_Paper. This makes sense, as the visualizations in the scatter matrix showed the strong pair-wise dependence between these three features. Dim 2 - Fresh, Frozen, and Delicatessen are the most prominent components in this dimension; these did not show correlation with the components of the first dimension in the scatter matrix. Dim 3 - Fresh and Delicatessen seem to be the major components here, capturing their negative correlation. Dim 4 - This dimension shows a negative correlation between the Frozen and Delicatessen components. ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
# Display sample log-data after having a PCA transformation applied display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Implementation: Dimensionality ReductionWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with `good_data` to `pca`. - Apply a PCA transformation of `good_data` using `pca.transform`, and assign the reuslts to `reduced_data`. - Apply a PCA transformation of the sample log-data `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
# TODO: Apply PCA by fitting the good data with only two dimensions pca = PCA(n_components=2).fit(good_data) # TODO: Transform the good data using the PCA fit above reduced_data = pca.transform(good_data) # TODO: Transform the sample log-data using the PCA fit above pca_samples = pca.transform(log_samples) # Create a DataFrame for the reduced data reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
# Display sample log-data after applying PCA transformation in two dimensions display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
ClusteringIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. Question 6*What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?* **Answer:** The major advantage of K-Means clustering is that it converges quite fast compared to other clustering algorithms. However, K-Means is prone to getting stuck in local minima. A Gaussian Mixture Model is a generalization of K-Means clustering. It does not need to assign hard clusters to the data points and hence allows a probabilistic association of a data point with each cluster. Since we need to find an appropriate number of clusters here, and therefore need to experiment multiple times with different numbers of clusters, I will use the faster and simpler K-Means clustering algorithm. Implementation: Creating ClustersDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.In the code block below, you will need to implement the following: - Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`. - Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`. - Find the cluster centers using the algorithm's respective attribute and assign them to `centers`. - Predict the cluster for each sample data point in `pca_samples` and assign them `sample_preds`. - Import sklearn.metrics.silhouette_score and calculate the silhouette score of `reduced_data` against `preds`. - Assign the silhouette score to `score` and print the result.
scores = [] # TODO: Apply your clustering algorithm of choice to the reduced data for num_clusters in range(2,15): from sklearn.cluster import KMeans clusterer = KMeans(n_clusters=num_clusters).fit(reduced_data) # TODO: Predict the cluster for each data point preds = clusterer.predict(reduced_data) # TODO: Find the cluster centers centers = clusterer.cluster_centers_ # TODO: Predict the cluster for each transformed sample data point sample_preds = clusterer.predict(pca_samples) # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen from sklearn.metrics import silhouette_score score = silhouette_score(reduced_data, preds) scores.append(score) scores_series = pd.Series(scores, index = range(2,15)) print scores_series.argmax() %matplotlib inline #import matplotlib.pyplot as plt #plt.bar(range(len(scores)),scores) score_df = pd.DataFrame(np.array(scores), index=range(2,15), columns= {"Scores"}) score_df.plot(kind='bar') print score_df # Use K=2 from sklearn.cluster import KMeans clusterer = KMeans(n_clusters=2).fit(reduced_data) preds = clusterer.predict(reduced_data) centers = clusterer.cluster_centers_ sample_preds = clusterer.predict(pca_samples)
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Question 7*Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?* **Answer:** I tried cluster numbers from 2 to 14, as shown above. The best silhouette score is obtained for K=2. Cluster VisualizationOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
# Display the results of the clustering from implementation rs.cluster_results(reduced_data, preds, centers, pca_samples)
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Implementation: Data RecoveryEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.In the code block below, you will need to implement the following: - Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`. - Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
# TODO: Inverse transform the centers log_centers = pca.inverse_transform(centers) # TODO: Exponentiate the centers true_centers = np.exp(log_centers) # Display the true centers segments = ['Segment {}'.format(i) for i in range(0,len(centers))] true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys()) true_centers.index = segments display(true_centers)
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
Question 8Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?* **Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`. **Answer:** The Segment 0 customer shows a high projection value on the second principal component of the PCA analysis. It has relatively high values for the Fresh and Frozen categories; however, the overall values are smaller than the mean for each category. This data point would correspond to a small ice-cream parlor. The Segment 1 customer has high values on the dominant categories of the first principal component: high spending on Milk, Grocery and Detergents_Paper. This customer is probably a restaurant. Question 9*For each sample point, which customer segment from* ***Question 8*** *best represents it? Are the predictions for each sample point consistent with this?*Run the code block below to find which cluster each sample point is predicted to be.
# Display the predictions
for i, pred in enumerate(sample_preds):
    print("Sample point {} predicted to be in Cluster {}".format(i, pred))
Sample point 0 predicted to be in Cluster 1
Sample point 1 predicted to be in Cluster 0
Sample point 2 predicted to be in Cluster 1
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
**Answer:** Sample points 0 and 2 are assigned to cluster 1. This indeed matches the prediction at the start of this assignment, where these customers were predicted to be a restaurant or cafe. The prediction for sample point 1 is cluster 0, which I interpreted above as a small ice-cream parlor. At the start, however, I predicted it to be a small grocery shop given its high consumption of Fresh. Still, the profiles do match: cluster 0 and this sample point both have high Fresh and Frozen components. Conclusion In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships. Question 10Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. *How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?* **Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most? **Answer:** Based on the profile of each segment, the change in delivery service should affect the different groups differently, so we can run a separate A/B test per cluster to measure the effect of the new schedule. First we choose a random sample from each cluster (the treatment group) and use the remaining points in the cluster as the control group. Then we apply the change in delivery service to the treatment group only and compare the two groups to determine whether that customer segment reacts positively or negatively. Question 11Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), we can consider *'customer segment'* as an **engineered feature** for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service. *How can the wholesale distributor label the new customers using only their estimated product spending and the* ***customer segment*** *data?* **Hint:** A supervised learner could be used to train on the original customers. What would be the target variable? **Answer:** To assign a customer segment to a new customer, any supervised learning algorithm can be used (preferably one that can capture non-linear structure), with the cluster id as the target variable. Alternatively, the model learned during cluster creation can itself be used to predict the customer segment. E.g.
if K-means was used, the new customer can be assigned a cluster id based on the distance of the customer's attributes to the centers of the K clusters (a minimal sketch of both procedures appears after the visualization cell below). Visualizing Underlying Distributions At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset. Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
# Display the clustering results based on 'Channel' data
rs.channel_results(reduced_data, outliers, pca_samples)
_____no_output_____
MIT
projects/creating_customer_segments/customer_segments.ipynb
ankdesh/Udacity-MachineLearning-Nanodegree
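The answers to Questions 10 and 11 above each describe a procedure in prose; the following sketch illustrates both. It is hypothetical code, not part of the original notebook: it assumes the `preds`, `reduced_data`, and `pca` objects from the earlier cells, and `new_spending` stands in for a DataFrame holding the ten new customers' estimated annual spending in the same six product categories.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Question 10 sketch: split one segment (cluster 0 here) into treatment and control
# groups for an A/B test of the 3-days-a-week delivery schedule.
cluster0_idx = np.where(np.asarray(preds) == 0)[0]
treatment = np.random.choice(cluster0_idx, size=len(cluster0_idx) // 2, replace=False)
control = np.setdiff1d(cluster0_idx, treatment)

# Question 11 sketch: use the cluster id as the target variable, train a supervised
# learner on the existing customers, then label the new ones.
segment_clf = KNeighborsClassifier(n_neighbors=5)
segment_clf.fit(reduced_data, preds)
# `new_spending` is a hypothetical DataFrame of the ten new customers' estimates
new_reduced = pca.transform(np.log(new_spending))   # same log + PCA pipeline as before
new_segments = segment_clf.predict(new_reduced)
K-nearest neighbours is used only to keep the sketch short; any classifier able to capture non-linear boundaries would do, and a fitted K-means model's own `predict` would likewise assign each new customer to the nearest cluster center, as described in the answer.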
Simple Oscillator Example This example shows the simplest way of using a solver. We solve the free vibration of a simple oscillator:$$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$using the CVODE solver. An analytical solution exists, given by$$u(t) = u_0 \cos\left(\sqrt{\frac{k}{m}} t\right)+\frac{\dot{u}_0}{\sqrt{\frac{k}{m}}} \sin\left(\sqrt{\frac{k}{m}} t\right)$$
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
from scikits.odes import ode

# data of the oscillator
k = 4.0
m = 1.0

# initial position and speed data on t = 0, x[0] = u, x[1] = \dot{u}, xp = \dot{x}
initx = [1, 0.1]
_____no_output_____
BSD-3-Clause
ipython_examples/Simple Oscillator.ipynb
tinosulzer/odes
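To connect the setup above with the analytical solution quoted in the introduction, here is a hedged sketch of how the problem is typically completed with scikits.odes: rewrite the second-order ODE as a first-order system, hand it to the CVODE solver, and compare against the closed-form solution. The right-hand-side signature, the `old_api=False` flag, and the `solution.values` container follow the scikits.odes documentation; treat the snippet as an illustration rather than the notebook's remaining cells.
# Hedged sketch: first-order form of m*u'' + k*u = 0 and a CVODE solve.
# The rhs callback fills xdot in place, with x[0] = u and x[1] = u'.
def rhseqn(t, x, xdot):
    xdot[0] = x[1]
    xdot[1] = -k/m * x[0]

solver = ode('cvode', rhseqn, old_api=False)
tspan = np.linspace(0., 10., 101)
solution = solver.solve(tspan, initx)

# Closed-form solution from the introduction, for comparison
omega = np.sqrt(k/m)
u_exact = initx[0]*np.cos(omega*tspan) + (initx[1]/omega)*np.sin(omega*tspan)

plt.plot(solution.values.t, solution.values.y[:, 0], label='CVODE')
plt.plot(tspan, u_exact, '--', label='analytical')
plt.xlabel('t')
plt.ylabel('u(t)')
plt.legend()
plt.show()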