*Source: `assignment2/PyTorch.ipynb`, from the `guoanjie/CS231n` repository (MIT license).*
[ [ [ "# What's this PyTorch business?\n\nYou've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.\n\nFor the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you choose to use that notebook).", "_____no_output_____" ], [ "### What is PyTorch?\n\nPyTorch is a system for executing dynamic computational graphs over Tensor objects that behave similarly as numpy ndarray. It comes with a powerful automatic differentiation engine that removes the need for manual back-propagation. \n\n### Why?\n\n* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).\n* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. \n* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) \n* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.\n\n### PyTorch versions\nThis notebook assumes that you are using **PyTorch version 1.4**. In some of the previous versions (e.g. before 0.4), Tensors had to be wrapped in Variable objects to be used in autograd; however Variables have now been deprecated. In addition 1.0+ versions separate a Tensor's datatype from its device, and use numpy-style factories for constructing Tensors rather than directly invoking Tensor constructors.", "_____no_output_____" ], [ "## How will I learn PyTorch?\n\nJustin Johnson has made an excellent [tutorial](https://github.com/jcjohnson/pytorch-examples) for PyTorch. \n\nYou can also find the detailed [API doc](http://pytorch.org/docs/stable/index.html) here. If you have other questions that are not addressed by the API docs, the [PyTorch forum](https://discuss.pytorch.org/) is a much better place to ask than StackOverflow.\n\n## Install PyTorch 1.4 (ONLY IF YOU ARE WORKING LOCALLY)\n\n1. Have the latest version of Anaconda installed on your machine.\n2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `torch_env`.\n3. Run the command: `conda activate torch_env`\n4. Run the command: `pip install torch==1.4 torchvision==0.5.0`", "_____no_output_____" ], [ "# Table of Contents\n\nThis assignment has 5 parts. You will learn PyTorch on **three different levels of abstraction**, which will help you understand it better and prepare you for the final project. \n\n1. Part I, Preparation: we will use CIFAR-10 dataset.\n2. Part II, Barebones PyTorch: **Abstraction level 1**, we will work directly with the lowest-level PyTorch Tensors. \n3. Part III, PyTorch Module API: **Abstraction level 2**, we will use `nn.Module` to define arbitrary neural network architecture. \n4. 
## How will I learn PyTorch?

Justin Johnson has made an excellent [tutorial](https://github.com/jcjohnson/pytorch-examples) for PyTorch.

You can also find the detailed [API doc](http://pytorch.org/docs/stable/index.html) here. If you have other questions that are not addressed by the API docs, the [PyTorch forum](https://discuss.pytorch.org/) is a much better place to ask than StackOverflow.

## Install PyTorch 1.4 (ONLY IF YOU ARE WORKING LOCALLY)

1. Have the latest version of Anaconda installed on your machine.
2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `torch_env`.
3. Run the command: `conda activate torch_env`
4. Run the command: `pip install torch==1.4 torchvision==0.5.0`

# Table of Contents

This assignment has 5 parts. You will learn PyTorch on **three different levels of abstraction**, which will help you understand it better and prepare you for the final project.

1. Part I, Preparation: we will use the CIFAR-10 dataset.
2. Part II, Barebones PyTorch: **Abstraction level 1**, we will work directly with the lowest-level PyTorch Tensors.
3. Part III, PyTorch Module API: **Abstraction level 2**, we will use `nn.Module` to define arbitrary neural network architectures.
4. Part IV, PyTorch Sequential API: **Abstraction level 3**, we will use `nn.Sequential` to define a linear feed-forward network very conveniently.
5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high an accuracy as possible on CIFAR-10. You can experiment with any layers, optimizers, hyperparameters or other advanced features.

Here is a comparison table:

| API             | Flexibility | Convenience |
|-----------------|-------------|-------------|
| Barebone        | High        | Low         |
| `nn.Module`     | High        | Medium      |
| `nn.Sequential` | Low         | High        |

# Part I. Preparation

First, we load the CIFAR-10 dataset. This might take a couple of minutes the first time you do it, but the files should stay cached after that.

In previous parts of the assignment we had to write our own code to download the CIFAR-10 dataset, preprocess it, and iterate through it in minibatches; PyTorch provides convenient tools to automate this process for us.

```python
import torch
assert '.'.join(torch.__version__.split('.')[:2]) == '1.4'
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import sampler

import torchvision.datasets as dset
import torchvision.transforms as T

import numpy as np
```

```python
NUM_TRAIN = 49000

# The torchvision.transforms package provides tools for preprocessing data
# and for performing data augmentation; here we set up a transform to
# preprocess the data by subtracting the mean RGB value and dividing by the
# standard deviation of each RGB value; we've hardcoded the mean and std.
transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])

# We set up a Dataset object for each split (train / val / test); Datasets load
# training examples one at a time, so we wrap each Dataset in a DataLoader which
# iterates through the Dataset and forms minibatches. We divide the CIFAR-10
# training set into train and val sets by passing a Sampler object to the
# DataLoader telling it how to sample from the underlying Dataset.
cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
                             transform=transform)
loader_train = DataLoader(cifar10_train, batch_size=64,
                          sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN)))

cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
                           transform=transform)
loader_val = DataLoader(cifar10_val, batch_size=64,
                        sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN, 50000)))

cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,
                            transform=transform)
loader_test = DataLoader(cifar10_test, batch_size=64)
```

```
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
```
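As a quick sanity check (a sketch we add here, not part of the original assignment), you can pull a single minibatch out of a loader and inspect its shape; CIFAR-10 images are 3x32x32 and the batch size above is 64:

```python
# Grab one minibatch from the training loader and inspect it.
x_batch, y_batch = next(iter(loader_train))
print(x_batch.shape)  # torch.Size([64, 3, 32, 32])
print(y_batch.shape)  # torch.Size([64])
```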
You have the option to **use GPU by setting the flag to True below**. It is not necessary to use a GPU for this assignment. Note that if your computer does not have CUDA enabled, `torch.cuda.is_available()` will return False and this notebook will fall back to CPU mode.

The global variables `dtype` and `device` will control the data types throughout this assignment.

## Colab Users

If you are using Colab, you need to manually switch to a GPU device. You can do this by clicking `Runtime -> Change runtime type` and selecting `GPU` under `Hardware Accelerator`. Note that you have to rerun the cells from the top since the kernel gets restarted upon switching runtimes.

```python
USE_GPU = True

dtype = torch.float32  # we will be using float throughout this tutorial

if USE_GPU and torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')

# Constant to control how frequently we print train loss
print_every = 100

print('using device:', device)
```

```
using device: cuda
```

# Part II. Barebones PyTorch

PyTorch ships with high-level APIs to help us define model architectures conveniently, which we will cover in Part III of this tutorial. In this section, we will start with the barebones PyTorch elements to understand the autograd engine better. After this exercise, you will come to appreciate the high-level model API more.

We will start with a simple fully-connected ReLU network with two hidden layers and no biases for CIFAR classification.
This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients. It is important that you understand every line, because you will write a harder version after the example.

When we create a PyTorch Tensor with `requires_grad=True`, then operations involving that Tensor will not just compute values; they will also build up a computational graph in the background, allowing us to easily backpropagate through the graph to compute gradients of some Tensors with respect to a downstream loss. Concretely, if x is a Tensor with `x.requires_grad == True` then after backpropagation `x.grad` will be another Tensor holding the gradient of x with respect to the scalar loss at the end.
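Here is a minimal illustration of that machinery (our addition; the values are arbitrary):

```python
# Any tensor created with requires_grad=True participates in the graph.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# loss = sum(x^2) is a scalar, so we can backpropagate from it.
loss = (x ** 2).sum()
loss.backward()

# d(loss)/dx = 2x is accumulated into x.grad by backward().
print(x.grad)  # tensor([2., 4., 6.])
```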
", "_____no_output_____" ] ], [ [ "def flatten(x):\n N = x.shape[0] # read in N, C, H, W\n return x.view(N, -1) # \"flatten\" the C * H * W values into a single vector per image\n\ndef test_flatten():\n x = torch.arange(12).view(2, 1, 3, 2)\n print('Before flattening: ', x)\n print('After flattening: ', flatten(x))\n\ntest_flatten()", "Before flattening: tensor([[[[ 0, 1],\n [ 2, 3],\n [ 4, 5]]],\n\n\n [[[ 6, 7],\n [ 8, 9],\n [10, 11]]]])\nAfter flattening: tensor([[ 0, 1, 2, 3, 4, 5],\n [ 6, 7, 8, 9, 10, 11]])\n" ] ], [ [ "### Barebones PyTorch: Two-Layer Network\n\nHere we define a function `two_layer_fc` which performs the forward pass of a two-layer fully-connected ReLU network on a batch of image data. After defining the forward pass we check that it doesn't crash and that it produces outputs of the right shape by running zeros through the network.\n\nYou don't have to write any code here, but it's important that you read and understand the implementation.", "_____no_output_____" ] ], [ [ "import torch.nn.functional as F # useful stateless functions\n\ndef two_layer_fc(x, params):\n \"\"\"\n A fully-connected neural networks; the architecture is:\n NN is fully connected -> ReLU -> fully connected layer.\n Note that this function only defines the forward pass; \n PyTorch will take care of the backward pass for us.\n \n The input to the network will be a minibatch of data, of shape\n (N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,\n and the output layer will produce scores for C classes.\n \n Inputs:\n - x: A PyTorch Tensor of shape (N, d1, ..., dM) giving a minibatch of\n input data.\n - params: A list [w1, w2] of PyTorch Tensors giving weights for the network;\n w1 has shape (D, H) and w2 has shape (H, C).\n \n Returns:\n - scores: A PyTorch Tensor of shape (N, C) giving classification scores for\n the input data x.\n \"\"\"\n # first we flatten the image\n x = flatten(x) # shape: [batch_size, C x H x W]\n \n w1, w2 = params\n \n # Forward pass: compute predicted y using operations on Tensors. Since w1 and\n # w2 have requires_grad=True, operations involving these Tensors will cause\n # PyTorch to build a computational graph, allowing automatic computation of\n # gradients. Since we are no longer implementing the backward pass by hand we\n # don't need to keep references to intermediate values.\n # you can also use `.clamp(min=0)`, equivalent to F.relu()\n x = F.relu(x.mm(w1))\n x = x.mm(w2)\n return x\n \n\ndef two_layer_fc_test():\n hidden_layer_size = 42\n x = torch.zeros((64, 50), dtype=dtype) # minibatch size 64, feature dimension 50\n w1 = torch.zeros((50, hidden_layer_size), dtype=dtype)\n w2 = torch.zeros((hidden_layer_size, 10), dtype=dtype)\n scores = two_layer_fc(x, [w1, w2])\n print(scores.size()) # you should see [64, 10]\n\ntwo_layer_fc_test()", "torch.Size([64, 10])\n" ] ], [ [ "### Barebones PyTorch: Three-Layer ConvNet\n\nHere you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. Like above, we can immediately test our implementation by passing zeros through the network. The network should have the following architecture:\n\n1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two\n2. ReLU nonlinearity\n3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one\n4. ReLU nonlinearity\n5. 
### Barebones PyTorch: Two-Layer Network

Here we define a function `two_layer_fc` which performs the forward pass of a two-layer fully-connected ReLU network on a batch of image data. After defining the forward pass we check that it doesn't crash and that it produces outputs of the right shape by running zeros through the network.

You don't have to write any code here, but it's important that you read and understand the implementation.

```python
import torch.nn.functional as F  # useful stateless functions

def two_layer_fc(x, params):
    """
    A fully-connected neural network; the architecture is:
    fully-connected layer -> ReLU -> fully-connected layer.
    Note that this function only defines the forward pass;
    PyTorch will take care of the backward pass for us.

    The input to the network will be a minibatch of data, of shape
    (N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
    and the output layer will produce scores for C classes.

    Inputs:
    - x: A PyTorch Tensor of shape (N, d1, ..., dM) giving a minibatch of
      input data.
    - params: A list [w1, w2] of PyTorch Tensors giving weights for the network;
      w1 has shape (D, H) and w2 has shape (H, C).

    Returns:
    - scores: A PyTorch Tensor of shape (N, C) giving classification scores for
      the input data x.
    """
    # first we flatten the image
    x = flatten(x)  # shape: [batch_size, C x H x W]

    w1, w2 = params

    # Forward pass: compute predicted y using operations on Tensors. Since w1 and
    # w2 have requires_grad=True, operations involving these Tensors will cause
    # PyTorch to build a computational graph, allowing automatic computation of
    # gradients. Since we are no longer implementing the backward pass by hand we
    # don't need to keep references to intermediate values.
    # you can also use `.clamp(min=0)`, equivalent to F.relu()
    x = F.relu(x.mm(w1))
    x = x.mm(w2)
    return x


def two_layer_fc_test():
    hidden_layer_size = 42
    x = torch.zeros((64, 50), dtype=dtype)  # minibatch size 64, feature dimension 50
    w1 = torch.zeros((50, hidden_layer_size), dtype=dtype)
    w2 = torch.zeros((hidden_layer_size, 10), dtype=dtype)
    scores = two_layer_fc(x, [w1, w2])
    print(scores.size())  # you should see [64, 10]

two_layer_fc_test()
```

```
torch.Size([64, 10])
```

### Barebones PyTorch: Three-Layer ConvNet

Here you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. Like above, we can immediately test our implementation by passing zeros through the network. The network should have the following architecture:

1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for C classes.

Note that we have **no softmax activation** here after our fully-connected layer: this is because PyTorch's cross entropy loss performs a softmax activation for you, and bundling that step in makes the computation more efficient.

**HINT**: For convolutions: http://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv2d; pay attention to the shapes of convolutional filters!
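Why padding of two and one? With stride 1, a convolution with kernel size K and padding P maps an H x W input to an output of spatial size H + 2P - K + 1, so P = (K - 1) / 2 preserves the spatial size for odd K. A quick shape check (our addition; the filter counts are arbitrary):

```python
x = torch.zeros(1, 3, 32, 32)
w5 = torch.zeros(8, 3, 5, 5)  # 5x5 filters need padding 2
w3 = torch.zeros(8, 3, 3, 3)  # 3x3 filters need padding 1

print(F.conv2d(x, w5, padding=2).shape)  # torch.Size([1, 8, 32, 32])
print(F.conv2d(x, w3, padding=1).shape)  # torch.Size([1, 8, 32, 32])
```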
```python
def three_layer_convnet(x, params):
    """
    Performs the forward pass of a three-layer convolutional network with the
    architecture defined above.

    Inputs:
    - x: A PyTorch Tensor of shape (N, 3, H, W) giving a minibatch of images
    - params: A list of PyTorch Tensors giving the weights and biases for the
      network; should contain the following:
      - conv_w1: PyTorch Tensor of shape (channel_1, 3, KH1, KW1) giving weights
        for the first convolutional layer
      - conv_b1: PyTorch Tensor of shape (channel_1,) giving biases for the first
        convolutional layer
      - conv_w2: PyTorch Tensor of shape (channel_2, channel_1, KH2, KW2) giving
        weights for the second convolutional layer
      - conv_b2: PyTorch Tensor of shape (channel_2,) giving biases for the second
        convolutional layer
      - fc_w: PyTorch Tensor giving weights for the fully-connected layer. Can you
        figure out what the shape should be?
      - fc_b: PyTorch Tensor giving biases for the fully-connected layer. Can you
        figure out what the shape should be?

    Returns:
    - scores: PyTorch Tensor of shape (N, C) giving classification scores for x
    """
    conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
    scores = None
    ############################################################################
    # TODO: Implement the forward pass for the three-layer ConvNet.            #
    ############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    # For an odd kernel size K, padding K // 2 preserves the spatial size.
    x = F.conv2d(x, conv_w1, bias=conv_b1, padding=conv_w1.size()[-1] // 2)
    x = F.relu(x)
    x = F.conv2d(x, conv_w2, bias=conv_b2, padding=conv_w2.size()[-1] // 2)
    x = F.relu(x)
    x = x.view(x.size()[0], -1)
    # F.linear expects a weight of shape (out_features, in_features), so we
    # transpose fc_w, which is stored as (in_features, out_features).
    scores = F.linear(x, fc_w.transpose(0, 1), bias=fc_b)

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################
    return scores
```

After defining the forward pass of the ConvNet above, run the following cell to test your implementation.

When you run this function, scores should have shape (64, 10).

```python
def three_layer_convnet_test():
    x = torch.zeros((64, 3, 32, 32), dtype=dtype)  # minibatch size 64, image size [3, 32, 32]

    conv_w1 = torch.zeros((6, 3, 5, 5), dtype=dtype)  # [out_channel, in_channel, kernel_H, kernel_W]
    conv_b1 = torch.zeros((6,))  # out_channel
    conv_w2 = torch.zeros((9, 6, 3, 3), dtype=dtype)  # [out_channel, in_channel, kernel_H, kernel_W]
    conv_b2 = torch.zeros((9,))  # out_channel

    # you must calculate the shape of the tensor after two conv layers, before the fully-connected layer
    fc_w = torch.zeros((9 * 32 * 32, 10))
    fc_b = torch.zeros(10)

    scores = three_layer_convnet(x, [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b])
    print(scores.size())  # you should see [64, 10]
three_layer_convnet_test()
```

```
torch.Size([64, 10])
```

### Barebones PyTorch: Initialization
Let's write a couple of utility methods to initialize the weight matrices for our models.

- `random_weight(shape)` initializes a weight tensor with the Kaiming normalization method.
- `zero_weight(shape)` initializes a weight tensor with all zeros. Useful for instantiating bias parameters.

The `random_weight` function uses the Kaiming normal initialization method, described in:

He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852

```python
def random_weight(shape):
    """
    Create random Tensors for weights; setting requires_grad=True means that we
    want to compute gradients for these Tensors during the backward pass.
    We use Kaiming normalization: sqrt(2 / fan_in)
    """
    if len(shape) == 2:  # FC weight
        fan_in = shape[0]
    else:
        fan_in = np.prod(shape[1:])  # conv weight [out_channel, in_channel, kH, kW]
    # randn is a standard normal distribution generator.
    w = torch.randn(shape, device=device, dtype=dtype) * np.sqrt(2. / fan_in)
    w.requires_grad = True
    return w

def zero_weight(shape):
    return torch.zeros(shape, device=device, dtype=dtype, requires_grad=True)

# create a weight of shape [3 x 5]
# you should see the type `torch.cuda.FloatTensor` if you use GPU.
# Otherwise it should be `torch.FloatTensor`
random_weight((3, 5))
```
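For the `nn.Module` models in Part III, the same scheme is available as a built-in. A small sketch of the correspondence (our addition): with its defaults (fan-in mode, ReLU gain), `nn.init.kaiming_normal_` draws from N(0, 2 / fan_in), matching `random_weight` up to the weight layout:

```python
fc = nn.Linear(50, 42)
nn.init.kaiming_normal_(fc.weight)  # std = sqrt(2 / 50) ≈ 0.2
nn.init.zeros_(fc.bias)
print(fc.weight.std().item())       # should be roughly 0.2
```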
### Barebones PyTorch: Check Accuracy
When training the model we will use the following function to check the accuracy of our model on the training or validation sets.

When checking accuracy we don't need to compute any gradients; as a result we don't need PyTorch to build a computational graph for us when we compute scores. To prevent a graph from being built we scope our computation under a `torch.no_grad()` context manager.

```python
def check_accuracy_part2(loader, model_fn, params):
    """
    Check the accuracy of a classification model.

    Inputs:
    - loader: A DataLoader for the data split we want to check
    - model_fn: A function that performs the forward pass of the model,
      with the signature scores = model_fn(x, params)
    - params: List of PyTorch Tensors giving parameters of the model

    Returns: Nothing, but prints the accuracy of the model
    """
    split = 'val' if loader.dataset.train else 'test'
    print('Checking accuracy on the %s set' % split)
    num_correct, num_samples = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=device, dtype=dtype)  # move to device, e.g. GPU
            y = y.to(device=device, dtype=torch.int64)
            scores = model_fn(x, params)
            _, preds = scores.max(1)
            num_correct += (preds == y).sum()
            num_samples += preds.size(0)
        acc = float(num_correct) / num_samples
        print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
```

### Barebones PyTorch: Training Loop
We can now set up a basic training loop to train our network. We will train the model using stochastic gradient descent without momentum. We will use `torch.nn.functional.cross_entropy` to compute the loss; you can [read about it here](http://pytorch.org/docs/stable/nn.html#cross-entropy).

The training loop takes as input the neural network function, a list of initialized parameters (`[w1, w2]` in our example), and a learning rate.

```python
def train_part2(model_fn, params, learning_rate):
    """
    Train a model on CIFAR-10.

    Inputs:
    - model_fn: A Python function that performs the forward pass of the model.
      It should have the signature scores = model_fn(x, params) where x is a
      PyTorch Tensor of image data, params is a list of PyTorch Tensors giving
      model weights, and scores is a PyTorch Tensor of shape (N, C) giving
      scores for the elements in x.
    - params: List of PyTorch Tensors giving weights for the model
    - learning_rate: Python scalar giving the learning rate to use for SGD

    Returns: Nothing
    """
    for t, (x, y) in enumerate(loader_train):
        # Move the data to the proper device (GPU or CPU)
        x = x.to(device=device, dtype=dtype)
        y = y.to(device=device, dtype=torch.long)

        # Forward pass: compute scores and loss
        scores = model_fn(x, params)
        loss = F.cross_entropy(scores, y)

        # Backward pass: PyTorch figures out which Tensors in the computational
        # graph have requires_grad=True and uses backpropagation to compute the
        # gradient of the loss with respect to these Tensors, and stores the
        # gradients in the .grad attribute of each Tensor.
        loss.backward()

        # Update parameters. We don't want to backpropagate through the
        # parameter updates, so we scope the updates under a torch.no_grad()
        # context manager to prevent a computational graph from being built.
        with torch.no_grad():
            for w in params:
                w -= learning_rate * w.grad

                # Manually zero the gradients after running the backward pass
                w.grad.zero_()

        if t % print_every == 0:
            print('Iteration %d, loss = %.4f' % (t, loss.item()))
            check_accuracy_part2(loader_val, model_fn, params)
            print()
```
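The explicit `w.grad.zero_()` matters because `backward()` accumulates gradients into `.grad` instead of overwriting it. A minimal sketch of the pitfall (our addition):

```python
w = torch.tensor([1.0], requires_grad=True)

(w * 2).sum().backward()
print(w.grad)  # tensor([2.])

# Without zeroing, a second backward pass adds to the stored gradient:
(w * 2).sum().backward()
print(w.grad)  # tensor([4.]) -- not 2!

w.grad.zero_()  # reset before the next iteration
```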
### Barebones PyTorch: Train a Two-Layer Network
Now we are ready to run the training loop. We need to explicitly allocate tensors for the fully-connected weights, `w1` and `w2`.

Each minibatch of CIFAR has 64 examples, so the tensor shape is `[64, 3, 32, 32]`.

After flattening, `x` shape should be `[64, 3 * 32 * 32]`. This will be the size of the first dimension of `w1`.
The second dimension of `w1` is the hidden layer size, which will also be the first dimension of `w2`.

Finally, the output of the network is a 10-dimensional vector of class scores (not probabilities -- the softmax is folded into the cross-entropy loss).

You don't need to tune any hyperparameters but you should see accuracies above 40% after training for one epoch.

```python
hidden_layer_size = 4000
learning_rate = 1e-2

w1 = random_weight((3 * 32 * 32, hidden_layer_size))
w2 = random_weight((hidden_layer_size, 10))

train_part2(two_layer_fc, [w1, w2], learning_rate)
```

```
Iteration 0, loss = 3.2468
Checking accuracy on the val set
Got 153 / 1000 correct (15.30%)

Iteration 100, loss = 1.9199
Checking accuracy on the val set
Got 342 / 1000 correct (34.20%)

Iteration 200, loss = 2.2632
Checking accuracy on the val set
Got 384 / 1000 correct (38.40%)

Iteration 300, loss = 1.7609
Checking accuracy on the val set
Got 339 / 1000 correct (33.90%)

Iteration 400, loss = 1.6727
Checking accuracy on the val set
Got 444 / 1000 correct (44.40%)

Iteration 500, loss = 1.8160
Checking accuracy on the val set
Got 408 / 1000 correct (40.80%)

Iteration 600, loss = 1.4776
Checking accuracy on the val set
Got 464 / 1000 correct (46.40%)

Iteration 700, loss = 1.5610
Checking accuracy on the val set
Got 445 / 1000 correct (44.50%)
```

### Barebones PyTorch: Training a ConvNet

Below you should use the functions defined above to train a three-layer convolutional network on CIFAR. The network should have the following architecture:

1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes

You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.

You don't need to tune any hyperparameters, but if everything works correctly you should achieve an accuracy above 42% after one epoch.
```python
learning_rate = 3e-3

channel_1 = 32
channel_2 = 16

conv_w1 = None
conv_b1 = None
conv_w2 = None
conv_b2 = None
fc_w = None
fc_b = None

############################################################################
# TODO: Initialize the parameters of a three-layer ConvNet.                #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

conv_w1 = random_weight((channel_1, 3, 5, 5))
conv_b1 = zero_weight(channel_1)
conv_w2 = random_weight((channel_2, channel_1, 3, 3))
conv_b2 = zero_weight(channel_2)
fc_w = random_weight((channel_2 * 32 * 32, 10))
fc_b = zero_weight(10)

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
#                             END OF YOUR CODE                             #
############################################################################

params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
train_part2(three_layer_convnet, params, learning_rate)
```

```
Iteration 0, loss = 3.7764
Checking accuracy on the val set
Got 107 / 1000 correct (10.70%)

Iteration 100, loss = 1.7490
Checking accuracy on the val set
Got 323 / 1000 correct (32.30%)

Iteration 200, loss = 1.7332
Checking accuracy on the val set
Got 383 / 1000 correct (38.30%)

Iteration 300, loss = 1.6232
Checking accuracy on the val set
Got 433 / 1000 correct (43.30%)

Iteration 400, loss = 1.9123
Checking accuracy on the val set
Got 434 / 1000 correct (43.40%)

Iteration 500, loss = 1.5398
Checking accuracy on the val set
Got 432 / 1000 correct (43.20%)

Iteration 600, loss = 1.5763
Checking accuracy on the val set
Got 446 / 1000 correct (44.60%)

Iteration 700, loss = 1.4069
Checking accuracy on the val set
Got 468 / 1000 correct (46.80%)
```

# Part III. PyTorch Module API

Barebones PyTorch requires that we track all the parameter tensors by hand. This is fine for small networks with a few tensors, but it would be extremely inconvenient and error-prone to track tens or hundreds of tensors in larger networks.

PyTorch provides the `nn.Module` API for you to define arbitrary network architectures, while tracking every learnable parameter for you. In Part II, we implemented SGD ourselves. PyTorch also provides the `torch.optim` package that implements all the common optimizers, such as RMSProp, Adagrad, and Adam. It even supports approximate second-order methods like L-BFGS! You can refer to the [doc](http://pytorch.org/docs/master/optim.html) for the exact specifications of each optimizer.

To use the Module API, follow the steps below:

1. Subclass `nn.Module`. Give your network class an intuitive name like `TwoLayerFC`.

2. In the constructor `__init__()`, define all the layers you need as class attributes. Layer objects like `nn.Linear` and `nn.Conv2d` are themselves `nn.Module` subclasses and contain learnable parameters, so you don't have to instantiate the raw tensors yourself. `nn.Module` will track these internal parameters for you. Refer to the [doc](http://pytorch.org/docs/master/nn.html) to learn more about the dozens of built-in layers. **Warning**: don't forget to call `super().__init__()` first!

3. In the `forward()` method, define the *connectivity* of your network. You should use the attributes defined in `__init__` as function calls that take a tensor as input and output the "transformed" tensor. Do *not* create any new layers with learnable parameters in `forward()`! All of them must be declared upfront in `__init__`.

After you define your Module subclass, you can instantiate it as an object and call it just like the NN forward function in part II.

### Module API: Two-Layer Network
Here is a concrete example of a 2-layer fully connected network:
```python
class TwoLayerFC(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        # assign layer objects to class attributes
        self.fc1 = nn.Linear(input_size, hidden_size)
        # nn.init package contains convenient initialization methods
        # http://pytorch.org/docs/master/nn.html#torch-nn-init
        nn.init.kaiming_normal_(self.fc1.weight)
        self.fc2 = nn.Linear(hidden_size, num_classes)
        nn.init.kaiming_normal_(self.fc2.weight)

    def forward(self, x):
        # forward always defines connectivity
        x = flatten(x)
        scores = self.fc2(F.relu(self.fc1(x)))
        return scores

def test_TwoLayerFC():
    input_size = 50
    x = torch.zeros((64, input_size), dtype=dtype)  # minibatch size 64, feature dimension 50
    model = TwoLayerFC(input_size, 42, 10)
    scores = model(x)
    print(scores.size())  # you should see [64, 10]
test_TwoLayerFC()
```

```
torch.Size([64, 10])
```
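To see what `nn.Module` is tracking for you, you can list the registered parameters. A small sketch (our addition):

```python
model = TwoLayerFC(50, 42, 10)
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
# fc1.weight (42, 50)
# fc1.bias   (42,)
# fc2.weight (10, 42)
# fc2.bias   (10,)
```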
### Module API: Three-Layer ConvNet
It's your turn to implement a three-layer ConvNet: two convolutional layers followed by a fully-connected layer. The network architecture should be the same as in Part II:

1. Convolutional layer with `channel_1` 5x5 filters with zero-padding of 2
2. ReLU
3. Convolutional layer with `channel_2` 3x3 filters with zero-padding of 1
4. ReLU
5. Fully-connected layer to `num_classes` classes

You should initialize the weight matrices of the model using the Kaiming normal initialization method.

**HINT**: http://pytorch.org/docs/stable/nn.html#conv2d

After you implement the three-layer ConvNet, the `test_ThreeLayerConvNet` function will run your implementation; it should print `(64, 10)` for the shape of the output scores.

```python
class ThreeLayerConvNet(nn.Module):
    def __init__(self, in_channel, channel_1, channel_2, num_classes):
        super().__init__()
        ########################################################################
        # TODO: Set up the layers you need for a three-layer ConvNet with the  #
        # architecture defined above.                                          #
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        self.conv1 = nn.Conv2d(in_channel, channel_1, 5, padding=2)
        nn.init.kaiming_normal_(self.conv1.weight)
        self.conv2 = nn.Conv2d(channel_1, channel_2, 3, padding=1)
        nn.init.kaiming_normal_(self.conv2.weight)
        self.fc = nn.Linear(channel_2 * 32 * 32, num_classes)
        nn.init.kaiming_normal_(self.fc.weight)

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        ########################################################################
        #                          END OF YOUR CODE                            #
        ########################################################################

    def forward(self, x):
        scores = None
        ########################################################################
        # TODO: Implement the forward function for a 3-layer ConvNet. You      #
        # should use the layers you defined in __init__ and specify the        #
        # connectivity of those layers in forward()                            #
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        scores = self.fc(x.reshape(-1, self.fc.in_features))

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        ########################################################################
        #                          END OF YOUR CODE                            #
        ########################################################################
        return scores


def test_ThreeLayerConvNet():
    x = torch.zeros((64, 3, 32, 32), dtype=dtype)  # minibatch size 64, image size [3, 32, 32]
    model = ThreeLayerConvNet(in_channel=3, channel_1=12, channel_2=8, num_classes=10)
    scores = model(x)
    print(scores.size())  # you should see [64, 10]
test_ThreeLayerConvNet()
```

```
torch.Size([64, 10])
```

### Module API: Check Accuracy
Given the validation or test set, we can check the classification accuracy of a neural network.

This version is slightly different from the one in part II. You don't manually pass in the parameters anymore.

```python
def check_accuracy_part34(loader, model):
    if loader.dataset.train:
        print('Checking accuracy on validation set')
    else:
        print('Checking accuracy on test set')
    num_correct = 0
    num_samples = 0
    model.eval()  # set model to evaluation mode
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=device, dtype=dtype)  # move to device, e.g. GPU
            y = y.to(device=device, dtype=torch.long)
            scores = model(x)
            _, preds = scores.max(1)
            num_correct += (preds == y).sum()
            num_samples += preds.size(0)
        acc = float(num_correct) / num_samples
        print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
```

### Module API: Training Loop
We also use a slightly different training loop. Rather than updating the values of the weights ourselves, we use an Optimizer object from the `torch.optim` package, which abstracts the notion of an optimization algorithm and provides implementations of most of the algorithms commonly used to optimize neural networks.

```python
def train_part34(model, optimizer, epochs=1):
    """
    Train a model on CIFAR-10 using the PyTorch Module API.

    Inputs:
    - model: A PyTorch Module giving the model to train.
    - optimizer: An Optimizer object we will use to train the model
    - epochs: (Optional) A Python integer giving the number of epochs to train for

    Returns: Nothing, but prints model accuracies during training.
    """
    model = model.to(device=device)  # move the model parameters to CPU/GPU
    for e in range(epochs):
        for t, (x, y) in enumerate(loader_train):
            model.train()  # put model to training mode
            x = x.to(device=device, dtype=dtype)  # move to device, e.g. GPU
            y = y.to(device=device, dtype=torch.long)

            scores = model(x)
            loss = F.cross_entropy(scores, y)

            # Zero out all of the gradients for the variables which the optimizer
            # will update.
            optimizer.zero_grad()

            # This is the backwards pass: compute the gradient of the loss with
            # respect to each parameter of the model.
            loss.backward()

            # Actually update the parameters of the model using the gradients
            # computed by the backwards pass.
            optimizer.step()

            if t % print_every == 0:
                print('Iteration %d, loss = %.4f' % (t, loss.item()))
                check_accuracy_part34(loader_val, model)
                print()
```
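Because every optimizer shares this interface, swapping algorithms is a one-line change. A sketch (our addition; the hyperparameters are arbitrary):

```python
model = TwoLayerFC(3 * 32 * 32, 4000, 10)

# Plain SGD, SGD with Nesterov momentum, and Adam are all drop-in replacements:
sgd      = optim.SGD(model.parameters(), lr=1e-2)
nesterov = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, nesterov=True)
adam     = optim.Adam(model.parameters(), lr=1e-3)
```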
### Module API: Train a Two-Layer Network
Now we are ready to run the training loop. In contrast to part II, we don't explicitly allocate parameter tensors anymore.

Simply pass the input size, hidden layer size, and number of classes (i.e. output size) to the constructor of `TwoLayerFC`.

You also need to define an optimizer that tracks all the learnable parameters inside `TwoLayerFC`.

You don't need to tune any hyperparameters, but you should see model accuracies above 40% after training for one epoch.

```python
hidden_layer_size = 4000
learning_rate = 1e-2
model = TwoLayerFC(3 * 32 * 32, hidden_layer_size, 10)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

train_part34(model, optimizer)
```

```
Iteration 0, loss = 3.5132
Checking accuracy on validation set
Got 109 / 1000 correct (10.90)

Iteration 100, loss = 2.1515
Checking accuracy on validation set
Got 347 / 1000 correct (34.70)

Iteration 200, loss = 2.1026
Checking accuracy on validation set
Got 394 / 1000 correct (39.40)

Iteration 300, loss = 1.7910
Checking accuracy on validation set
Got 371 / 1000 correct (37.10)

Iteration 400, loss = 1.6017
Checking accuracy on validation set
Got 428 / 1000 correct (42.80)

Iteration 500, loss = 1.4998
Checking accuracy on validation set
Got 428 / 1000 correct (42.80)

Iteration 600, loss = 2.0602
Checking accuracy on validation set
Got 443 / 1000 correct (44.30)

Iteration 700, loss = 1.8337
Checking accuracy on validation set
Got 449 / 1000 correct (44.90)
```
### Module API: Train a Three-Layer ConvNet
You should now use the Module API to train a three-layer ConvNet on CIFAR. This should look very similar to training the two-layer network! You don't need to tune any hyperparameters, but you should achieve above 45% accuracy after training for one epoch.

You should train the model using stochastic gradient descent without momentum.

```python
learning_rate = 3e-3
channel_1 = 32
channel_2 = 16

model = None
optimizer = None
################################################################################
# TODO: Instantiate your ThreeLayerConvNet model and a corresponding optimizer #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

model = ThreeLayerConvNet(in_channel=3, channel_1=channel_1, channel_2=channel_2, num_classes=10)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

train_part34(model, optimizer)
```

```
Iteration 0, loss = 2.3233
Checking accuracy on validation set
Got 99 / 1000 correct (9.90)

Iteration 100, loss = 1.9277
Checking accuracy on validation set
Got 319 / 1000 correct (31.90)

Iteration 200, loss = 1.9072
Checking accuracy on validation set
Got 347 / 1000 correct (34.70)

Iteration 300, loss = 1.7271
Checking accuracy on validation set
Got 404 / 1000 correct (40.40)

Iteration 400, loss = 1.7997
Checking accuracy on validation set
Got 408 / 1000 correct (40.80)

Iteration 500, loss = 1.8151
Checking accuracy on validation set
Got 444 / 1000 correct (44.40)

Iteration 600, loss = 1.6813
Checking accuracy on validation set
Got 442 / 1000 correct (44.20)

Iteration 700, loss = 1.6079
Checking accuracy on validation set
Got 444 / 1000 correct (44.40)
```
# Part IV. PyTorch Sequential API

Part III introduced the PyTorch Module API, which allows you to define arbitrary learnable layers and their connectivity.

For simple models like a stack of feed-forward layers, you still need to go through 3 steps: subclass `nn.Module`, assign layers to class attributes in `__init__`, and call each layer one by one in `forward()`. Is there a more convenient way?

Fortunately, PyTorch provides a container Module called `nn.Sequential`, which merges the above steps into one. It is not as flexible as `nn.Module`, because you cannot specify a more complex topology than a feed-forward stack, but it's good enough for many use cases.

### Sequential API: Two-Layer Network
Let's see how to rewrite our two-layer fully connected network example with `nn.Sequential`, and train it using the training loop defined above.

Again, you don't need to tune any hyperparameters here, but you should achieve above 40% accuracy after one epoch of training.

```python
# We need to wrap the `flatten` function in a module in order to stack it
# in nn.Sequential
class Flatten(nn.Module):
    def forward(self, x):
        return flatten(x)

hidden_layer_size = 4000
learning_rate = 1e-2

model = nn.Sequential(
    Flatten(),
    nn.Linear(3 * 32 * 32, hidden_layer_size),
    nn.ReLU(),
    nn.Linear(hidden_layer_size, 10),
)

# you can use Nesterov momentum in optim.SGD
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
                      momentum=0.9, nesterov=True)

train_part34(model, optimizer)
```

```
Iteration 0, loss = 2.3557
Checking accuracy on validation set
Got 179 / 1000 correct (17.90)

Iteration 100, loss = 1.6765
Checking accuracy on validation set
Got 407 / 1000 correct (40.70)

Iteration 200, loss = 1.7576
Checking accuracy on validation set
Got 410 / 1000 correct (41.00)

Iteration 300, loss = 1.9059
Checking accuracy on validation set
Got 382 / 1000 correct (38.20)

Iteration 400, loss = 2.0573
Checking accuracy on validation set
Got 446 / 1000 correct (44.60)

Iteration 500, loss = 1.7264
Checking accuracy on validation set
Got 433 / 1000 correct (43.30)

Iteration 600, loss = 1.8122
Checking accuracy on validation set
Got 440 / 1000 correct (44.00)

Iteration 700, loss = 1.5659
Checking accuracy on validation set
Got 420 / 1000 correct (42.00)
```
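A small aside (our addition): `nn.Sequential` also accepts an `OrderedDict`, which gives each layer a readable name instead of a numeric index:

```python
from collections import OrderedDict

named_model = nn.Sequential(OrderedDict([
    ('flatten', Flatten()),
    ('fc1', nn.Linear(3 * 32 * 32, hidden_layer_size)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(hidden_layer_size, 10)),
]))
print(named_model.fc1)  # Linear(in_features=3072, out_features=4000, bias=True)
```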
### Sequential API: Three-Layer ConvNet
Here you should use `nn.Sequential` to define and train a three-layer ConvNet with the same architecture we used in Part III:

1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes

You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.

You should optimize your model using stochastic gradient descent with Nesterov momentum 0.9.

Again, you don't need to tune any hyperparameters but you should see accuracy above 55% after one epoch of training.

```python
channel_1 = 32
channel_2 = 16
learning_rate = 1e-2

model = None
optimizer = None

################################################################################
# TODO: Rewrite the three-layer ConvNet with bias from Part III with the       #
# Sequential API.                                                              #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

model = nn.Sequential(
    nn.Conv2d(3, channel_1, 5, padding=2),
    nn.ReLU(),
    nn.Conv2d(channel_1, channel_2, 3, padding=1),
    nn.ReLU(),
    Flatten(),
    nn.Linear(channel_2 * 32 * 32, 10),
)

optimizer = optim.SGD(model.parameters(), lr=learning_rate,
                      momentum=0.9, nesterov=True)

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

train_part34(model, optimizer)
```

```
Iteration 0, loss = 2.3191
Checking accuracy on validation set
Got 127 / 1000 correct (12.70)

Iteration 100, loss = 1.7792
Checking accuracy on validation set
Got 443 / 1000 correct (44.30)

Iteration 200, loss = 1.4817
Checking accuracy on validation set
Got 495 / 1000 correct (49.50)

Iteration 300, loss = 1.4233
Checking accuracy on validation set
Got 505 / 1000 correct (50.50)

Iteration 400, loss = 1.0110
Checking accuracy on validation set
Got 559 / 1000 correct (55.90)

Iteration 500, loss = 1.1580
Checking accuracy on validation set
Got 581 / 1000 correct (58.10)

Iteration 600, loss = 1.2063
Checking accuracy on validation set
Got 573 / 1000 correct (57.30)

Iteration 700, loss = 1.3064
Checking accuracy on validation set
Got 573 / 1000 correct (57.30)
```
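Note that the sketch above relies on PyTorch's default initialization rather than `random_weight`/`zero_weight`. If you want a custom scheme with `nn.Sequential`, one idiom (our addition, using the built-in Kaiming initializer as a stand-in) is to walk the container with `Module.apply`:

```python
def init_weights(m):
    # Re-initialize every conv / linear layer in the container.
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)

model.apply(init_weights)  # .apply() recurses over all submodules
```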
# Part V. CIFAR-10 open-ended challenge

In this section, you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.

Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves **at least 70%** accuracy on the CIFAR-10 **validation** set within 10 epochs. You can use the check_accuracy and train functions from above. You can use either the `nn.Module` or `nn.Sequential` API.

Describe what you did at the end of this notebook.

Here is the official API documentation for each component. One note: what we call in class "spatial batch norm" is called "BatchNorm2D" in PyTorch.

* Layers in torch.nn package: http://pytorch.org/docs/stable/nn.html
* Activations: http://pytorch.org/docs/stable/nn.html#non-linear-activations
* Loss functions: http://pytorch.org/docs/stable/nn.html#loss-functions
* Optimizers: http://pytorch.org/docs/stable/optim.html

### Things you might try:
- **Filter size**: Above we used 5x5; would smaller filters be more efficient?
- **Number of filters**: Above we used 32 filters. Do more or fewer do better?
- **Pooling vs Strided Convolution**: Do you use max pooling or just strided convolutions?
- **Batch normalization**: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
- **Network architecture**: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
    - [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
    - [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
    - [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
- **Global Average Pooling**: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get a 1x1 feature map (shape (1, 1, #filters)), which is then reshaped into a vector of length #filters. This is used in [Google's Inception Network](https://arxiv.org/abs/1512.00567) (see Table 1 for their architecture).
- **Regularization**: Add L2 weight regularization, or perhaps use Dropout.

### Tips for training
For each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple of important things to keep in mind:

- If the parameters are working well, you should see improvement within a few hundred iterations
- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.

### Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!

- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.
- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
- Model ensembles
- Data augmentation
- New Architectures
  - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
  - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
  - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)

### Have fun and happy training!
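Two of the suggestions above are nearly one-liners in PyTorch; here is a sketch (our addition; the specific values are arbitrary) of data augmentation via `torchvision.transforms`, and of L2 regularization via the optimizer's `weight_decay` argument:

```python
# Random crops and horizontal flips, applied on the fly to training images.
transform_aug = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])

# weight_decay adds an L2 penalty on every parameter given to the optimizer:
# optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```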
#\n################################################################################\nmodel = None\noptimizer = None\n\n# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n\nmodel = nn.Sequential(\n nn.Conv2d(3, 6, 3, padding=1),\n nn.BatchNorm2d(6),\n nn.ReLU(),\n nn.MaxPool2d(2),\n nn.Conv2d(6, 16, 3, padding=1),\n nn.BatchNorm2d(16),\n nn.ReLU(),\n nn.MaxPool2d(2),\n nn.Conv2d(16, 120, 3, padding=1),\n nn.BatchNorm2d(120),\n nn.ReLU(),\n nn.MaxPool2d(2),\n Flatten(),\n nn.Linear(120 * 4 * 4, 84 * 4),\n nn.Linear(84 * 4, 10)\n)\n\noptimizer = optim.Adam(model.parameters(), lr=1e-3)\n\n# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n################################################################################\n# END OF YOUR CODE \n################################################################################\n\n# You should get at least 70% accuracy\ntrain_part34(model, optimizer, epochs=10)", "Iteration 0, loss = 2.3753\nChecking accuracy on validation set\nGot 137 / 1000 correct (13.70)\n\nIteration 100, loss = 1.5885\nChecking accuracy on validation set\nGot 441 / 1000 correct (44.10)\n\nIteration 200, loss = 1.5793\nChecking accuracy on validation set\nGot 478 / 1000 correct (47.80)\n\nIteration 300, loss = 1.4790\nChecking accuracy on validation set\nGot 500 / 1000 correct (50.00)\n\nIteration 400, loss = 1.5002\nChecking accuracy on validation set\nGot 450 / 1000 correct (45.00)\n\nIteration 500, loss = 1.1218\nChecking accuracy on validation set\nGot 517 / 1000 correct (51.70)\n\nIteration 600, loss = 1.4941\nChecking accuracy on validation set\nGot 569 / 1000 correct (56.90)\n\nIteration 700, loss = 1.1259\nChecking accuracy on validation set\nGot 575 / 1000 correct (57.50)\n\nIteration 0, loss = 1.2054\nChecking accuracy on validation set\nGot 586 / 1000 correct (58.60)\n\nIteration 100, loss = 1.0427\nChecking accuracy on validation set\nGot 575 / 1000 correct (57.50)\n\nIteration 200, loss = 1.2106\nChecking accuracy on validation set\nGot 601 / 1000 correct (60.10)\n\nIteration 300, loss = 1.2216\nChecking accuracy on validation set\nGot 612 / 1000 correct (61.20)\n\nIteration 400, loss = 1.0355\nChecking accuracy on validation set\nGot 594 / 1000 correct (59.40)\n\nIteration 500, loss = 1.2592\nChecking accuracy on validation set\nGot 617 / 1000 correct (61.70)\n\nIteration 600, loss = 1.2038\nChecking accuracy on validation set\nGot 655 / 1000 correct (65.50)\n\nIteration 700, loss = 0.9989\nChecking accuracy on validation set\nGot 646 / 1000 correct (64.60)\n\nIteration 0, loss = 0.8951\nChecking accuracy on validation set\nGot 617 / 1000 correct (61.70)\n\nIteration 100, loss = 0.9050\nChecking accuracy on validation set\nGot 643 / 1000 correct (64.30)\n\nIteration 200, loss = 1.1215\nChecking accuracy on validation set\nGot 636 / 1000 correct (63.60)\n\nIteration 300, loss = 0.9664\nChecking accuracy on validation set\nGot 587 / 1000 correct (58.70)\n\nIteration 400, loss = 1.0399\nChecking accuracy on validation set\nGot 640 / 1000 correct (64.00)\n\nIteration 500, loss = 0.9132\nChecking accuracy on validation set\nGot 621 / 1000 correct (62.10)\n\nIteration 600, loss = 0.9753\nChecking accuracy on validation set\nGot 643 / 1000 correct (64.30)\n\nIteration 700, loss = 1.2486\nChecking accuracy on validation set\nGot 653 / 1000 correct (65.30)\n\nIteration 0, loss = 0.9558\nChecking accuracy on validation set\nGot 634 / 1000 correct (63.40)\n\nIteration 100, loss = 0.8438\nChecking accuracy on validation set\nGot 668 / 1000 correct 
(66.80)\n\nIteration 200, loss = 0.8531\nChecking accuracy on validation set\nGot 671 / 1000 correct (67.10)\n\nIteration 300, loss = 0.9668\nChecking accuracy on validation set\nGot 668 / 1000 correct (66.80)\n\nIteration 400, loss = 0.8754\nChecking accuracy on validation set\nGot 671 / 1000 correct (67.10)\n\nIteration 500, loss = 1.1350\nChecking accuracy on validation set\nGot 660 / 1000 correct (66.00)\n\nIteration 600, loss = 0.8302\nChecking accuracy on validation set\nGot 654 / 1000 correct (65.40)\n\nIteration 700, loss = 0.7673\nChecking accuracy on validation set\nGot 666 / 1000 correct (66.60)\n\nIteration 0, loss = 0.8661\nChecking accuracy on validation set\nGot 694 / 1000 correct (69.40)\n\nIteration 100, loss = 0.9832\nChecking accuracy on validation set\nGot 671 / 1000 correct (67.10)\n\nIteration 200, loss = 0.7182\nChecking accuracy on validation set\nGot 620 / 1000 correct (62.00)\n\nIteration 300, loss = 0.5029\nChecking accuracy on validation set\nGot 677 / 1000 correct (67.70)\n\nIteration 400, loss = 0.5441\nChecking accuracy on validation set\nGot 685 / 1000 correct (68.50)\n\nIteration 500, loss = 0.6566\nChecking accuracy on validation set\nGot 673 / 1000 correct (67.30)\n\nIteration 600, loss = 0.9985\nChecking accuracy on validation set\nGot 692 / 1000 correct (69.20)\n\nIteration 700, loss = 0.5003\nChecking accuracy on validation set\nGot 684 / 1000 correct (68.40)\n\nIteration 0, loss = 0.8380\nChecking accuracy on validation set\nGot 693 / 1000 correct (69.30)\n\nIteration 100, loss = 0.9265\nChecking accuracy on validation set\nGot 711 / 1000 correct (71.10)\n\nIteration 200, loss = 0.5421\nChecking accuracy on validation set\nGot 685 / 1000 correct (68.50)\n\nIteration 300, loss = 0.8714\nChecking accuracy on validation set\nGot 702 / 1000 correct (70.20)\n\nIteration 400, loss = 0.8943\nChecking accuracy on validation set\nGot 700 / 1000 correct (70.00)\n\nIteration 500, loss = 0.7346\nChecking accuracy on validation set\nGot 700 / 1000 correct (70.00)\n\nIteration 600, loss = 0.9355\nChecking accuracy on validation set\nGot 675 / 1000 correct (67.50)\n\nIteration 700, loss = 0.6933\nChecking accuracy on validation set\nGot 673 / 1000 correct (67.30)\n\nIteration 0, loss = 0.6369\nChecking accuracy on validation set\nGot 689 / 1000 correct (68.90)\n\nIteration 100, loss = 0.6611\nChecking accuracy on validation set\nGot 698 / 1000 correct (69.80)\n\nIteration 200, loss = 0.8875\nChecking accuracy on validation set\nGot 685 / 1000 correct (68.50)\n\nIteration 300, loss = 0.6727\nChecking accuracy on validation set\nGot 689 / 1000 correct (68.90)\n\nIteration 400, loss = 0.7209\nChecking accuracy on validation set\nGot 692 / 1000 correct (69.20)\n\nIteration 500, loss = 0.6811\nChecking accuracy on validation set\nGot 714 / 1000 correct (71.40)\n\nIteration 600, loss = 0.6099\nChecking accuracy on validation set\nGot 705 / 1000 correct (70.50)\n\nIteration 700, loss = 0.8403\nChecking accuracy on validation set\nGot 709 / 1000 correct (70.90)\n\nIteration 0, loss = 0.4305\nChecking accuracy on validation set\nGot 697 / 1000 correct (69.70)\n\nIteration 100, loss = 0.8669\nChecking accuracy on validation set\nGot 700 / 1000 correct (70.00)\n\nIteration 200, loss = 0.6460\nChecking accuracy on validation set\nGot 699 / 1000 correct (69.90)\n\nIteration 300, loss = 0.7090\nChecking accuracy on validation set\nGot 712 / 1000 correct (71.20)\n\nIteration 400, loss = 0.7257\nChecking accuracy on validation set\nGot 686 / 1000 correct (68.60)\n\nIteration 500, 
loss = 0.6296\nChecking accuracy on validation set\nGot 701 / 1000 correct (70.10)\n\nIteration 600, loss = 0.8724\nChecking accuracy on validation set\nGot 689 / 1000 correct (68.90)\n\nIteration 700, loss = 0.4758\nChecking accuracy on validation set\nGot 699 / 1000 correct (69.90)\n\nIteration 0, loss = 0.5267\nChecking accuracy on validation set\nGot 716 / 1000 correct (71.60)\n\nIteration 100, loss = 0.6599\nChecking accuracy on validation set\nGot 705 / 1000 correct (70.50)\n\nIteration 200, loss = 0.4413\nChecking accuracy on validation set\nGot 712 / 1000 correct (71.20)\n\nIteration 300, loss = 0.5768\nChecking accuracy on validation set\nGot 723 / 1000 correct (72.30)\n\nIteration 400, loss = 0.4732\nChecking accuracy on validation set\nGot 702 / 1000 correct (70.20)\n\nIteration 500, loss = 0.5340\nChecking accuracy on validation set\nGot 698 / 1000 correct (69.80)\n\nIteration 600, loss = 0.6988\nChecking accuracy on validation set\nGot 708 / 1000 correct (70.80)\n\nIteration 700, loss = 0.6722\nChecking accuracy on validation set\nGot 712 / 1000 correct (71.20)\n\nIteration 0, loss = 0.7462\nChecking accuracy on validation set\nGot 721 / 1000 correct (72.10)\n\nIteration 100, loss = 0.5147\nChecking accuracy on validation set\nGot 720 / 1000 correct (72.00)\n\nIteration 200, loss = 0.6425\nChecking accuracy on validation set\nGot 718 / 1000 correct (71.80)\n\nIteration 300, loss = 0.4361\nChecking accuracy on validation set\nGot 717 / 1000 correct (71.70)\n\nIteration 400, loss = 0.5102\nChecking accuracy on validation set\nGot 734 / 1000 correct (73.40)\n\nIteration 500, loss = 0.5063\nChecking accuracy on validation set\nGot 714 / 1000 correct (71.40)\n\nIteration 600, loss = 0.5547\nChecking accuracy on validation set\nGot 723 / 1000 correct (72.30)\n\nIteration 700, loss = 0.7104\nChecking accuracy on validation set\nGot 701 / 1000 correct (70.10)\n\n" ] ], [ [ "## Describe what you did \n\nIn the cell below you should write an explanation of what you did, any additional features that you implemented, and/or any graphs that you made in the process of training and evaluating your network.", "_____no_output_____" ], [ "TODO: Describe what you did", "_____no_output_____" ], [ "## Test set -- run this only once\n\nNow that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). Think about how this compares to your validation set accuracy.", "_____no_output_____" ] ], [ [ "best_model = model\ncheck_accuracy_part34(loader_test, best_model)", "Checking accuracy on test set\nGot 7010 / 10000 correct (70.10)\n" ] ] ]
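A quick sanity check on the `Flatten` dimension used in the model above (this sketch is an editorial addition, not part of the assignment code): every `Conv2d` uses a 3x3 kernel with `padding=1`, which preserves spatial size, and each `MaxPool2d(2)` halves it, so a 32x32 CIFAR-10 image ends up as 120 channels of 4x4, giving the `120 * 4 * 4` inputs to the first `Linear` layer.

```python
# Editorial sketch: verify the flatten size 120 * 4 * 4 = 1920 with a dummy input.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 6, 3, padding=1), nn.BatchNorm2d(6), nn.ReLU(), nn.MaxPool2d(2),      # 32 -> 16
    nn.Conv2d(6, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),    # 16 -> 8
    nn.Conv2d(16, 120, 3, padding=1), nn.BatchNorm2d(120), nn.ReLU(), nn.MaxPool2d(2), # 8 -> 4
)
out = features(torch.randn(1, 3, 32, 32))
print(out.shape)             # torch.Size([1, 120, 4, 4])
print(out.flatten(1).shape)  # torch.Size([1, 1920]) = 120 * 4 * 4
```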
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
4a47414f040b8fc720c5163ce605753ff680bf39
8,533
ipynb
Jupyter Notebook
_unittests/ut_helpgen/data_gallery/notebooks/td1a/td1a_correction_session6.ipynb
janjagusch/pyquickhelper
d42e1579ea20f5add9a9cd2b6d2d0a3533aee40b
[ "MIT" ]
18
2015-11-10T08:09:23.000Z
2022-02-16T11:46:45.000Z
_unittests/ut_helpgen/data_gallery/notebooks/td1a/td1a_correction_session6.ipynb
janjagusch/pyquickhelper
d42e1579ea20f5add9a9cd2b6d2d0a3533aee40b
[ "MIT" ]
321
2015-06-14T21:34:28.000Z
2021-11-28T17:10:03.000Z
_unittests/ut_helpgen/data_gallery/notebooks/td1a/td1a_correction_session6.ipynb
janjagusch/pyquickhelper
d42e1579ea20f5add9a9cd2b6d2d0a3533aee40b
[ "MIT" ]
10
2015-06-20T01:35:00.000Z
2022-01-19T15:54:32.000Z
31.371324
148
0.382632
[ [ [ "# TD 6 : Classes, héritage (correction)", "_____no_output_____" ] ], [ [ "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "_____no_output_____" ] ], [ [ "### Exercice 1 : pièce normale", "_____no_output_____" ] ], [ [ "import random\nclass Piece :\n def tirage_aleatoire(self, precedent) :\n return random.randint(0,1)\n def moyenne_tirage(self, n):\n tirage = [ ]\n for i in range (n) :\n precedent = tirage[-1] if i > 0 else None\n tirage.append( self.tirage_aleatoire (precedent) )\n s = sum(tirage)\n return s * 1.0 / len(tirage)\n \np = Piece()\nprint (p.moyenne_tirage(100))", "0.48\n" ] ], [ [ "### Exercice 2 : pièce truquée", "_____no_output_____" ] ], [ [ "class PieceTruquee (Piece) :\n def tirage_aleatoire(self, precedent) :\n if precedent == None or precedent == 1 :\n return random.randint(0,1)\n else :\n return 1 if random.randint(0,9) >= 3 else 0\n \np = PieceTruquee()\nprint (p.moyenne_tirage(100))", "0.58\n" ] ], [ [ "### Exercice 3 : Pièce mixte", "_____no_output_____" ] ], [ [ "class PieceTruqueeMix (PieceTruquee) :\n def tirage_aleatoire(self, precedent) :\n if random.randint(0,1) == 0 :\n return Piece.tirage_aleatoire(self, precedent)\n else :\n return PieceTruquee.tirage_aleatoire(self, precedent)\n \np = PieceTruqueeMix()\nprint (p.moyenne_tirage(100))", "0.67\n" ] ], [ [ "### Exercice 4 : pièce mixte avec des fonctions", "_____no_output_____" ] ], [ [ "# ce qui vient de l'énoncé\ndef moyenne_tirage(n, fonction):\n \"\"\"\n cette fonction fait la moyenne des résultats produits par la fonction passée en argument\n \"\"\"\n tirage = [ ]\n for i in range (n) :\n precedent = tirage[-1] if i > 0 else None\n tirage.append( fonction (precedent) )\n s = sum(tirage)\n return s * 1.0 / len(tirage)\n \ndef truquee (precedent) :\n if precedent == None or precedent == 1 :\n return random.randint(0,1)\n else :\n return 1 if random.randint(0,9) >= 3 else 0\n\n# la partie ajoutée pour la correction\nprint (moyenne_tirage(100, lambda v : random.randint(0,1) if random.randint(0,1) == 0 \\\n else truquee(v)))", "0.51\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a476564bd219c75b0b8dd30532689c2dd305709
176,172
ipynb
Jupyter Notebook
module4-classification-metrics/Unit_2_Sprint_2_Module_4_CLASS-lecture-notebook.ipynb
tmbern/DS-Unit-2-Kaggle-Challenge
9ae9c68ae55f1e192343044689b43b9298738d4f
[ "MIT" ]
null
null
null
module4-classification-metrics/Unit_2_Sprint_2_Module_4_CLASS-lecture-notebook.ipynb
tmbern/DS-Unit-2-Kaggle-Challenge
9ae9c68ae55f1e192343044689b43b9298738d4f
[ "MIT" ]
null
null
null
module4-classification-metrics/Unit_2_Sprint_2_Module_4_CLASS-lecture-notebook.ipynb
tmbern/DS-Unit-2-Kaggle-Challenge
9ae9c68ae55f1e192343044689b43b9298738d4f
[ "MIT" ]
null
null
null
81.410351
22,398
0.770514
[ [ [ "<a href=\"https://colab.research.google.com/github/tmbern/DS-Unit-2-Kaggle-Challenge/blob/master/module4-classification-metrics/Unit_2_Sprint_2_Module_4_CLASS-lecture-notebook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Lambda School Data Science\n\n*Unit 2, Sprint 2, Module 4*\n\n---", "_____no_output_____" ], [ "# Classification Metrics\n\n- get and interpret the **confusion matrix** for classification models\n- use classification metrics: **precision, recall**\n- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**\n- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve)", "_____no_output_____" ], [ "### Setup\n\nRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.\n\nLibraries\n\n- category_encoders\n- ipywidgets\n- matplotlib\n- numpy\n- pandas\n- scikit-learn\n- seaborn", "_____no_output_____" ] ], [ [ "%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'", "_____no_output_____" ] ], [ [ "# Get and interpret the confusion matrix for classification models", "_____no_output_____" ], [ "## Overview", "_____no_output_____" ], [ "First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport category_encoders as ce\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import RandomForestClassifier\n\ndef wrangle(X):\n \"\"\"Wrangles train, validate, and test sets in the same way\"\"\"\n X = X.copy()\n\n # Convert date_recorded to datetime\n X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)\n \n # Extract components from date_recorded, then drop the original column\n X['year_recorded'] = X['date_recorded'].dt.year\n X['month_recorded'] = X['date_recorded'].dt.month\n X['day_recorded'] = X['date_recorded'].dt.day\n X = X.drop(columns='date_recorded')\n \n # Engineer feature: how many years from construction_year to date_recorded\n X['years'] = X['year_recorded'] - X['construction_year'] \n \n # Drop recorded_by (never varies) and id (always varies, random)\n unusable_variance = ['recorded_by', 'id']\n X = X.drop(columns=unusable_variance)\n \n # Drop duplicate columns\n duplicate_columns = ['quantity_group']\n X = X.drop(columns=duplicate_columns)\n \n # About 3% of the time, latitude has small values near zero,\n # outside Tanzania, so we'll treat these like null values\n X['latitude'] = X['latitude'].replace(-2e-08, np.nan)\n \n # When columns have zeros and shouldn't, they are like null values\n cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']\n for col in cols_with_zeros:\n X[col] = X[col].replace(0, np.nan)\n \n return X\n\n\n# Merge train_features.csv & train_labels.csv\ntrain = 
pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), \n                 pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))\n\n# Read test_features.csv & sample_submission.csv\ntest = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')\nsample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')\n\n# Split train into train & val. Make val the same size as test.\ntarget = 'status_group'\ntrain, val = train_test_split(train, test_size=len(test),  \n                              stratify=train[target], random_state=42)\n\n# Wrangle train, validate, and test sets in the same way\ntrain = wrangle(train)\nval = wrangle(val)\ntest = wrangle(test)\n\n# Arrange data into X features matrix and y target vector\nX_train = train.drop(columns=target)\ny_train = train[target]\nX_val = val.drop(columns=target)\ny_val = val[target]\nX_test = test\n\n# Make pipeline!\npipeline = make_pipeline(\n    ce.OrdinalEncoder(), \n    SimpleImputer(strategy='mean'), \n    RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)\n)\n\n# Fit on train, score on val\npipeline.fit(X_train, y_train)\ny_pred = pipeline.predict(X_val)\nprint('Validation Accuracy', accuracy_score(y_val, y_pred))", "Validation Accuracy 0.8140409527789386\n" ] ], [ [ "## Follow Along\n\nScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!", "_____no_output_____" ] ], [ [ "import sklearn\nsklearn.__version__\n\nfrom sklearn.metrics import plot_confusion_matrix\n\nplot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')", "_____no_output_____" ] ], [ [ "#### How many correct predictions were made?", "_____no_output_____" ] ], [ [ "correct_predictions = 7005 + 332 + 4351\ncorrect_predictions", "_____no_output_____" ] ], [ [ "#### How many total predictions were made?", "_____no_output_____" ] ], [ [ "total_predictions = y_val.shape[0]\ntotal_predictions", "_____no_output_____" ] ], [ [ "#### What was the classification accuracy?", "_____no_output_____" ] ], [ [ "correct_predictions / total_predictions\n", "_____no_output_____" ], [ "accuracy_score(y_val, y_pred)", "_____no_output_____" ], [ "sum(y_pred == y_val) / len(y_pred)", "_____no_output_____" ] ], [ [ "# Use classification metrics: precision, recall", "_____no_output_____" ], [ "## Overview\n\n[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)", "_____no_output_____" ] ], [ [ "from sklearn.metrics import classification_report\nprint(classification_report(y_val, y_pred))", "                         precision    recall  f1-score   support\n\n             functional       0.81      0.90      0.85      7798\nfunctional needs repair       0.58      0.32      0.41      1043\n         non functional       0.85      0.79      0.82      5517\n\n               accuracy                           0.81     14358\n              macro avg       0.75      0.67      0.69     14358\n           weighted avg       0.81      0.81      0.81     14358\n\n" ] ], [ [ "#### Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)\n\n> Both precision and recall are based on an understanding and measure of relevance.\n\n> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). 
The program's precision is 5/8 while its recall is 5/12.\n\n> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results.\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Precisionrecall.svg/700px-Precisionrecall.svg.png\" width=\"400\">", "_____no_output_____" ], [ "## Follow Along", "_____no_output_____" ], [ "#### [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))", "_____no_output_____" ] ], [ [ "cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')\ncm\n\n# precision = true_positives / (true_positives + false_positives)\n# recall = true_positives / (true_positives + false_negatives)", "_____no_output_____" ] ], [ [ "#### How many correct predictions of \"non functional\"?", "_____no_output_____" ] ], [ [ "correct_predictions_nonfunctional = 4351", "_____no_output_____" ] ], [ [ "#### How many total predictions of \"non functional\"?", "_____no_output_____" ] ], [ [ "total_predictions_nonfunctional = 622 + 156 + 4351", "_____no_output_____" ] ], [ [ "#### What's the precision for \"non functional\"?", "_____no_output_____" ] ], [ [ "correct_predictions_nonfunctional / total_predictions_nonfunctional", "_____no_output_____" ], [ "print(classification_report(y_val, y_pred))", "                         precision    recall  f1-score   support\n\n             functional       0.81      0.90      0.85      7798\nfunctional needs repair       0.58      0.32      0.41      1043\n         non functional       0.85      0.79      0.82      5517\n\n               accuracy                           0.81     14358\n              macro avg       0.75      0.67      0.69     14358\n           weighted avg       0.81      0.81      0.81     14358\n\n" ] ], [ [ "#### How many actual \"non functional\" waterpumps?", "_____no_output_____" ] ], [ [ "actual_non_functional = 1098 + 68 + 4351", "_____no_output_____" ] ], [ [ "#### What's the recall for \"non functional\"?", "_____no_output_____" ] ], [ [ "correct_predictions_nonfunctional / actual_non_functional", "_____no_output_____" ] ], [ [ "# Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets", "_____no_output_____" ], [ "## Overview", "_____no_output_____" ], [ "### Imagine this scenario...\n\nSuppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.", "_____no_output_____" ] ], [ [ "len(test)", "_____no_output_____" ] ], [ [ "**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.\n\nYou have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.", "_____no_output_____" ] ], [ [ "len(train) + len(val)", "_____no_output_____" ] ], [ [ "Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.", "_____no_output_____" ] ], [ [ "y_train.value_counts(normalize=True)", "_____no_output_____" ], [ "2000 * 0.46", "_____no_output_____" ] ], [ [ "**Can you do better than random at prioritizing inspections?**", 
"_____no_output_____" ], [ "In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:", "_____no_output_____" ] ], [ [ "y_train = y_train != 'functional'\ny_val = y_val != 'functional'\ny_train.value_counts(normalize=True)", "_____no_output_____" ] ], [ [ "We already made our validation set the same size as our test set.", "_____no_output_____" ] ], [ [ "len(val) == len(test)", "_____no_output_____" ] ], [ [ "We can refit our model, using the redefined target.\n\nThen make predictions for the validation set.", "_____no_output_____" ] ], [ [ "pipeline.fit(X_train, y_train)\ny_pred = pipeline.predict(X_val)", "_____no_output_____" ] ], [ [ "## Follow Along", "_____no_output_____" ], [ "#### Look at the confusion matrix:", "_____no_output_____" ] ], [ [ "plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')", "_____no_output_____" ] ], [ [ "#### How many total predictions of \"True\" (\"non functional\" or \"functional needs repair\") ?", "_____no_output_____" ] ], [ [ "5032 + 977", "_____no_output_____" ], [ "print(classification_report(y_val, y_pred))", " precision recall f1-score support\n\n False 0.82 0.87 0.84 7798\n True 0.84 0.77 0.80 6560\n\n accuracy 0.83 14358\n macro avg 0.83 0.82 0.82 14358\nweighted avg 0.83 0.83 0.82 14358\n\n" ] ], [ [ "### We don't have \"budget\" to take action on all these predictions\n\n- But we can get predicted probabilities, to rank the predictions. \n- Then change the threshold, to change the number of positive predictions, based on our budget.", "_____no_output_____" ], [ "### Get predicted probabilities and plot the distribution", "_____no_output_____" ] ], [ [ "pipeline.predict_proba(X_val)", "_____no_output_____" ], [ "pipeline.predict(X_val)\n", "_____no_output_____" ], [ "#Predicted probabilites for the positive class\npipeline.predict_proba(X_val)[:, 1]", "_____no_output_____" ], [ "threshold = 0.92\nsum(pipeline.predict_proba(X_val)[:, 1] > threshold)", "_____no_output_____" ] ], [ [ "### Change the threshold", "_____no_output_____" ] ], [ [ "import seaborn as sns\n\ny_pred_proba = pipeline.predict_proba(X_val)[:, 1]\nax = sns.distplot(y_pred_proba)\nthreshold = 0.9\nax.axvline(threshold, color='red' )", "_____no_output_____" ] ], [ [ "### Or, get exactly 2,000 positive predictions", "_____no_output_____" ], [ "Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.", "_____no_output_____" ] ], [ [ "from ipywidgets import interact, fixed\nimport seaborn as sns\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.utils.multiclass import unique_labels\n\ndef my_confusion_matrix(y_true, y_pred):\n labels = unique_labels(y_true)\n columns = [f'Predicted {label}' for label in labels]\n index = [f'Actual {label}' for label in labels]\n table = pd.DataFrame(confusion_matrix(y_true, y_pred), \n columns=columns, index=index)\n return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')\n\ndef set_threshold(y_true, y_pred_proba, threshold=0.5):\n y_pred = y_pred_proba > threshold\n ax = sns.distplot(y_pred_proba)\n ax.axvline(threshold, color='red')\n plt.show()\n print(classification_report(y_true, y_pred))\n my_confusion_matrix(y_true, y_pred)\n\ninteract(set_threshold, \n y_true=fixed(y_val), \n y_pred_proba=fixed(y_pred_proba), \n threshold=(0, 1, 0.02));", "_____no_output_____" ] ], [ [ "Most of these top 2,000 waterpumps will be relevant recommendations, meaning 
`y_val==True`, meaning the waterpump is non-functional or needs repairs.\n\nSome of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.\n\nLet's look at a random sample of 50 out of these top 2,000:", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "So how many of our recommendations were relevant? ...", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "What's the precision for this subset of 2,000 predictions?", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ], [ [ "### In this scenario ... \n\nAccuracy _isn't_ the best metric!\n\nInstead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)\n\nThen, evaluate with the precision for \"non functional\"/\"functional needs repair\".\n\nThis is conceptually like **Precision@K**, where k=2,000.\n\nRead more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)\n\n> Precision at k is the proportion of recommended items in the top-k set that are relevant\n\n> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`\n\n> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.\n\nWe asked, can you do better than random at prioritizing inspections?\n\nIf we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)\n\nBut using our predictive model, in the validation set, we successfully identified over 1,900 waterpumps in need of repair!\n\nSo we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.\n\nWe will predict which 2,000 are most likely non-functional or in need of repair.\n\nWe estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.\n\nSo we're confident that our predictive model will help triage and prioritize waterpump inspections.", "_____no_output_____" ], [ "### But ...\n\nThis metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific to _one_ classification problem and _one_ possible trade-off.\n\nCan we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?\n\nYes — the most common such metric is **ROC AUC.**", "_____no_output_____" ], [ "## Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)\n\n[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) \"A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. 
**The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**\"\n\nROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as \"the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative.\" \n\nROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**\n\nROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** \n\n#### Scikit-Learn docs\n- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)\n- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)\n- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)\n\n#### More links\n- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)\n- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)", "_____no_output_____" ] ], [ [ "# \"The ROC curve is created by plotting the true positive rate (TPR) \n# against the false positive rate (FPR) \n# at various threshold settings.\"\n\n# Use scikit-learn to calculate TPR & FPR at various thresholds\nfrom sklearn.metrics import roc_curve\nfpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)", "_____no_output_____" ], [ "# See the results in a table\npd.DataFrame({\n 'False Positive Rate': fpr, \n 'True Positive Rate': tpr, \n 'Threshold': thresholds\n})", "_____no_output_____" ], [ "# See the results on a plot. \n# This is the \"Receiver Operating Characteristic\" curve\nplt.scatter(fpr, tpr)\nplt.title('ROC curve')\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate');", "_____no_output_____" ], [ "# Use scikit-learn to calculate the area under the curve.\nfrom sklearn.metrics import roc_auc_score\nroc_auc_score(y_val, y_pred_proba)", "_____no_output_____" ] ], [ [ "**Recap:** ROC AUC measures how well a classifier ranks predicted probabilities. So, when you get your classifier’s ROC AUC score, you need to use predicted probabilities, not discrete predictions. \n\nYour code may look something like this:\n\n```python\nfrom sklearn.metrics import roc_auc_score\ny_pred_proba = model.predict_proba(X_test_transformed)[:, -1] # Probability for last class\nprint('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))\n```\n\nROC AUC ranges from 0 to 1. Higher is better. A naive majority class baseline will have an ROC AUC score of 0.5.", "_____no_output_____" ] ] ]
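The per-class precision and recall computed by hand in the notebook above can also be read directly off the confusion matrix. A hedged sketch (an editorial addition; it assumes `y_val` and `y_pred` are the validation labels and predictions in scope at that point):

```python
# Editorial sketch: precision = diagonal / column sums, recall = diagonal / row sums.
import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_val, y_pred)      # rows = actual classes, columns = predicted
precision = np.diag(cm) / cm.sum(axis=0)  # true positives / all predicted positives
recall = np.diag(cm) / cm.sum(axis=1)     # true positives / all actual positives
print(precision, recall)
```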
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
4a476e3ea643ad6f5d8e0bb0795dd8e5661bd1de
49,984
ipynb
Jupyter Notebook
ait_repository/test/tests/eval_mnist_data_coverage_0.1.ipynb
ads-ad-itcenter/qunomon.forked
48d532692d353fe2d3946f62b227f834f9349034
[ "Apache-2.0" ]
16
2020-11-18T05:43:55.000Z
2021-11-27T14:43:26.000Z
ait_repository/test/tests/eval_mnist_data_coverage_0.1.ipynb
aistairc/qunomon
d4e9c5cb569b16addfbe6c33c73812065065a1df
[ "Apache-2.0" ]
1
2022-03-23T07:55:54.000Z
2022-03-23T13:24:11.000Z
ait_repository/test/tests/eval_mnist_data_coverage_0.1.ipynb
ads-ad-itcenter/qunomon.forked
48d532692d353fe2d3946f62b227f834f9349034
[ "Apache-2.0" ]
3
2021-02-12T01:56:31.000Z
2022-03-23T02:45:02.000Z
55.109151
746
0.400128
[ [ [ "# test note\n\n\n* jupyterはコンテナ起動すること\n* テストベッド一式起動済みであること\n", "_____no_output_____" ] ], [ [ "!pip install --upgrade pip\n!pip install --force-reinstall ../lib/ait_sdk-0.1.7-py3-none-any.whl", "Requirement already satisfied: pip in /usr/local/lib/python3.6/dist-packages (21.1.1)\n\u001b[33mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\u001b[0m\nProcessing /workdir/root/lib/ait_sdk-0.1.7-py3-none-any.whl\nCollecting keras<=2.4.3\n Using cached Keras-2.4.3-py2.py3-none-any.whl (36 kB)\nCollecting nbformat<=5.0.8\n Using cached nbformat-5.0.8-py3-none-any.whl (172 kB)\nCollecting py-cpuinfo<=7.0.0\n Using cached py_cpuinfo-7.0.0-py3-none-any.whl\nCollecting numpy<=1.19.3\n Using cached numpy-1.19.3-cp36-cp36m-manylinux2010_x86_64.whl (14.9 MB)\nCollecting psutil<=5.7.3\n Using cached psutil-5.7.3-cp36-cp36m-linux_x86_64.whl\nCollecting nbconvert<=6.0.7\n Using cached nbconvert-6.0.7-py3-none-any.whl (552 kB)\nCollecting scipy>=0.14\n Using cached scipy-1.5.4-cp36-cp36m-manylinux1_x86_64.whl (25.9 MB)\nCollecting pyyaml\n Using cached PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl (640 kB)\nCollecting h5py\n Using cached h5py-3.1.0-cp36-cp36m-manylinux1_x86_64.whl (4.0 MB)\nCollecting bleach\n Using cached bleach-3.3.0-py2.py3-none-any.whl (283 kB)\nCollecting defusedxml\n Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)\nCollecting nbclient<0.6.0,>=0.5.0\n Using cached nbclient-0.5.3-py3-none-any.whl (82 kB)\nCollecting traitlets>=4.2\n Using cached traitlets-4.3.3-py2.py3-none-any.whl (75 kB)\nCollecting jinja2>=2.4\n Using cached Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)\nCollecting jupyterlab-pygments\n Using cached jupyterlab_pygments-0.1.2-py2.py3-none-any.whl (4.6 kB)\nCollecting entrypoints>=0.2.2\n Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)\nCollecting jupyter-core\n Using cached jupyter_core-4.7.1-py3-none-any.whl (82 kB)\nCollecting pandocfilters>=1.4.1\n Using cached pandocfilters-1.4.3-py3-none-any.whl\nCollecting testpath\n Using cached testpath-0.4.4-py2.py3-none-any.whl (163 kB)\nCollecting pygments>=2.4.1\n Using cached Pygments-2.9.0-py3-none-any.whl (1.0 MB)\nCollecting mistune<2,>=0.8.1\n Using cached mistune-0.8.4-py2.py3-none-any.whl (16 kB)\nCollecting MarkupSafe>=0.23\n Using cached MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_x86_64.whl (32 kB)\nCollecting nest-asyncio\n Using cached nest_asyncio-1.5.1-py3-none-any.whl (5.0 kB)\nCollecting async-generator\n Using cached async_generator-1.10-py3-none-any.whl (18 kB)\nCollecting jupyter-client>=6.1.5\n Using cached jupyter_client-6.1.12-py3-none-any.whl (112 kB)\nCollecting python-dateutil>=2.1\n Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)\nCollecting tornado>=4.1\n Using cached tornado-6.1-cp36-cp36m-manylinux2010_x86_64.whl (427 kB)\nCollecting pyzmq>=13\n Using cached pyzmq-22.0.3-cp36-cp36m-manylinux1_x86_64.whl (1.1 MB)\nCollecting jsonschema!=2.5.0,>=2.4\n Using cached jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)\nCollecting ipython-genutils\n Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)\nCollecting importlib-metadata\n Using cached importlib_metadata-4.0.1-py3-none-any.whl (16 kB)\nCollecting pyrsistent>=0.14.0\n Using cached pyrsistent-0.17.3-cp36-cp36m-linux_x86_64.whl\nCollecting six>=1.11.0\n Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)\nCollecting setuptools\n Using cached setuptools-56.2.0-py3-none-any.whl (785 
kB)\nCollecting attrs>=17.4.0\n Using cached attrs-21.2.0-py2.py3-none-any.whl (53 kB)\nCollecting decorator\n Using cached decorator-5.0.7-py3-none-any.whl (8.8 kB)\nCollecting webencodings\n Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)\nCollecting packaging\n Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)\nCollecting cached-property\n Using cached cached_property-1.5.2-py2.py3-none-any.whl (7.6 kB)\nCollecting zipp>=0.5\n Using cached zipp-3.4.1-py3-none-any.whl (5.2 kB)\nCollecting typing-extensions>=3.6.4\n Using cached typing_extensions-3.10.0.0-py3-none-any.whl (26 kB)\nCollecting pyparsing>=2.0.2\n Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)\nInstalling collected packages: zipp, typing-extensions, six, ipython-genutils, decorator, traitlets, setuptools, pyrsistent, importlib-metadata, attrs, tornado, pyzmq, python-dateutil, pyparsing, jupyter-core, jsonschema, webencodings, pygments, packaging, numpy, nest-asyncio, nbformat, MarkupSafe, jupyter-client, cached-property, async-generator, testpath, scipy, pyyaml, pandocfilters, nbclient, mistune, jupyterlab-pygments, jinja2, h5py, entrypoints, defusedxml, bleach, py-cpuinfo, psutil, nbconvert, keras, ait-sdk\n Attempting uninstall: zipp\n Found existing installation: zipp 3.4.1\n Uninstalling zipp-3.4.1:\n Successfully uninstalled zipp-3.4.1\n Attempting uninstall: typing-extensions\n Found existing installation: typing-extensions 3.10.0.0\n Uninstalling typing-extensions-3.10.0.0:\n Successfully uninstalled typing-extensions-3.10.0.0\n Attempting uninstall: six\n Found existing installation: six 1.16.0\n Uninstalling six-1.16.0:\n Successfully uninstalled six-1.16.0\n Attempting uninstall: ipython-genutils\n Found existing installation: ipython-genutils 0.2.0\n Uninstalling ipython-genutils-0.2.0:\n Successfully uninstalled ipython-genutils-0.2.0\n Attempting uninstall: decorator\n Found existing installation: decorator 5.0.7\n Uninstalling decorator-5.0.7:\n Successfully uninstalled decorator-5.0.7\n Attempting uninstall: traitlets\n Found existing installation: traitlets 4.3.3\n Uninstalling traitlets-4.3.3:\n Successfully uninstalled traitlets-4.3.3\n Attempting uninstall: setuptools\n Found existing installation: setuptools 56.2.0\n Uninstalling setuptools-56.2.0:\n Successfully uninstalled setuptools-56.2.0\n Attempting uninstall: pyrsistent\n Found existing installation: pyrsistent 0.17.3\n Uninstalling pyrsistent-0.17.3:\n Successfully uninstalled pyrsistent-0.17.3\n Attempting uninstall: importlib-metadata\n Found existing installation: importlib-metadata 4.0.1\n Uninstalling importlib-metadata-4.0.1:\n Successfully uninstalled importlib-metadata-4.0.1\n Attempting uninstall: attrs\n Found existing installation: attrs 21.2.0\n Uninstalling attrs-21.2.0:\n Successfully uninstalled attrs-21.2.0\n Attempting uninstall: tornado\n Found existing installation: tornado 6.1\n Uninstalling tornado-6.1:\n Successfully uninstalled tornado-6.1\n Attempting uninstall: pyzmq\n Found existing installation: pyzmq 22.0.3\n Uninstalling pyzmq-22.0.3:\n Successfully uninstalled pyzmq-22.0.3\n Attempting uninstall: python-dateutil\n Found existing installation: python-dateutil 2.8.1\n Uninstalling python-dateutil-2.8.1:\n Successfully uninstalled python-dateutil-2.8.1\n Attempting uninstall: pyparsing\n Found existing installation: pyparsing 2.4.7\n Uninstalling pyparsing-2.4.7:\n Successfully uninstalled pyparsing-2.4.7\n Attempting uninstall: jupyter-core\n Found existing installation: jupyter-core 
4.7.1\n Uninstalling jupyter-core-4.7.1:\n Successfully uninstalled jupyter-core-4.7.1\n Attempting uninstall: jsonschema\n Found existing installation: jsonschema 3.2.0\n Uninstalling jsonschema-3.2.0:\n Successfully uninstalled jsonschema-3.2.0\n Attempting uninstall: webencodings\n Found existing installation: webencodings 0.5.1\n Uninstalling webencodings-0.5.1:\n Successfully uninstalled webencodings-0.5.1\n Attempting uninstall: pygments\n Found existing installation: Pygments 2.9.0\n Uninstalling Pygments-2.9.0:\n Successfully uninstalled Pygments-2.9.0\n Attempting uninstall: packaging\n Found existing installation: packaging 20.9\n Uninstalling packaging-20.9:\n Successfully uninstalled packaging-20.9\n Attempting uninstall: numpy\n Found existing installation: numpy 1.19.3\n Uninstalling numpy-1.19.3:\n Successfully uninstalled numpy-1.19.3\n Attempting uninstall: nest-asyncio\n Found existing installation: nest-asyncio 1.5.1\n Uninstalling nest-asyncio-1.5.1:\n Successfully uninstalled nest-asyncio-1.5.1\n Attempting uninstall: nbformat\n Found existing installation: nbformat 5.0.8\n Uninstalling nbformat-5.0.8:\n Successfully uninstalled nbformat-5.0.8\n Attempting uninstall: MarkupSafe\n Found existing installation: MarkupSafe 1.1.1\n Uninstalling MarkupSafe-1.1.1:\n Successfully uninstalled MarkupSafe-1.1.1\n Attempting uninstall: jupyter-client\n Found existing installation: jupyter-client 6.1.12\n Uninstalling jupyter-client-6.1.12:\n Successfully uninstalled jupyter-client-6.1.12\n Attempting uninstall: cached-property\n Found existing installation: cached-property 1.5.2\n Uninstalling cached-property-1.5.2:\n Successfully uninstalled cached-property-1.5.2\n Attempting uninstall: async-generator\n Found existing installation: async-generator 1.10\n Uninstalling async-generator-1.10:\n Successfully uninstalled async-generator-1.10\n Attempting uninstall: testpath\n Found existing installation: testpath 0.4.4\n Uninstalling testpath-0.4.4:\n Successfully uninstalled testpath-0.4.4\n Attempting uninstall: scipy\n Found existing installation: scipy 1.5.4\n Uninstalling scipy-1.5.4:\n Successfully uninstalled scipy-1.5.4\n Attempting uninstall: pyyaml\n Found existing installation: PyYAML 5.4.1\n Uninstalling PyYAML-5.4.1:\n Successfully uninstalled PyYAML-5.4.1\n Attempting uninstall: pandocfilters\n Found existing installation: pandocfilters 1.4.3\n Uninstalling pandocfilters-1.4.3:\n Successfully uninstalled pandocfilters-1.4.3\n Attempting uninstall: nbclient\n Found existing installation: nbclient 0.5.3\n Uninstalling nbclient-0.5.3:\n Successfully uninstalled nbclient-0.5.3\n Attempting uninstall: mistune\n Found existing installation: mistune 0.8.4\n Uninstalling mistune-0.8.4:\n Successfully uninstalled mistune-0.8.4\n Attempting uninstall: jupyterlab-pygments\n Found existing installation: jupyterlab-pygments 0.1.2\n Uninstalling jupyterlab-pygments-0.1.2:\n Successfully uninstalled jupyterlab-pygments-0.1.2\n Attempting uninstall: jinja2\n Found existing installation: Jinja2 2.11.3\n Uninstalling Jinja2-2.11.3:\n Successfully uninstalled Jinja2-2.11.3\n Attempting uninstall: h5py\n Found existing installation: h5py 3.1.0\n Uninstalling h5py-3.1.0:\n Successfully uninstalled h5py-3.1.0\n Attempting uninstall: entrypoints\n Found existing installation: entrypoints 0.3\n Uninstalling entrypoints-0.3:\n Successfully uninstalled entrypoints-0.3\n Attempting uninstall: defusedxml\n Found existing installation: defusedxml 0.7.1\n Uninstalling defusedxml-0.7.1:\n Successfully 
uninstalled defusedxml-0.7.1\n Attempting uninstall: bleach\n Found existing installation: bleach 3.3.0\n Uninstalling bleach-3.3.0:\n Successfully uninstalled bleach-3.3.0\n Attempting uninstall: py-cpuinfo\n Found existing installation: py-cpuinfo 7.0.0\n Uninstalling py-cpuinfo-7.0.0:\n Successfully uninstalled py-cpuinfo-7.0.0\n Attempting uninstall: psutil\n Found existing installation: psutil 5.7.3\n Uninstalling psutil-5.7.3:\n Successfully uninstalled psutil-5.7.3\n Attempting uninstall: nbconvert\n Found existing installation: nbconvert 6.0.7\n Uninstalling nbconvert-6.0.7:\n Successfully uninstalled nbconvert-6.0.7\n Attempting uninstall: keras\n Found existing installation: Keras 2.4.3\n Uninstalling Keras-2.4.3:\n Successfully uninstalled Keras-2.4.3\n Attempting uninstall: ait-sdk\n Found existing installation: ait-sdk 0.1.7\n Uninstalling ait-sdk-0.1.7:\n Successfully uninstalled ait-sdk-0.1.7\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ntensorflow 2.3.0 requires h5py<2.11.0,>=2.10.0, but you have h5py 3.1.0 which is incompatible.\ntensorflow 2.3.0 requires numpy<1.19.0,>=1.16.0, but you have numpy 1.19.3 which is incompatible.\ntensorflow 2.3.0 requires scipy==1.4.1, but you have scipy 1.5.4 which is incompatible.\u001b[0m\nSuccessfully installed MarkupSafe-1.1.1 ait-sdk-0.1.7 async-generator-1.10 attrs-21.2.0 bleach-3.3.0 cached-property-1.5.2 decorator-5.0.7 defusedxml-0.7.1 entrypoints-0.3 h5py-3.1.0 importlib-metadata-4.0.1 ipython-genutils-0.2.0 jinja2-2.11.3 jsonschema-3.2.0 jupyter-client-6.1.12 jupyter-core-4.7.1 jupyterlab-pygments-0.1.2 keras-2.4.3 mistune-0.8.4 nbclient-0.5.3 nbconvert-6.0.7 nbformat-5.0.8 nest-asyncio-1.5.1 numpy-1.19.3 packaging-20.9 pandocfilters-1.4.3 psutil-5.7.3 py-cpuinfo-7.0.0 pygments-2.9.0 pyparsing-2.4.7 pyrsistent-0.17.3 python-dateutil-2.8.1 pyyaml-5.4.1 pyzmq-22.0.3 scipy-1.5.4 setuptools-56.2.0 six-1.16.0 testpath-0.4.4 tornado-6.1 traitlets-4.3.3 typing-extensions-3.10.0.0 webencodings-0.5.1 zipp-3.4.1\n\u001b[33mWARNING: Running pip as root will break packages and permissions. 
You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\u001b[0m\n" ], [ "from pathlib import Path\nimport pprint\nfrom ait_sdk.test.hepler import Helper\nimport json", "_____no_output_____" ], [ "# settings cell\n\n# mounted dir\nroot_dir = Path('/workdir/root/ait')\n\nait_name='eval_mnist_data_coverage'\nait_version='0.1'\n\nait_full_name=f'{ait_name}_{ait_version}'\nait_dir = root_dir / ait_full_name\n\ntd_name=f'{ait_name}_test'\n\n# root folder (on the docker host side) for assets used for inventory registration\ncurrent_dir = %pwd\nwith open(f'{current_dir}/config.json', encoding='utf-8') as f:\n    json_ = json.load(f)\n    root_dir = json_['host_ait_root_dir']\n    is_container = json_['is_container']\ninventory_root_dir = f'{root_dir}\\\\ait\\\\{ait_full_name}\\\\local_qai\\\\inventory'\n \n# entry point address\n# the port number depends on whether we run inside a container, so switch accordingly\nif is_container:\n    backend_entry_point = 'http://host.docker.internal:8888/qai-testbed/api/0.0.1'\n    ip_entry_point = 'http://host.docker.internal:8888/qai-ip/api/0.0.1'\nelse:\n    backend_entry_point = 'http://host.docker.internal:5000/qai-testbed/api/0.0.1'\n    ip_entry_point = 'http://host.docker.internal:6000/qai-ip/api/0.0.1'\n\n# AIT deployment flag\n# once this has been run, it does not need to be run again\nis_init_ait = True\n#is_init_ait = False\n\n# inventory registration flag\n# once this has been run, it does not need to be run again\nis_init_inventory = True\n", "_____no_output_____" ], [ "helper = Helper(backend_entry_point=backend_entry_point, \n                ip_entry_point=ip_entry_point,\n                ait_dir=ait_dir,\n                ait_full_name=ait_full_name)", "_____no_output_____" ], [ "# health check\n\nhelper.get_bk('/health-check')\nhelper.get_ip('/health-check')", "<Response [200]>\n{'Code': 0, 'Message': 'alive.'}\n<Response [200]>\n{'Code': 0, 'Message': 'alive.'}\n" ], [ "# create ml-component\nres = helper.post_ml_component(name=f'MLComponent_{ait_full_name}', description=f'Description of {ait_full_name}', problem_domain=f'ProblemDomain of {ait_full_name}')\nhelper.set_ml_component_id(res['MLComponentId'])", "<Response [200]>\n{'MLComponentId': 3,\n 'Result': {'Code': 'P22000', 'Message': 'add ml-component success.'}}\n" ], [ "# deploy AIT\nif is_init_ait:\n    helper.deploy_ait_non_build()\nelse:\n    print('skip deploy AIT')", "<Response [400]>\n{'Code': 'T54000',\n 'Message': 'already exist ait = eval_mnist_data_coverage-0.1'}\n<Response [200]>\n{'Code': 'D00001', 'Message': 'Deploy success'}\n" ], [ "res = helper.get_data_types()\nmodel_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'model'][0]['Id']\ndataset_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'dataset'][0]['Id']\nres = helper.get_file_systems()\nunix_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'UNIX_FILE_SYSTEM'][0]['Id']\nwindows_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'WINDOWS_FILE'][0]['Id']", "_____no_output_____" ], [ "# add inventories\n\nif is_init_inventory:\n    inv1_name = helper.post_inventory('images', dataset_data_type_id, windows_file_system_id, \n                                      f'{inventory_root_dir}\\\\train_images\\\\train-images-idx3-ubyte.gz',\n                                      'MNIST images', ['gz'])\n    inv2_name = helper.post_inventory('labels', dataset_data_type_id, windows_file_system_id, \n                                      f'{inventory_root_dir}\\\\train_labels\\\\train-labels-idx1-ubyte.gz',\n                                      'MNIST labels', ['gz'])\nelse:\n    print('skip add inventories')", "<Response [200]>\n{'result': {'Code': 'I22000', 'Message': 'append Inventory success.'}}\n<Response [200]>\n{'result': {'Code': 'I22000', 'Message': 'append Inventory success.'}}\n" ], [ "# get ait_json and inventory_jsons\n\nres_json = 
helper.get_bk('/QualityMeasurements/RelationalOperators', is_print_json=False).json()\neq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '=='][0])\nnq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '!='][0])\ngt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>'][0])\nge_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>='][0])\nlt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<'][0])\nle_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<='][0])\n\nres_json = helper.get_bk('/testRunners', is_print_json=False).json()\nait_json = [j for j in res_json['TestRunners'] if j['Name'] == ait_name][-1]\n\ninv_1_json = helper.get_inventory(inv1_name)\ninv_2_json = helper.get_inventory(inv2_name)", "<Response [200]>\n<Response [200]>\n<Response [200]>\n<Response [200]>\n" ], [ "# add teast_descriptions\n\nhelper.post_td(td_name, 3,\n quality_measurements=[\n {\"Id\":ait_json['Report']['Measures'][0]['Id'], \"Value\":\"0.75\", \"RelationalOperatorId\":gt_id, \"Enable\":True},\n {\"Id\":ait_json['Report']['Measures'][1]['Id'], \"Value\":\"0.75\", \"RelationalOperatorId\":gt_id, \"Enable\":True}\n ],\n target_inventories=[\n {\"Id\":1, \"InventoryId\": inv_1_json['Id'], \"TemplateInventoryId\": ait_json['TargetInventories'][0]['Id']},\n {\"Id\":2, \"InventoryId\": inv_2_json['Id'], \"TemplateInventoryId\": ait_json['TargetInventories'][1]['Id']}\n ],\n test_runner={\n \"Id\":ait_json['Id'],\n \"Params\":[\n {\"TestRunnerParamTemplateId\":ait_json['ParamTemplates'][0]['Id'], \"Value\":\"Area\"},\n {\"TestRunnerParamTemplateId\":ait_json['ParamTemplates'][1]['Id'], \"Value\":\"100\"},\n {\"TestRunnerParamTemplateId\":ait_json['ParamTemplates'][2]['Id'], \"Value\":\"800\"}\n ]\n })", "<Response [200]>\n{'Result': {'Code': 'T22000', 'Message': 'append test description success.'}}\n" ], [ "# get test_description_jsons\ntd_1_json = helper.get_td(td_name)", "<Response [200]>\n" ], [ "# run test_descriptions\nhelper.post_run_and_wait(td_1_json['Id'])", "<Response [200]>\n{'Job': {'Id': '2', 'StartDateTime': '2021-05-11 14:38:50.251150+09:00'},\n 'Result': {'Code': 'R12000', 'Message': 'job launch success.'}}\n[{'Id': 2,\n 'Result': 'OK',\n 'ResultDetail': 'coverage_total_measures : OK.\\n'\n 'coverage_each_measures : OK.\\n',\n 'Status': 'DONE',\n 'TestDescriptionID': 3}]\n" ], [ "res_json = helper.get_td_detail(td_1_json['Id'])\npprint.pprint(res_json)", "<Response [200]>\n{'Result': {'Code': 'T32000', 'Message': 'get detail success.'},\n 'TestDescriptionDetail': {'Id': 3,\n 'Name': 'eval_mnist_data_coverage_test',\n 'Opinion': '',\n 'QualityDimension': {'Id': 3,\n 'Name': 'Diversity_of_test_data'},\n 'QualityMeasurements': [{'Description': 'Overall '\n 'coverage '\n 'within the '\n 'expected '\n 'range.',\n 'Enable': True,\n 'Id': 47,\n 'Name': 'coverage_total_measures',\n 'RelationalOperatorId': 3,\n 'Structure': 'single',\n 'Value': '0.75'},\n {'Description': 'Each class '\n 'coverage '\n 'within the '\n 'expected '\n 'range.',\n 'Enable': True,\n 'Id': 48,\n 'Name': 'coverage_each_measures',\n 'RelationalOperatorId': 3,\n 'Structure': 'sequence',\n 'Value': '0.75'}],\n 'Star': False,\n 'TargetInventories': [{'DataType': {'Id': 1,\n 'Name': 'dataset'},\n 'Description': 'MNIST images',\n 'Id': 4,\n 'Name': 'eval_mnist_data_coverage_0.1_images',\n 'TemplateInventoryId': 33},\n {'DataType': {'Id': 1,\n 'Name': 
'dataset'},\n 'Description': 'MNIST labels',\n 'Id': 5,\n 'Name': 'eval_mnist_data_coverage_0.1_labels',\n 'TemplateInventoryId': 34}],\n 'TestDescriptionResult': {'Detail': 'coverage_total_measures '\n ': OK.\\n'\n 'coverage_each_measures '\n ': OK.\\n',\n 'Downloads': [{'Description': 'AITLog',\n 'DownloadURL': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/13',\n 'FileName': 'ait.log',\n 'Id': 1,\n 'Name': 'Log'}],\n 'Graphs': [{'Description': 'Total '\n 'coverage '\n 'for '\n 'all '\n 'classes.',\n 'FileName': 'count_total_class.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/2',\n 'GraphType': 'picture',\n 'Id': 1,\n 'Name': 'count_total_class',\n 'ReportIndex': 1,\n 'ReportName': 'count_total_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class0.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/3',\n 'GraphType': 'picture',\n 'Id': 2,\n 'Name': 'count_each_class',\n 'ReportIndex': 2,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class1.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/4',\n 'GraphType': 'picture',\n 'Id': 3,\n 'Name': 'count_each_class',\n 'ReportIndex': 3,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class2.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/5',\n 'GraphType': 'picture',\n 'Id': 4,\n 'Name': 'count_each_class',\n 'ReportIndex': 4,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class3.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/6',\n 'GraphType': 'picture',\n 'Id': 5,\n 'Name': 'count_each_class',\n 'ReportIndex': 5,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class4.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/7',\n 'GraphType': 'picture',\n 'Id': 6,\n 'Name': 'count_each_class',\n 'ReportIndex': 6,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class5.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/8',\n 'GraphType': 'picture',\n 'Id': 7,\n 'Name': 'count_each_class',\n 'ReportIndex': 7,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class6.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/9',\n 'GraphType': 'picture',\n 'Id': 8,\n 'Name': 'count_each_class',\n 'ReportIndex': 8,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class7.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/10',\n 'GraphType': 'picture',\n 'Id': 9,\n 'Name': 'count_each_class',\n 'ReportIndex': 9,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class8.png',\n 'Graph': 
'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/11',\n 'GraphType': 'picture',\n 'Id': 10,\n 'Name': 'count_each_class',\n 'ReportIndex': 10,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True},\n {'Description': 'Total '\n 'coverage '\n 'for '\n 'each '\n 'classes.',\n 'FileName': 'count_class9.png',\n 'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/12',\n 'GraphType': 'picture',\n 'Id': 11,\n 'Name': 'count_each_class',\n 'ReportIndex': 11,\n 'ReportName': 'count_each_class',\n 'ReportRequired': True}],\n 'LogFile': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/14',\n 'Summary': 'OK'},\n 'TestRunner': {'Author': 'AIST',\n 'Description': '\\n'\n ' Calculate '\n 'coverage for the '\n 'contour area and '\n 'perimeter of the '\n 'dataset MNIST '\n 'image.\\r\\n'\n '\\n'\n ' \\r\\n'\n '\\n'\n ' '\n '𝐶𝑜𝑣[𝑥(𝑛)]=|𝑚𝑎𝑥{𝑥(𝑛)}−𝑚𝑖𝑛{𝑥(𝑛)}|/| '\n '〖ℎ𝑖𝑔ℎ〗_𝑖−〖𝑙𝑜𝑤〗_𝑖 | \\n'\n ' ',\n 'Email': '',\n 'Id': 15,\n 'LandingPage': '',\n 'Name': 'eval_mnist_data_coverage',\n 'Params': [{'Id': 5,\n 'Name': 'coverage_category',\n 'TestRunnerParamTemplateId': 47,\n 'Value': 'Area'},\n {'Id': 6,\n 'Name': 'interval',\n 'TestRunnerParamTemplateId': 48,\n 'Value': '100'},\n {'Id': 7,\n 'Name': 'max_range',\n 'TestRunnerParamTemplateId': 49,\n 'Value': '800'}],\n 'Quality': 'https://airc.aist.go.jp/aiqm/quality/internal/Diversity_of_test_data',\n 'Version': '0.1'}}}\n" ], [ "# generate report\nres = helper.post_report(td_1_json['Id'])\npprint.pprint(res)", "<Response [200]>\n{'OutParams': {'ReportUrl': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/15'},\n 'Result': {'Code': 'D12000', 'Message': 'command invoke success.'}}\n{'OutParams': {'ReportUrl': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/15'},\n 'Result': {'Code': 'D12000', 'Message': 'command invoke success.'}}\n" ] ] ]
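The AIT description embedded in the log above states the coverage measure as Cov[x(n)] = |max{x(n)} - min{x(n)}| / |high_i - low_i| (the garbled brackets around high and low are math-markup residue in the recorded string). The sketch below is an editorial reading of that formula, not the AIT's actual source code: it stands in a crude foreground-pixel count for the contour "Area" category and uses this run's `max_range=800`, assuming a local copy of the MNIST images file.

```python
# Editorial sketch: approximate per-image "Area" and apply the coverage formula.
import gzip
import numpy as np

def load_mnist_images(path):
    with gzip.open(path, 'rb') as f:
        data = np.frombuffer(f.read(), dtype=np.uint8, offset=16)  # skip 16-byte IDX header
    return data.reshape(-1, 28, 28)

images = load_mnist_images('train-images-idx3-ubyte.gz')  # hypothetical local path
areas = (images > 0).sum(axis=(1, 2))                     # crude stand-in for contour area
low, high = 0, 800                                        # expected range, from max_range=800
coverage = abs(int(areas.max()) - int(areas.min())) / abs(high - low)
print('total coverage =', coverage)                       # the run above requires > 0.75 for OK
```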
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a476f804f45a600453ed20077e4fa395346e76b
19,155
ipynb
Jupyter Notebook
PracticeBooks/01_working_with_data.ipynb
amansharma2910/DataScienceJulia
fab3f689a42503e0adccdf13ae909b57e0d107bb
[ "MIT" ]
null
null
null
PracticeBooks/01_working_with_data.ipynb
amansharma2910/DataScienceJulia
fab3f689a42503e0adccdf13ae909b57e0d107bb
[ "MIT" ]
null
null
null
PracticeBooks/01_working_with_data.ipynb
amansharma2910/DataScienceJulia
fab3f689a42503e0adccdf13ae909b57e0d107bb
[ "MIT" ]
null
null
null
35.73694
1,680
0.517724
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a47775954cbf1fba6bd938fd58bb48ad1041a2d
2,605
ipynb
Jupyter Notebook
Módulo12Aula04.ipynb
julianovale/pythonparatodos
09c851f695b50a2810b927611a8a10b15b23af2b
[ "MIT" ]
null
null
null
Módulo12Aula04.ipynb
julianovale/pythonparatodos
09c851f695b50a2810b927611a8a10b15b23af2b
[ "MIT" ]
null
null
null
Módulo12Aula04.ipynb
julianovale/pythonparatodos
09c851f695b50a2810b927611a8a10b15b23af2b
[ "MIT" ]
null
null
null
24.12037
243
0.396545
[ [ [ "<a href=\"https://colab.research.google.com/github/julianovale/pythonparatodos/blob/main/M%C3%B3dulo12Aula04.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "class Retangulo:\r\n def __init__(self, x = 0.0 , y = 0.0): # valores padrão\r\n self.x = x\r\n self.y = y\r\n def printa(self):\r\n print(\"x = \", self.x)\r\n print(\"y = \", self.y)", "_____no_output_____" ], [ "ra = Retangulo()\r\nprint('ra')\r\nra.printa()\r\nprint()\r\nrb = Retangulo(y = 8)\r\nprint('rb')\r\nrb.printa()\r\nprint()\r\nrc = Retangulo(8.1,9)\r\nprint('rc')\r\nrc.printa()\r\nprint()\r\nrd = Retangulo(8)\r\nprint('rd')\r\nrd.printa()\r\n", "ra\nx = 0.0\ny = 0.0\n\nrb\nx = 0.0\ny = 8\n\nrc\nx = 8.1\ny = 9\n\nrd\nx = 8\ny = 0.0\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
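The record above demonstrates default parameter values with immutable floats, which behave intuitively because Python evaluates defaults once, at definition time. With a mutable default the same mechanism becomes a classic trap; the `Bag` class below is a made-up illustration of the safe idiom, not part of the record.

```python
# Hedged sketch: mutable defaults are shared across calls, so use a None sentinel.
class Bag:
    def __init__(self, items=None):
        # `items=[]` in the signature would be evaluated once and shared by
        # every Bag() created without arguments; build a fresh list instead.
        self.items = [] if items is None else items
```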
4a4788fd24f9fb54f071763ca635ea82cf37cc81
20,586
ipynb
Jupyter Notebook
homework/HM2/HM2.ipynb
wangshusen/CS583-2021S
55f8ce5c1257f01333ab133f3ab447f208ff85a8
[ "MIT" ]
11
2020-11-30T12:48:06.000Z
2022-03-21T22:02:40.000Z
homework/HM2/HM2.ipynb
wangshusen/CS583-2021S
55f8ce5c1257f01333ab133f3ab447f208ff85a8
[ "MIT" ]
null
null
null
homework/HM2/HM2.ipynb
wangshusen/CS583-2021S
55f8ce5c1257f01333ab133f3ab447f208ff85a8
[ "MIT" ]
14
2021-02-05T09:48:48.000Z
2021-08-01T14:07:09.000Z
29.15864
346
0.524726
[ [ [ "# HM2: Numerical Optimization for Logistic Regression.\n\n### Name: [Your-Name?]\n", "_____no_output_____" ], [ "## 0. You will do the following:\n\n1. Read the lecture note: [click here](https://github.com/wangshusen/DeepLearning/blob/master/LectureNotes/Logistic/paper/logistic.pdf)\n\n2. Read, complete, and run my code.\n\n3. **Implement mini-batch SGD** and evaluate the performance.\n\n4. Convert the .IPYNB file to .HTML file.\n\n    * The HTML file must contain **the code** and **the output after execution**.\n    \n    * Missing **the output after execution** will not be graded.\n    \n5. Upload this .HTML file to your Google Drive, Dropbox, or your Github repo. (If you submit the file to Google Drive or Dropbox, you must make the file \"open-access\". The delay caused by \"denial of access\" may result in a late penalty.)\n\n6. Submit the link to this .HTML file to Canvas.\n\n    * Example: https://github.com/wangshusen/CS583-2020S/blob/master/homework/HM2/HM2.html\n\n\n## Grading criteria:\n\n1. When computing the ```gradient``` and ```objective function value``` using a batch of samples, use **matrix-vector multiplication** rather than a FOR LOOP of **vector-vector multiplications**.\n\n2. Plot ```objective function value``` against ```epochs```. In the plot, compare GD, SGD, and MB-SGD (with $b=8$ and $b=64$). The plot must look reasonable.", "_____no_output_____" ], [ "# 1. Data processing\n\n- Download the Diabetes dataset from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/diabetes\n- Load the data using sklearn.\n- Preprocess the data.", "_____no_output_____" ], [ "## 1.1. Load the data", "_____no_output_____" ] ], [ [ "from sklearn import datasets\nimport numpy\n\nx_sparse, y = datasets.load_svmlight_file('diabetes')\nx = x_sparse.todense()\n\nprint('Shape of x: ' + str(x.shape))\nprint('Shape of y: ' + str(y.shape))", "_____no_output_____" ] ], [ [ "## 1.2. Partition into training and test sets", "_____no_output_____" ] ], [ [ "# partition the data into training and test sets\nn = x.shape[0]\nn_train = 640\nn_test = n - n_train\n\nrand_indices = numpy.random.permutation(n)\ntrain_indices = rand_indices[0:n_train]\ntest_indices = rand_indices[n_train:n]\n\nx_train = x[train_indices, :]\nx_test = x[test_indices, :]\ny_train = y[train_indices].reshape(n_train, 1)\ny_test = y[test_indices].reshape(n_test, 1)\n\nprint('Shape of x_train: ' + str(x_train.shape))\nprint('Shape of x_test: ' + str(x_test.shape))\nprint('Shape of y_train: ' + str(y_train.shape))\nprint('Shape of y_test: ' + str(y_test.shape))", "_____no_output_____" ] ], [ [ "## 1.3. Feature scaling", "_____no_output_____" ], [ "Use standardization to transform both training and test features", "_____no_output_____" ] ], [ [ "# Standardization\nimport numpy\n\n# calculate mu and sig using the training set\nd = x_train.shape[1]\nmu = numpy.mean(x_train, axis=0).reshape(1, d)\nsig = numpy.std(x_train, axis=0).reshape(1, d)\n\n# transform the training features\nx_train = (x_train - mu) / (sig + 1E-6)\n\n# transform the test features\nx_test = (x_test - mu) / (sig + 1E-6)\n\nprint('test mean = ')\nprint(numpy.mean(x_test, axis=0))\n\nprint('test std = ')\nprint(numpy.std(x_test, axis=0))", "_____no_output_____" ] ], [ [ "## 1.4. 
Add a dimension of all ones", "_____no_output_____" ] ], [ [ "n_train, d = x_train.shape\nx_train = numpy.concatenate((x_train, numpy.ones((n_train, 1))), axis=1)\n\nn_test, d = x_test.shape\nx_test = numpy.concatenate((x_test, numpy.ones((n_test, 1))), axis=1)\n\nprint('Shape of x_train: ' + str(x_train.shape))\nprint('Shape of x_test: ' + str(x_test.shape))", "_____no_output_____" ] ], [ [ "# 2. Logistic regression model\n\nThe objective function is $Q (w; X, y) = \\frac{1}{n} \\sum_{i=1}^n \\log \\Big( 1 + \\exp \\big( - y_i x_i^T w \\big) \\Big) + \\frac{\\lambda}{2} \\| w \\|_2^2 $.", "_____no_output_____" ] ], [ [ "# Calculate the objective function value\n# Inputs:\n# w: d-by-1 matrix\n# x: n-by-d matrix\n# y: n-by-1 matrix\n# lam: scalar, the regularization parameter\n# Return:\n# objective function value (scalar)\ndef objective(w, x, y, lam):\n n, d = x.shape\n yx = numpy.multiply(y, x) # n-by-d matrix\n yxw = numpy.dot(yx, w) # n-by-1 matrix\n vec1 = numpy.exp(-yxw) # n-by-1 matrix\n vec2 = numpy.log(1 + vec1) # n-by-1 matrix\n loss = numpy.mean(vec2) # scalar\n reg = lam / 2 * numpy.sum(w * w) # scalar\n return loss + reg\n ", "_____no_output_____" ], [ "# initialize w\nd = x_train.shape[1]\nw = numpy.zeros((d, 1))\n\n# evaluate the objective function value at w\nlam = 1E-6\nobjval0 = objective(w, x_train, y_train, lam)\nprint('Initial objective function value = ' + str(objval0))", "_____no_output_____" ] ], [ [ "# 3. Numerical optimization", "_____no_output_____" ], [ "## 3.1. Gradient descent\n", "_____no_output_____" ], [ "The gradient at $w$ is $g = - \\frac{1}{n} \\sum_{i=1}^n \\frac{y_i x_i }{1 + \\exp ( y_i x_i^T w)} + \\lambda w$", "_____no_output_____" ] ], [ [ "# Calculate the gradient\n# Inputs:\n# w: d-by-1 matrix\n# x: n-by-d matrix\n# y: n-by-1 matrix\n# lam: scalar, the regularization parameter\n# Return:\n# g: g: d-by-1 matrix, full gradient\ndef gradient(w, x, y, lam):\n n, d = x.shape\n yx = numpy.multiply(y, x) # n-by-d matrix\n yxw = numpy.dot(yx, w) # n-by-1 matrix\n vec1 = numpy.exp(yxw) # n-by-1 matrix\n vec2 = numpy.divide(yx, 1+vec1) # n-by-d matrix\n vec3 = -numpy.mean(vec2, axis=0).reshape(d, 1) # d-by-1 matrix\n g = vec3 + lam * w\n return g", "_____no_output_____" ], [ "# Gradient descent for solving logistic regression\n# Inputs:\n# x: n-by-d matrix\n# y: n-by-1 matrix\n# lam: scalar, the regularization parameter\n# stepsize: scalar\n# max_iter: integer, the maximal iterations\n# w: d-by-1 matrix, initialization of w\n# Return:\n# w: d-by-1 matrix, the solution\n# objvals: a record of each iteration's objective value\ndef grad_descent(x, y, lam, stepsize, max_iter=100, w=None):\n n, d = x.shape\n objvals = numpy.zeros(max_iter) # store the objective values\n if w is None:\n w = numpy.zeros((d, 1)) # zero initialization\n \n for t in range(max_iter):\n objval = objective(w, x, y, lam)\n objvals[t] = objval\n print('Objective value at t=' + str(t) + ' is ' + str(objval))\n g = gradient(w, x, y, lam)\n w -= stepsize * g\n \n return w, objvals", "_____no_output_____" ] ], [ [ "Run gradient descent.", "_____no_output_____" ] ], [ [ "lam = 1E-6\nstepsize = 1.0\nw, objvals_gd = grad_descent(x_train, y_train, lam, stepsize)", "_____no_output_____" ] ], [ [ "## 3.2. 
Stochastic gradient descent (SGD)\n\nDefine $Q_i (w) = \\log \\Big( 1 + \\exp \\big( - y_i x_i^T w \\big) \\Big) + \\frac{\\lambda}{2} \\| w \\|_2^2 $.\n\nThe stochastic gradient at $w$ is $g_i = \\frac{\\partial Q_i }{ \\partial w} = -\\frac{y_i x_i }{1 + \\exp ( y_i x_i^T w)} + \\lambda w$.", "_____no_output_____" ] ], [ [ "# Calculate the objective Q_i and the gradient of Q_i\n# Inputs:\n# w: d-by-1 matrix\n# xi: 1-by-d matrix\n# yi: scalar\n# lam: scalar, the regularization parameter\n# Return:\n# obj: scalar, the objective Q_i\n# g: d-by-1 matrix, gradient of Q_i\ndef stochastic_objective_gradient(w, xi, yi, lam):\n yx = yi * xi # 1-by-d matrix\n yxw = float(numpy.dot(yx, w)) # scalar\n \n # calculate objective function Q_i\n loss = numpy.log(1 + numpy.exp(-yxw)) # scalar\n reg = lam / 2 * numpy.sum(w * w) # scalar\n obj = loss + reg\n \n # calculate stochastic gradient\n g_loss = -yx.T / (1 + numpy.exp(yxw)) # d-by-1 matrix\n g = g_loss + lam * w # d-by-1 matrix\n \n return obj, g", "_____no_output_____" ], [ "# SGD for solving logistic regression\n# Inputs:\n# x: n-by-d matrix\n# y: n-by-1 matrix\n# lam: scalar, the regularization parameter\n# stepsize: scalar\n# max_epoch: integer, the maximal epochs\n# w: d-by-1 matrix, initialization of w\n# Return:\n# w: the solution\n# objvals: record of each iteration's objective value\ndef sgd(x, y, lam, stepsize, max_epoch=100, w=None):\n n, d = x.shape\n objvals = numpy.zeros(max_epoch) # store the objective values\n if w is None:\n w = numpy.zeros((d, 1)) # zero initialization\n \n for t in range(max_epoch):\n # randomly shuffle the samples\n rand_indices = numpy.random.permutation(n)\n x_rand = x[rand_indices, :]\n y_rand = y[rand_indices, :]\n \n objval = 0 # accumulate the objective values\n for i in range(n):\n xi = x_rand[i, :] # 1-by-d matrix\n yi = float(y_rand[i, :]) # scalar\n obj, g = stochastic_objective_gradient(w, xi, yi, lam)\n objval += obj\n w -= stepsize * g\n \n stepsize *= 0.9 # decrease step size\n objval /= n\n objvals[t] = objval\n print('Objective value at epoch t=' + str(t) + ' is ' + str(objval))\n \n return w, objvals", "_____no_output_____" ] ], [ [ "Run SGD.", "_____no_output_____" ] ], [ [ "lam = 1E-6\nstepsize = 0.1\nw, objvals_sgd = sgd(x_train, y_train, lam, stepsize)", "_____no_output_____" ] ], [ [ "# 4. Compare GD with SGD\n\nPlot objective function values against epochs.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nfig = plt.figure(figsize=(6, 4))\n\nepochs_gd = range(len(objvals_gd))\nepochs_sgd = range(len(objvals_sgd))\n\nline0, = plt.plot(epochs_gd, objvals_gd, '--b', LineWidth=4)\nline1, = plt.plot(epochs_sgd, objvals_sgd, '-r', LineWidth=2)\nplt.xlabel('Epochs', FontSize=20)\nplt.ylabel('Objective Value', FontSize=20)\nplt.xticks(FontSize=16)\nplt.yticks(FontSize=16)\nplt.legend([line0, line1], ['GD', 'SGD'], fontsize=20)\nplt.tight_layout()\nplt.show()\nfig.savefig('compare_gd_sgd.pdf', format='pdf', dpi=1200)", "_____no_output_____" ] ], [ [ "# 5. 
Prediction", "_____no_output_____" ] ], [ [ "# Predict class label\n# Inputs:\n# w: d-by-1 matrix\n# X: m-by-d matrix\n# Return:\n# f: m-by-1 matrix, the predictions\ndef predict(w, X):\n xw = numpy.dot(X, w)\n f = numpy.sign(xw)\n return f", "_____no_output_____" ], [ "# evaluate training error\nf_train = predict(w, x_train)\ndiff = numpy.abs(f_train - y_train) / 2\nerror_train = numpy.mean(diff)\nprint('Training classification error is ' + str(error_train))", "_____no_output_____" ], [ "# evaluate test error\nf_test = predict(w, x_test)\ndiff = numpy.abs(f_test - y_test) / 2\nerror_test = numpy.mean(diff)\nprint('Test classification error is ' + str(error_test))", "_____no_output_____" ] ], [ [ "# 6. Mini-batch SGD (fill the code)\n\n", "_____no_output_____" ], [ "## 6.1. Compute the objective $Q_I$ and its gradient using a batch of samples\n\nDefine $Q_I (w) = \\frac{1}{b} \\sum_{i \\in I} \\log \\Big( 1 + \\exp \\big( - y_i x_i^T w \\big) \\Big) + \\frac{\\lambda}{2} \\| w \\|_2^2 $, where $I$ is a set containing $b$ indices randomly drawn from $\\{ 1, \\cdots , n \\}$ without replacement.\n\nThe stochastic gradient at $w$ is $g_I = \\frac{\\partial Q_I }{ \\partial w} = \\frac{1}{b} \\sum_{i \\in I} \\frac{- y_i x_i }{1 + \\exp ( y_i x_i^T w)} + \\lambda w$.", "_____no_output_____" ] ], [ [ "# Calculate the objective Q_I and the gradient of Q_I\n# Inputs:\n# w: d-by-1 matrix\n# xi: b-by-d matrix\n# yi: b-by-1 matrix\n# lam: scalar, the regularization parameter\n# b: integer, the batch size\n# Return:\n# obj: scalar, the objective Q_i\n# g: d-by-1 matrix, gradient of Q_i\ndef mb_stochastic_objective_gradient(w, xi, yi, lam, b):\n # Fill the function\n # Follow the implementation of stochastic_objective_gradient\n # Use matrix-vector multiplication; do not use FOR LOOP of vector-vector multiplications\n ...\n \n return obj, g", "_____no_output_____" ] ], [ [ "## 6.2. Implement mini-batch SGD\n\nHints:\n1. In every epoch, randomly permute the $n$ samples (just like SGD).\n2. Each epoch has $\\frac{n}{b}$ iterations. In every iteration, use $b$ samples, and compute the gradient and objective using the ``mb_stochastic_objective_gradient`` function. In the next iteration, use the next $b$ samples, and so on.\n", "_____no_output_____" ] ], [ [ "# Mini-Batch SGD for solving logistic regression\n# Inputs:\n# x: n-by-d matrix\n# y: n-by-1 matrix\n# lam: scalar, the regularization parameter\n# b: integer, the batch size\n# stepsize: scalar\n# max_epoch: integer, the maximal epochs\n# w: d-by-1 matrix, initialization of w\n# Return:\n# w: the solution\n# objvals: record of each iteration's objective value\ndef mb_sgd(x, y, lam, b, stepsize, max_epoch=100, w=None):\n # Fill the function\n # Follow the implementation of sgd\n # Record one objective value per epoch (not per iteration!)\n ...\n \n return w, objvals", "_____no_output_____" ] ], [ [ "## 6.3. Run MB-SGD", "_____no_output_____" ] ], [ [ "# MB-SGD with batch size b=8\nlam = 1E-6 # do not change\nb = 8 # do not change\nstepsize = 0.1 # you must tune this parameter\n\nw, objvals_mbsgd8 = mb_sgd(x_train, y_train, lam, b, stepsize)", "_____no_output_____" ], [ "# MB-SGD with batch size b=64\nlam = 1E-6 # do not change\nb = 64 # do not change\nstepsize = 0.1 # you must tune this parameter\n\nw, objvals_mbsgd64 = mb_sgd(x_train, y_train, lam, b, stepsize)", "_____no_output_____" ] ], [ [ "# 7. 
Plot and compare GD, SGD, and MB-SGD", "_____no_output_____" ], [ "You are required to compare the following algorithms:\n\n- Gradient descent (GD)\n\n- SGD\n\n- MB-SGD with b=8\n\n- MB-SGD with b=64\n\nFollow the code in Section 4 to plot ```objective function value``` against ```epochs```. There should be four curves in the plot; each curve corresponds to one algorithm.", "_____no_output_____" ], [ "Hint: Logistic regression with $\\ell_2$-norm regularization is a strongly convex optimization problem. All the algorithms will converge to the same solution. **In the end, the ``objective function value`` of the 4 algorithms will be the same. If not the same, your implementation must be wrong. Do NOT submit wrong code and wrong result!**", "_____no_output_____" ] ], [ [ "# plot the 4 curves:", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
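A note on the HM2 record above: the `mb_stochastic_objective_gradient` and `mb_sgd` stubs are intentionally blank homework templates, so their `...` placeholders are preserved as-is. For reference only, the sketch below is one possible implementation, not the official solution. It assumes the shape conventions used throughout the notebook (`x` is n-by-d, `y` is n-by-1, `w` is d-by-1) and mirrors the vectorized `objective`/`gradient` cells, using matrix-vector products rather than a per-sample loop.

```python
# Hedged sketch of the two HM2 stubs (one possible answer, not the official one).
import numpy

def mb_stochastic_objective_gradient(w, xi, yi, lam, b):
    # xi: b-by-d batch, yi: b-by-1 batch, w: d-by-1 weights
    yx = numpy.multiply(yi, xi)                      # b-by-d
    yxw = numpy.dot(yx, w)                           # b-by-1
    obj = numpy.mean(numpy.log(1 + numpy.exp(-yxw))) + lam / 2 * numpy.sum(w * w)
    d = xi.shape[1]
    g = -numpy.mean(numpy.divide(yx, 1 + numpy.exp(yxw)), axis=0).reshape(d, 1) + lam * w
    return obj, g

def mb_sgd(x, y, lam, b, stepsize, max_epoch=100, w=None):
    n, d = x.shape
    objvals = numpy.zeros(max_epoch)
    if w is None:
        w = numpy.zeros((d, 1))
    n_batches = n // b
    for t in range(max_epoch):
        rand_indices = numpy.random.permutation(n)   # reshuffle every epoch
        x_rand = x[rand_indices, :]
        y_rand = y[rand_indices, :]
        objval = 0
        for i in range(n_batches):
            xi = x_rand[i * b:(i + 1) * b, :]        # next b samples
            yi = y_rand[i * b:(i + 1) * b, :]
            obj, g = mb_stochastic_objective_gradient(w, xi, yi, lam, b)
            objval += obj
            w -= stepsize * g
        stepsize *= 0.9                              # decay, as in the sgd cell
        objvals[t] = objval / n_batches              # one value per epoch
    return w, objvals
```

Section 7's comparison is then the Section 4 plotting cell with two extra curves for `objvals_mbsgd8` and `objvals_mbsgd64`; since the regularized objective is strongly convex, all four curves should settle at the same value.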
4a478df93dc1d17fecf47e53902c8a1c2e6c3ed3
6,512
ipynb
Jupyter Notebook
Blanchett Smile.ipynb
hoostus/prime-harvesting
6606b94ea7859fbf217dbea4ace856e3fa4d154e
[ "BlueOak-1.0.0", "Apache-2.0" ]
23
2016-09-07T06:13:37.000Z
2022-02-17T23:49:03.000Z
Blanchett Smile.ipynb
hoostus/prime-harvesting
6606b94ea7859fbf217dbea4ace856e3fa4d154e
[ "BlueOak-1.0.0", "Apache-2.0" ]
null
null
null
Blanchett Smile.ipynb
hoostus/prime-harvesting
6606b94ea7859fbf217dbea4ace856e3fa4d154e
[ "BlueOak-1.0.0", "Apache-2.0" ]
12
2016-06-30T17:27:39.000Z
2021-12-12T07:54:27.000Z
24.389513
112
0.514896
[ [ [ "%matplotlib inline\n\nimport math\nimport numpy\nimport pandas\nimport seaborn\nimport matplotlib.pyplot as plt\nimport plot", "_____no_output_____" ], [ "def fmt_money(number):\n    return \"${:,.0f}\".format(number)", "_____no_output_____" ], [ "def run_pmt(market, pmt_rate):\n    portfolio = 1_000_000\n    age = 65\n    max_age = 100\n    df = pandas.DataFrame(index=range(age, max_age), columns=['withdrawal', 'portfolio'])\n    for i in range(age, max_age):\n        withdraw = -numpy.pmt(pmt_rate, max_age-i, portfolio, 0, 1)\n        portfolio -= withdraw\n        portfolio *= (1 + market)\n        df.loc[i] = [int(withdraw), int(portfolio)]\n    return df", "_____no_output_____" ], [ "pmt_df = run_pmt(0.03, 0.04)\npmt_df.head()", "_____no_output_____" ], [ "def run_smile(target):\n    spend = target\n    s = pandas.Series(index=range(66,100), dtype=int)\n    for age in range(66, 100):\n        d = (0.00008 * age * age) - (0.0125 * age) - (0.0066 * math.log(target)) + 0.546\n        spend *= (1 + d)\n        s.loc[age] = int(spend)\n    return s", "_____no_output_____" ], [ "smile_s = run_smile(pmt_df.iloc[0]['withdrawal'])\nsmile_s.head()", "_____no_output_____" ], [ "def rmse(s1, s2):\n    return numpy.sqrt(numpy.mean((s1-s2)**2))", "_____no_output_____" ], [ "rmse(pmt_df['withdrawal'][1:26], smile_s[:26])", "_____no_output_____" ], [ "def harness():\n    df = pandas.DataFrame(columns=['market', 'pmtrate', 'rmse'])\n    for returns in numpy.arange(0.01, 0.10+0.001, 0.001):\n        for pmt_rate in numpy.arange(0.01, 0.10+0.001, 0.001):\n            pmt_df = run_pmt(returns, pmt_rate)\n            iwd = pmt_df.iloc[0]['withdrawal']\n            smile_s = run_smile(iwd)\n            errors = rmse(pmt_df['withdrawal'], smile_s)\n            df = df.append({'market': returns, 'pmtrate': pmt_rate, 'rmse': errors}, ignore_index=True)\n    return df", "_____no_output_____" ], [ "error_df = harness()\nerror_df.head()", "_____no_output_____" ], [ "#seaborn.scatterplot(data=error_df, x='market', y='pmtrate', size='rmse')", "_____no_output_____" ], [ "#seaborn.scatterplot(data=error_df[0:19], x='pmtrate', y='rmse')", "_____no_output_____" ], [ "error_df[0:91]", "_____no_output_____" ], [ "slice_size = 91\nn_slices = int(len(error_df) / slice_size)\nprint(len(error_df), n_slices, slice_size)\nfor i in range(n_slices):\n    start = i * slice_size\n    end = i * slice_size + slice_size\n    slice_df = error_df[start:end]\n    delta = slice_df['pmtrate'] - slice_df['market']\n    plot_df = pandas.DataFrame({'delta': delta, 'rmse': slice_df['rmse']})\n    sp = seaborn.scatterplot(data=plot_df, x='delta', y='rmse')\n    mkt_rate = slice_df.iloc[0]['market']\n    plt.xticks(numpy.arange(-0.100, +0.100, 0.005), rotation='vertical')\n#    plt.title(f'Market returns: {mkt_rate*100}%')", "_____no_output_____" ], [ "series = pandas.Series(index=range(40_000, 101_000, 5_000))\nfor t in range(40_000, 101_000, 5_000):\n    s = run_smile(t)\n    contingency = (t - s[0:20]).sum()\n    series.loc[t] = contingency", "_____no_output_____" ], [ "series.plot()\nplt.xlabel('Targeted annual withdrawal at retirement')\nplt.ylabel('Contingency fund')\nxticks = plt.xticks()\nplt.xticks(xticks[0], [fmt_money(x) for x in xticks[0]])\nyticks = plt.yticks()\nplt.yticks(yticks[0], [fmt_money(y) for y in yticks[0]])\nplt.title('Contingency at age 85')", "_____no_output_____" ], [ "series", "_____no_output_____" ], [ "(series / series.index).plot()\nplt.title('Ratio of contingency to expected spending')\nxticks = plt.xticks()\nplt.xticks(xticks[0], [fmt_money(x) for x in xticks[0]])", "_____no_output_____" ], [ "len(error_df)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
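A portability note on the record above: `numpy.pmt`, called inside `run_pmt`, was deprecated in NumPy 1.18 and removed in 1.20, so the notebook only runs unmodified on older NumPy, or with `numpy_financial.pmt` as the drop-in successor. A minimal closed-form stand-in is sketched below; it assumes the same sign convention as the original, with `when=1` meaning payment at the start of each period, which is how `run_pmt` calls it.

```python
# Hedged sketch: closed-form equivalent of the removed numpy.pmt.
def pmt(rate, nper, pv, fv=0.0, when=0):
    if rate == 0:
        return -(fv + pv) / nper
    factor = (1.0 + rate) ** nper  # compound growth over nper periods
    return -(fv + pv * factor) * rate / ((factor - 1.0) * (1.0 + rate * when))

# run_pmt(0.03, 0.04)'s first-year withdrawal would then be -pmt(0.04, 35, 1_000_000, 0, 1)
```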
4a47abb58305a6707246821347aa528ec6141ead
7,472
ipynb
Jupyter Notebook
sagemaker/4_model_training.ipynb
Otarkia/aws-fleet-predictive-maintenance
d6633bb49acd357d0c760c431368070e9680617f
[ "Apache-2.0" ]
37
2020-07-14T19:01:09.000Z
2022-03-04T20:58:03.000Z
sagemaker/4_model_training.ipynb
IronOnet/aws-fleet-predictive-maintenance
d6633bb49acd357d0c760c431368070e9680617f
[ "Apache-2.0" ]
null
null
null
sagemaker/4_model_training.ipynb
IronOnet/aws-fleet-predictive-maintenance
d6633bb49acd357d0c760c431368070e9680617f
[ "Apache-2.0" ]
17
2020-07-15T00:04:29.000Z
2022-02-06T05:29:58.000Z
30.373984
133
0.498796
[ [ [ "import os\nimport json\nimport boto3\nimport sagemaker\nimport numpy as np", "_____no_output_____" ], [ "from source.config import Config\nconfig = Config(filename=\"config/config.yaml\")", "_____no_output_____" ], [ "sage_session = sagemaker.session.Session()\ns3_bucket = config.S3_BUCKET \ns3_output_path = 's3://{}/'.format(s3_bucket)\nprint(\"S3 bucket path: {}\".format(s3_output_path))\n\n# run in local_mode on this machine, or as a SageMaker TrainingJob\nlocal_mode = False\n\nif local_mode:\n instance_type = 'local'\nelse:\n instance_type = \"ml.c5.xlarge\"\n \nrole = sagemaker.get_execution_role()\nprint(\"Using IAM role arn: {}\".format(role))\n# only run from SageMaker notebook instance\nif local_mode:\n !/bin/bash ./setup.sh\ncpu_or_gpu = 'gpu' if instance_type.startswith('ml.p') else 'cpu'", "_____no_output_____" ], [ "# create a descriptive job name \njob_name_prefix = 'HPO-pdm'", "_____no_output_____" ], [ "metric_definitions = [\n {'Name': 'Epoch', 'Regex': 'Epoch: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n {'Name': 'train_loss', 'Regex': 'Train loss: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n {'Name': 'train_acc', 'Regex': 'Train acc: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n {'Name': 'train_auc', 'Regex': 'Train auc: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n {'Name': 'test_loss', 'Regex': 'Test loss: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n {'Name': 'test_acc', 'Regex': 'Test acc: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n {'Name': 'test_auc', 'Regex': 'Test auc: ([-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?)'},\n]", "_____no_output_____" ], [ "from sagemaker.pytorch import PyTorch", "_____no_output_____" ] ], [ [ "# Define your data", "_____no_output_____" ] ], [ [ "print(\"Using dataset {}\".format(config.train_dataset_fn))", "_____no_output_____" ], [ "from sagemaker.s3 import S3Uploader\n\nkey_prefix='fpm-data'\ntraining_data = S3Uploader.upload(config.train_dataset_fn, 's3://{}/{}'.format(s3_bucket, key_prefix))\ntesting_data = S3Uploader.upload(config.test_dataset_fn, 's3://{}/{}'.format(s3_bucket, key_prefix))\n\nprint(\"Training data: {}\".format(training_data))\nprint(\"Testing data: {}\".format(testing_data))", "_____no_output_____" ] ], [ [ "# HPO", "_____no_output_____" ] ], [ [ "from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner\nmax_jobs = 20\nmax_parallel_jobs = 5", "_____no_output_____" ], [ "hyperparameter_ranges = {\n 'lr': ContinuousParameter(1e-5, 1e-2),\n 'batch_size': IntegerParameter(16, 256),\n 'dropout': ContinuousParameter(0.0, 0.8),\n \n 'fc_hidden_units': CategoricalParameter([\"[256, 128]\", \"[256, 128, 128]\", \"[256, 256, 128]\", \"[256, 128, 64]\"]),\n 'conv_channels': CategoricalParameter([\"[2, 8, 2]\", \"[2, 16, 2]\", \"[2, 16, 16, 2]\"]),\n}", "_____no_output_____" ], [ "estimator = PyTorch(entry_point=\"train.py\",\n source_dir='source',\n role=role,\n dependencies=[\"source/dl_utils\"],\n train_instance_type=instance_type,\n train_instance_count=1,\n output_path=s3_output_path,\n framework_version=\"1.5.0\",\n py_version='py3',\n base_job_name=job_name_prefix,\n metric_definitions=metric_definitions,\n hyperparameters= {\n 'epoch': 5000,\n 'target_column': config.target_column,\n 'sensor_headers': json.dumps(config.sensor_headers),\n 'train_input_filename': os.path.basename(config.train_dataset_fn),\n 'test_input_filename': os.path.basename(config.test_dataset_fn),\n }\n )\n\nif local_mode:\n estimator.fit({'train': training_data, 'test': 
testing_data})", "_____no_output_____" ], [ "tuner = HyperparameterTuner(estimator,\n objective_metric_name='test_auc',\n objective_type='Maximize',\n hyperparameter_ranges=hyperparameter_ranges,\n metric_definitions=metric_definitions,\n max_jobs=max_jobs,\n max_parallel_jobs=max_parallel_jobs,\n base_tuning_job_name=job_name_prefix)\ntuner.fit({'train': training_data, 'test': testing_data})\n\n# Save the HPO job name\nhpo_job_name = tuner.describe()['HyperParameterTuningJobName']\nif \"hpo_job_name\" in config.__dict__:\n !sed -i 's/hpo_job_name: .*/hpo_job_name: \\\"{hpo_job_name}\\\"/' config/config.yaml\nelse:\n !echo -e \"\\n\" >> config/config.yaml\n !echo \"hpo_job_name: \\\"$hpo_job_name\\\"\" >> config/config.yaml ", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
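Once the tuning job in the record above finishes, its trials can be inspected by name. The sketch below is a convenience that is not part of the original notebook; it assumes the `hpo_job_name` value saved to `config/config.yaml` in the final cell and uses the `HyperparameterTuningJobAnalytics` helper from the same `sagemaker` SDK already imported there.

```python
# Hedged sketch: summarize a finished tuning job by name.
from sagemaker import HyperparameterTuningJobAnalytics

analytics = HyperparameterTuningJobAnalytics(hpo_job_name)  # name saved to config.yaml above
trials_df = analytics.dataframe()                           # one row per training job

# The tuner maximizes test_auc, so the best trial has the largest final objective.
print(trials_df.sort_values("FinalObjectiveValue", ascending=False).head())
```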
4a47b373a602f26c860054af68b7a89c2686b453
9,285
ipynb
Jupyter Notebook
PythonDataScienceHandbook/notebooks/01.03-Magic-Commands.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
5
2017-09-21T14:03:10.000Z
2021-01-27T23:53:17.000Z
PythonDataScienceHandbook/notebooks/01.03-Magic-Commands.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
32
2017-09-06T11:48:47.000Z
2021-02-15T09:52:21.000Z
PythonDataScienceHandbook/notebooks/01.03-Magic-Commands.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
14
2017-05-08T10:37:34.000Z
2019-04-15T07:03:38.000Z
39.679487
349
0.604416
[ [ [ "<!--BOOK_INFORMATION-->\n<img align=\"left\" style=\"padding-right:10px;\" src=\"figures/PDSH-cover-small.png\">\n*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*\n\n*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*", "_____no_output_____" ], [ "<!--NAVIGATION-->\n< [Keyboard Shortcuts in the IPython Shell](01.02-Shell-Keyboard-Shortcuts.ipynb) | [Contents](Index.ipynb) | [Input and Output History](01.04-Input-Output-History.ipynb) >", "_____no_output_____" ], [ "# IPython Magic Commands", "_____no_output_____" ], [ "The previous two sections showed how IPython lets you use and explore Python efficiently and interactively.\nHere we'll begin discussing some of the enhancements that IPython adds on top of the normal Python syntax.\nThese are known in IPython as *magic commands*, and are prefixed by the ``%`` character.\nThese magic commands are designed to succinctly solve various common problems in standard data analysis.\nMagic commands come in two flavors: *line magics*, which are denoted by a single ``%`` prefix and operate on a single line of input, and *cell magics*, which are denoted by a double ``%%`` prefix and operate on multiple lines of input.\nWe'll demonstrate and discuss a few brief examples here, and come back to more focused discussion of several useful magic commands later in the chapter.", "_____no_output_____" ], [ "## Pasting Code Blocks: ``%paste`` and ``%cpaste``\n\nWhen working in the IPython interpreter, one common gotcha is that pasting multi-line code blocks can lead to unexpected errors, especially when indentation and interpreter markers are involved.\nA common case is that you find some example code on a website and want to paste it into your interpreter.\nConsider the following simple function:\n\n``` python\n>>> def donothing(x):\n... return x\n\n```\nThe code is formatted as it would appear in the Python interpreter, and if you copy and paste this directly into IPython you get an error:\n\n```ipython\nIn [2]: >>> def donothing(x):\n ...: ... return x\n ...: \n File \"<ipython-input-20-5a66c8964687>\", line 2\n ... return x\n ^\nSyntaxError: invalid syntax\n```\n\nIn the direct paste, the interpreter is confused by the additional prompt characters.\nBut never fear–IPython's ``%paste`` magic function is designed to handle this exact type of multi-line, marked-up input:\n\n```ipython\nIn [3]: %paste\n>>> def donothing(x):\n... return x\n\n## -- End pasted text --\n```\n\nThe ``%paste`` command both enters and executes the code, so now the function is ready to be used:\n\n```ipython\nIn [4]: donothing(10)\nOut[4]: 10\n```\n\nA command with a similar intent is ``%cpaste``, which opens up an interactive multiline prompt in which you can paste one or more chunks of code to be executed in a batch:\n\n```ipython\nIn [5]: %cpaste\nPasting code; enter '--' alone on the line to stop or use Ctrl-D.\n:>>> def donothing(x):\n:... 
return x\n:--\n```\n\nThese magic commands, like others we'll see, make available functionality that would be difficult or impossible in a standard Python interpreter.", "_____no_output_____" ], [ "## Running External Code: ``%run``\nAs you begin developing more extensive code, you will likely find yourself working in both IPython for interactive exploration, as well as a text editor to store code that you want to reuse.\nRather than running this code in a new window, it can be convenient to run it within your IPython session.\nThis can be done with the ``%run`` magic.\n\nFor example, imagine you've created a ``myscript.py`` file with the following contents:\n\n```python\n#-------------------------------------\n# file: myscript.py\n\ndef square(x):\n    \"\"\"square a number\"\"\"\n    return x ** 2\n\nfor N in range(1, 4):\n    print(N, \"squared is\", square(N))\n```\n\nYou can execute this from your IPython session as follows:\n\n```ipython\nIn [6]: %run myscript.py\n1 squared is 1\n2 squared is 4\n3 squared is 9\n```\n\nNote also that after you've run this script, any functions defined within it are available for use in your IPython session:\n\n```ipython\nIn [7]: square(5)\nOut[7]: 25\n```\n\nThere are several options to fine-tune how your code is run; you can see the documentation in the normal way, by typing **``%run?``** in the IPython interpreter.", "_____no_output_____" ], [ "## Timing Code Execution: ``%timeit``\nAnother example of a useful magic function is ``%timeit``, which will automatically determine the execution time of the single-line Python statement that follows it.\nFor example, we may want to check the performance of a list comprehension:\n\n```ipython\nIn [8]: %timeit L = [n ** 2 for n in range(1000)]\n1000 loops, best of 3: 325 µs per loop\n```\n\nThe benefit of ``%timeit`` is that for short commands it will automatically perform multiple runs in order to attain more robust results.\nFor multi-line statements, adding a second ``%`` sign will turn this into a cell magic that can handle multiple lines of input.\nFor example, here's the equivalent construction with a ``for``-loop:\n\n```ipython\nIn [9]: %%timeit\n   ...: L = []\n   ...: for n in range(1000):\n   ...:     L.append(n ** 2)\n   ...: \n1000 loops, best of 3: 373 µs per loop\n```\n\nWe can immediately see that list comprehensions are about 10% faster than the equivalent ``for``-loop construction in this case.\nWe'll explore ``%timeit`` and other approaches to timing and profiling code in [Profiling and Timing Code](01.07-Timing-and-Profiling.ipynb).", "_____no_output_____" ], [ "## Help on Magic Functions: ``?``, ``%magic``, and ``%lsmagic``\n\nLike normal Python functions, IPython magic functions have docstrings, and this useful\ndocumentation can be accessed in the standard manner.\nSo, for example, to read the documentation of the ``%timeit`` magic simply type this:\n\n```ipython\nIn [10]: %timeit?\n```\n\nDocumentation for other functions can be accessed similarly.\nTo access a general description of available magic functions, including some examples, you can type this:\n\n```ipython\nIn [11]: %magic\n```\n\nFor a quick and simple list of all available magic functions, type this:\n\n```ipython\nIn [12]: %lsmagic\n```\n\nFinally, I'll mention that it is quite straightforward to define your own magic functions if you wish.\nWe won't discuss it here, but if you are interested, see the references listed in [More IPython Resources](01.08-More-IPython-Resources.ipynb).", "_____no_output_____" ], [ "<!--NAVIGATION-->\n< [Keyboard 
Shortcuts in the IPython Shell](01.02-Shell-Keyboard-Shortcuts.ipynb) | [Contents](Index.ipynb) | [Input and Output History](01.04-Input-Output-History.ipynb) >", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
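The excerpt above closes by noting that defining your own magic functions is straightforward but out of scope. As a minimal pointer only, the sketch below registers a toy line magic; it must be run inside a live IPython session, and the magic name `shout` is invented purely for illustration.

```python
# Hedged sketch: a user-defined line magic (run inside an IPython session).
from IPython.core.magic import register_line_magic

@register_line_magic
def shout(line):
    """Return the rest of the line upper-cased, e.g. %shout hello -> 'HELLO'."""
    return line.upper()
```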
4a47cfb6f855cf94715ae165036011d12ca2c86d
69,452
ipynb
Jupyter Notebook
PyCitySchools/PyCitySchools_starter.ipynb
Kylee-Grant/pandas-challenge
df4fbf5dcad79f206483342c597930f06e1de598
[ "ADSL" ]
null
null
null
PyCitySchools/PyCitySchools_starter.ipynb
Kylee-Grant/pandas-challenge
df4fbf5dcad79f206483342c597930f06e1de598
[ "ADSL" ]
null
null
null
PyCitySchools/PyCitySchools_starter.ipynb
Kylee-Grant/pandas-challenge
df4fbf5dcad79f206483342c597930f06e1de598
[ "ADSL" ]
null
null
null
36.267363
202
0.433076
[ [ [ "### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.", "_____no_output_____" ] ], [ [ "# Dependencies and Setup\nimport pandas as pd\n\n# File to Load \nschool_data_to_load = \"Resources/schools_complete.csv\"\nstudent_data_to_load = \"Resources/students_complete.csv\"\n\n# Read School and Student Data File and store into Pandas DataFrames\nschool_data = pd.read_csv(school_data_to_load)\nstudent_data = pd.read_csv(student_data_to_load)\n\n# Combine the data into a single dataset. \nschool_data_complete = pd.merge(student_data, school_data, how=\"left\", on=[\"school_name\", \"school_name\"])", "_____no_output_____" ] ], [ [ "## District Summary\n\n* Calculate the total number of schools\n\n* Calculate the total number of students\n\n* Calculate the total budget\n\n* Calculate the average math score \n\n* Calculate the average reading score\n\n* Calculate the percentage of students with a passing math score (70 or greater)\n\n* Calculate the percentage of students with a passing reading score (70 or greater)\n\n* Calculate the percentage of students who passed math **and** reading (% Overall Passing)\n\n* Create a dataframe to hold the above results\n\n* Optional: give the displayed data cleaner formatting", "_____no_output_____" ] ], [ [ "# Perform all variable calculations needed to fill out the data frame\n\n# Calculate the total number of schools\nschool_total = len(school_data_complete[\"school_name\"].unique())\n\n# Calculate the total number of students\nstudent_total = len(school_data_complete[\"Student ID\"].unique())\n\n# Calculate the total budget\nbudget_total = sum(school_data_complete[\"budget\"].unique())\n\n# Calculate the average math score\naverage_math = school_data_complete[\"math_score\"].mean()\n\n# Calculate the average reading score\naverage_reading = school_data_complete[\"reading_score\"].mean()\n\n# Calculate the percentage of students with a passing math score (70 or greater)\npass_math = 100 * (len(school_data_complete.loc[school_data_complete[\"math_score\"] >= 70]) / student_total)\n\n# Calculate the percentage of students with a passing reading score (70 or greater)\npass_reading = 100 * (len(school_data_complete.loc[school_data_complete[\"reading_score\"] >= 70]) / student_total)\n\n# Calculate the percentage of students who passed math and reading (% Overall Passing)\npass_total = 100 * (len(school_data_complete.loc[(school_data_complete[\"math_score\"] >= 70)&(school_data_complete[\"reading_score\"]>= 70)]) / student_total) \n", "_____no_output_____" ], [ "# Create a dataframe to hold the above results.\nDistrict_df = pd.DataFrame({\n \"Total Schools\": school_total,\n \"Total Students\": student_total,\n \"Total Budget\": budget_total,\n \"Average Math Score\": average_math,\n \"Average Reading Score\": average_reading,\n \"% Passing Math\": [pass_math],\n \"% Passing Reading\": [pass_reading],\n \"% Overall Passing\": [pass_total]\n})", "_____no_output_____" ], [ "# Use Map to format the Total Students and Total Budget columns, as shown in example\nDistrict_df[\"Total Students\"] = District_df[\"Total Students\"].map(\"{:,d}\".format)\nDistrict_df[\"Total Budget\"] = District_df[\"Total Budget\"].map(\"${:,.2f}\".format)", "_____no_output_____" ], [ "# Show our final district summary\nDistrict_df", "_____no_output_____" ] ], [ [ "## School Summary", "_____no_output_____" ], [ "* Create an overview table that summarizes key metrics about each 
school, including:\n * School Name\n * School Type\n * Total Students\n * Total School Budget\n * Per Student Budget\n * Average Math Score\n * Average Reading Score\n * % Passing Math\n * % Passing Reading\n * % Overall Passing (The percentage of students that passed math **and** reading.)\n \n* Create a dataframe to hold the above results", "_____no_output_____" ] ], [ [ "# Perform all variable calculations needed to fill out the data frame\n\n# Create an overview table that summarizes key metrics about each school\nschool_groupby = school_data_complete.groupby(\"school_name\")\n\n# School types of each school\nschool_type = school_groupby[\"type\"].unique()\nschool_type = school_type.str[0] # Extracting from brackets\n\n# Pull the total students by school\nschool_students = school_groupby[\"size\"].unique()\nschool_students = school_students.str[0].astype('int32') # Extracting from brackets and choosing data type\n\n# Calculate the total budget by school\nschool_budget = school_groupby[\"budget\"].unique()\nschool_budget = school_budget.str[0].astype('float') # Extracting from brackets and choosing data type\n\n# Per student budget by school\nper_student_budget = school_budget/school_students\n\n# Calculate the average math score by school\nschool_avg_math = school_groupby[\"math_score\"].mean()\n\n# Calculate the average reading score by school\nschool_avg_reading = school_groupby[\"reading_score\"].mean()\n\n# Calculate the percentage of students with a passing math score (70 or greater) by school \n# Resources: http://bit.ly/3bKcSXs and http://bit.ly/2KmubCR\nschool_math_temp = school_data_complete.loc[school_data_complete[\"math_score\"] >= 70, \"school_name\"]\nschool_math_pass = 100 * (school_math_temp.groupby(school_math_temp).size() / school_students)\n\n# Calculate the percentage of students with a passing reading score (70 or greater) by school\nschool_reading_temp = school_data_complete.loc[school_data_complete[\"reading_score\"] >= 70, \"school_name\"]\nschool_reading_pass = 100 * (school_reading_temp.groupby(school_reading_temp).size() / school_students)\n\n# Calculate the percentage of students who passed math and reading (% Overall Passing) by school. 
\n#Resource: http://bit.ly/3imB4Ri\nschool_overall_temp = school_data_complete.loc[((school_data_complete.math_score >= 70) & (school_data_complete.reading_score >= 70)), \"school_name\"]\nschool_overall_pass = 100 * (school_overall_temp.groupby(school_overall_temp).size() / school_students)\n", "_____no_output_____" ], [ "# Create a dataframe to hold the above results\nSchool_df = pd.DataFrame({\n \"School Type\": school_type,\n \"Total Students\": school_students,\n \"Total School Budget\": school_budget,\n \"Per Student Budget\": per_student_budget,\n \"Average Math Score\": school_avg_math,\n \"Average Reading Score\": school_avg_reading,\n \"% Passing Math\": school_math_pass,\n \"% Passing Reading\": school_reading_pass,\n \"% Overall Passing\": school_overall_pass \n})\n\n# Rename axis\nSchool_df = School_df.rename_axis(\"School Name\")", "_____no_output_____" ], [ "# Use Map to format the Total Students, Total Budget columns, as shown in example\nSchool_df[\"Total School Budget\"] = School_df[\"Total School Budget\"].map(\"${:,.2f}\".format)\nSchool_df[\"Per Student Budget\"] = School_df[\"Per Student Budget\"].map(\"${:,.2f}\".format)", "_____no_output_____" ], [ "# Show our summary by school\nSchool_df", "_____no_output_____" ] ], [ [ "## Top Performing Schools (By % Overall Passing)", "_____no_output_____" ], [ "* Sort and display the top five performing schools by % overall passing.", "_____no_output_____" ] ], [ [ "# Sorting the DataFrame based on \"% Overall Passing\" column\nSchool_df = School_df.sort_values(\"% Overall Passing\", ascending=False)\nSchool_df.head(5)", "_____no_output_____" ] ], [ [ "## Bottom Performing Schools (By % Overall Passing)", "_____no_output_____" ], [ "* Sort and display the five worst-performing schools by % overall passing.", "_____no_output_____" ] ], [ [ "# Sorting the DataFrame based on \"% Overall Passing\" column\nSchool_df = School_df.sort_values(\"% Overall Passing\", ascending=True)\nSchool_df.head(5)", "_____no_output_____" ] ], [ [ "## Math Scores by Grade", "_____no_output_____" ], [ "* Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school.\n\n * Create a pandas series for each grade. Hint: use a conditional statement.\n \n * Group each series by school\n \n * Combine the series into a dataframe\n \n * Optional: give the displayed data cleaner formatting", "_____no_output_____" ] ], [ [ "# Create a pandas series for each grade. We will use these for this problem and the next. \ngrade_9 = school_data_complete.loc[school_data_complete[\"grade\"] == \"9th\"]\ngrade_10 = school_data_complete.loc[school_data_complete[\"grade\"] == \"10th\"]\ngrade_11 = school_data_complete.loc[school_data_complete[\"grade\"] == \"11th\"]\ngrade_12 = school_data_complete.loc[school_data_complete[\"grade\"] == \"12th\"]\n\n# Group each series by school and calculate avg for all columns. 
\ngrade_9 = grade_9.groupby(\"school_name\").mean()\ngrade_10 = grade_10.groupby(\"school_name\").mean()\ngrade_11 = grade_11.groupby(\"school_name\").mean()\ngrade_12 = grade_12.groupby(\"school_name\").mean()", "_____no_output_____" ], [ "# Combine the series into a dataframe, indexing specifically for the math scores\nGrades_Math_df = pd.DataFrame({\n \"9th\": grade_9[\"math_score\"],\n \"10th\": grade_10[\"math_score\"],\n \"11th\": grade_11[\"math_score\"],\n \"12th\": grade_12[\"math_score\"]\n})\n\n# Rename axis\nGrades_Math_df = Grades_Math_df.rename_axis(\"School Name\")\n\n# Show table that lists the average Math Score for students of each grade level by school.\nGrades_Math_df\n", "_____no_output_____" ] ], [ [ "## Reading Score by Grade ", "_____no_output_____" ], [ "* Perform the same operations as above for reading scores", "_____no_output_____" ] ], [ [ "# All of the work in the prior table does not need to be repeated. Please refer to the math score setup above. \n\n# Combine the series into a dataframe, indexing specifically for the reading scores\nGrades_Reading_df = pd.DataFrame({\n \"9th\": grade_9[\"reading_score\"],\n \"10th\": grade_10[\"reading_score\"],\n \"11th\": grade_11[\"reading_score\"],\n \"12th\": grade_12[\"reading_score\"]\n})\n\n# Rename axis\nGrades_Reading_df = Grades_Reading_df.rename_axis(\"School Name\")\n\n# Show table that lists the average Reading Score for students of each grade level by school.\nGrades_Reading_df", "_____no_output_____" ] ], [ [ "## Scores by School Spending", "_____no_output_____" ], [ "* Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:\n * Average Math Score\n * Average Reading Score\n * % Passing Math\n * % Passing Reading\n * Overall Passing Rate (Average of the above two)", "_____no_output_____" ] ], [ [ "# Create the bins for the spending ranges \nspending_bins = [0, 584.99, 629.99, 644.99, 680]\n\n# Create the labels for the bins\nspending_labels = [\"<$585\", \"$585-630\", \"$630-645\", \"$645-680\"]", "_____no_output_____" ], [ "# Copy over my School_df from the previous section\nSpending_df = School_df\n\n# Binning the information, calling upon the per_student_budget variable calculated earlier \nSpending_df[\"Spending Ranges (Per Student)\"] = pd.cut(per_student_budget, spending_bins, labels=spending_labels, right=False)", "_____no_output_____" ], [ "# Use groupby on the data frame to calculate the mean values needed \nspending_mathscore = Spending_df.groupby(\"Spending Ranges (Per Student)\").mean()[\"Average Math Score\"]\nspending_readscore = Spending_df.groupby(\"Spending Ranges (Per Student)\").mean()[\"Average Reading Score\"]\nspending_passmath = Spending_df.groupby(\"Spending Ranges (Per Student)\").mean()[\"% Passing Math\"]\nspending_passread = Spending_df.groupby(\"Spending Ranges (Per Student)\").mean()[\"% Passing Reading\"]\nspending_passoverall = Spending_df.groupby(\"Spending Ranges (Per Student)\").mean()[\"% Overall Passing\"]", "_____no_output_____" ], [ "# Overwrite data frame with this information \nSpending_df = pd.DataFrame({\n \"Average Math Score\": spending_mathscore,\n \"Average Reading Score\": spending_readscore,\n \"% Passing Math\": spending_passmath,\n \"% Passing Reading\": spending_passread,\n \"% Overall Passing\": spending_passoverall\n})\nSpending_df", "_____no_output_____" ] ], [ [ "## Scores by School Size", "_____no_output_____" ], [ "* 
Perform the same operations as above, based on school size.", "_____no_output_____" ] ], [ [ "# Create the bins for the size ranges \nsize_bins = [0, 999, 1999, 5000]\n\n# Create the labels for the bins\nsize_labels = [\"Small (<1000)\", \"Medium (1000-2000)\", \"Large (2000-5000)\"]", "_____no_output_____" ], [ "# Copy over my School_df from the previous section\nSize_df = School_df\n\n# Binning the information, calling upon the per_student_budget variable calculated earlier \nSize_df[\"School Size\"] = pd.cut(school_students, size_bins, labels=size_labels, right=False)", "_____no_output_____" ], [ "# Use groupby on the data frame to calculate the mean values needed \nsize_mathscore = Size_df.groupby(\"School Size\").mean()[\"Average Math Score\"]\nsize_readscore = Size_df.groupby(\"School Size\").mean()[\"Average Reading Score\"]\nsize_passmath = Size_df.groupby(\"School Size\").mean()[\"% Passing Math\"]\nsize_passread = Size_df.groupby(\"School Size\").mean()[\"% Passing Reading\"]\nsize_passoverall = Size_df.groupby(\"School Size\").mean()[\"% Overall Passing\"]", "_____no_output_____" ], [ "# Overwrite data frame with this information \nSize_df = pd.DataFrame({\n \"Average Math Score\": size_mathscore,\n \"Average Reading Score\": size_readscore,\n \"% Passing Math\": size_passmath,\n \"% Passing Reading\": size_passread,\n \"% Overall Passing\": size_passoverall, \n})\nSize_df", "_____no_output_____" ] ], [ [ "## Scores by School Type", "_____no_output_____" ], [ "* Perform the same operations as above, based on school type", "_____no_output_____" ] ], [ [ "# We don't need to bin because of the school_type variable and column from School_df\n\n# Copy over my School_df from the previous section.\nType_df = School_df\n\n# Use groupby on the data frame to calculate the mean values needed \ntype_mathscore = Type_df.groupby(\"School Type\").mean()[\"Average Math Score\"]\ntype_readscore = Type_df.groupby(\"School Type\").mean()[\"Average Reading Score\"]\ntype_passmath = Type_df.groupby(\"School Type\").mean()[\"% Passing Math\"]\ntype_passread = Type_df.groupby(\"School Type\").mean()[\"% Passing Reading\"]\ntype_passoverall = Type_df.groupby(\"School Type\").mean()[\"% Overall Passing\"]", "_____no_output_____" ], [ "# Overwrite data frame with this information \nType_df = pd.DataFrame({\n \"Average Math Score\": type_mathscore,\n \"Average Reading Score\": type_readscore,\n \"% Passing Math\": type_passmath,\n \"% Passing Reading\": type_passread,\n \"% Overall Passing\": type_passoverall, \n})\nType_df", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
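One behavior worth flagging in the record above: `Spending_df = School_df` (and likewise `Size_df` and `Type_df`) binds a second name to the same DataFrame rather than copying it, so each section's bin column is silently added to `School_df` as well. The notebook still produces the intended tables, but a sketch of the more defensive pattern, which also collapses the five per-column `groupby(...).mean()` calls into a single aggregation, is:

```python
# Hedged sketch: copy before mutating, and aggregate all score columns at once.
score_cols = ["Average Math Score", "Average Reading Score",
              "% Passing Math", "% Passing Reading", "% Overall Passing"]

Spending_df = School_df.copy()  # real copy; School_df stays untouched
Spending_df["Spending Ranges (Per Student)"] = pd.cut(
    per_student_budget, spending_bins, labels=spending_labels, right=False)

spending_summary = Spending_df.groupby("Spending Ranges (Per Student)")[score_cols].mean()
```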
4a47d103d1f14e2027c0b0b696cbc56f2ce01f80
45,608
ipynb
Jupyter Notebook
TV Script Generation/dlnd_tv_script_generation.ipynb
srimanthtenneti/Deep-Learning-NanoDegree
d99b09530a96f4aeca7adf3b9188e1d5fc4104d4
[ "Apache-2.0" ]
1
2021-04-25T08:29:39.000Z
2021-04-25T08:29:39.000Z
TV Script Generation/dlnd_tv_script_generation.ipynb
srimanthtenneti/Deep-Learning-NanoDegree
d99b09530a96f4aeca7adf3b9188e1d5fc4104d4
[ "Apache-2.0" ]
null
null
null
TV Script Generation/dlnd_tv_script_generation.ipynb
srimanthtenneti/Deep-Learning-NanoDegree
d99b09530a96f4aeca7adf3b9188e1d5fc4104d4
[ "Apache-2.0" ]
null
null
null
37.911887
995
0.554947
[ [ [ "# TV Script Generation\n\nIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs.  You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons.  The Neural Network you'll build will generate a new, \"fake\" TV script, based on patterns it recognizes in this training data.\n\n## Get the Data\n\nThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. \n>* As a first step, we'll load in this data and look at some samples. \n* Then, you'll be tasked with defining and training an RNN to generate a new script!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# load in data\nimport helper\ndata_dir = './data/Seinfeld_Scripts.txt'\ntext = helper.load_data(data_dir)", "_____no_output_____" ] ], [ [ "## Explore the Data\nPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\\n`.", "_____no_output_____" ] ], [ [ "view_line_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\n\nlines = text.split('\\n')\nprint('Number of lines: {}'.format(len(lines)))\nword_count_line = [len(line.split()) for line in lines]\nprint('Average number of words in each line: {}'.format(np.average(word_count_line)))\n\nprint()\nprint('The lines {} to {}:'.format(*view_line_range))\nprint('\\n'.join(text.split('\\n')[view_line_range[0]:view_line_range[1]]))", "Dataset Stats\nRoughly the number of unique words: 46367\nNumber of lines: 109233\nAverage number of words in each line: 5.544240293684143\n\nThe lines 0 to 10:\njerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go. \n\njerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother. \n\ngeorge: are you through? \n\njerry: you do of course try on, when you buy? \n\ngeorge: yes, it was purple, i liked it, i dont actually recall considering the buttons. 
\n\n" ] ], [ [ "---\n## Implement Pre-processing Functions\nThe first thing to do to any dataset is pre-processing.  Implement the following pre-processing functions below:\n- Lookup Table\n- Tokenize Punctuation\n\n### Lookup Table\nTo create a word embedding, you first need to transform the words to ids.  In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call `vocab_to_int`\n- Dictionary to go from the id to word, we'll call `int_to_vocab`\n\nReturn these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`", "_____no_output_____" ] ], [ [ "import problem_unittests as tests\n\ndef create_lookup_tables(text):\n    \"\"\"\n    Create lookup tables for vocabulary\n    :param text: The text of tv scripts split into words\n    :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n    \"\"\"\n    # TODO: Implement Function\n    int_to_vocab = {}\n    vocab_to_int = {}\n    for sent in text:\n        for word in sent.split():\n            if word not in vocab_to_int:\n                vocab_to_int[word] = len(vocab_to_int)\n    \n    int_to_vocab = {ii : ch for ch , ii in vocab_to_int.items()}\n    # return tuple\n    return (vocab_to_int, int_to_vocab)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tests Passed\n" ], [ "create_lookup_tables(\"Hi\")", "_____no_output_____" ] ], [ [ "### Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters.  However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, \"bye\" and \"bye!\" would generate two different word ids.\n\nImplement the function `token_lookup` to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\".  Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( **.** )\n- Comma ( **,** )\n- Quotation Mark ( **\"** )\n- Semicolon ( **;** )\n- Exclamation mark ( **!** )\n- Question mark ( **?** )\n- Left Parentheses ( **(** )\n- Right Parentheses ( **)** )\n- Dash ( **-** )\n- Return ( **\\n** )\n\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around it.  This separates each symbol as its own word, making it easier for the neural network to predict the next word.  Make sure you don't use a value that could be confused as a word; for example, instead of using the value \"dash\", try using something like \"||dash||\".", "_____no_output_____" ] ], [ [ "def token_lookup():\n    \"\"\"\n    Generate a dict to turn punctuation into a token.\n    :return: Tokenized dictionary where the key is the punctuation and the value is the token\n    \"\"\"\n    # TODO: Implement Function\n    punct = {'.' : '||dot||' , \n             ',' : '||comma||' , \n             '\"' : '||invcoma||', \n             ';' : '||semicolon||', \n             '!' : '||exclamation_mark||' , \n             '?' : '||question_mark||' , \n             '(' : '||openparanthesys||' , \n             ')' : '||closeparanthesys||' , \n             '-' : '||hyphen||' , \n             '\\n' : '||line_feed||'}\n    return punct\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Tests Passed\n" ] ], [ [ "## Pre-process all the data and save it\n\nRunning the code cell below will pre-process all the data and save it to file. 
You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# pre-process training data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "_____no_output_____" ] ], [ [ "# Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "_____no_output_____" ], [ "token_dict", "_____no_output_____" ] ], [ [ "## Build the Neural Network\nIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.\n\n### Check Access to GPU", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport torch\n\n# Check for a GPU\ntrain_on_gpu = torch.cuda.is_available()\nif not train_on_gpu:\n    print('No GPU found. Please use a GPU to train your neural network.')", "_____no_output_____" ], [ "from torch.utils.data import TensorDataset, DataLoader\n\n\ndef batch_data(words, sequence_length, batch_size):\n    \"\"\"\n    Batch the neural network data using DataLoader\n    :param words: The word ids of the TV scripts\n    :param sequence_length: The sequence length of each batch\n    :param batch_size: The size of each batch; the number of sequences in a batch\n    :return: DataLoader with batched data\n    \"\"\"\n    n_batches = len(words)//batch_size\n    words = words[:batch_size*n_batches]\n    x , y = [] , []\n    for idx in range(0 , len(words) - sequence_length):\n        bx = words[idx:idx+sequence_length]\n        by = words[idx+sequence_length]\n        x.append(bx)\n        y.append(by)\n    \n    x , y = np.array(x) , np.array(y)\n    print(\"Feature Data : \",x[:20])\n    print(\"Target Data : \", y[:20])\n    # TODO: Implement function\n    dataset = TensorDataset(torch.from_numpy(x) , torch.from_numpy(y))\n    # return a dataloader\n    return DataLoader(dataset , shuffle = True , batch_size = batch_size)\n\n# there is no test for this function, but you are encouraged to create\n# print statements and tests of your own\n", "_____no_output_____" ] ], [ [ "### Test your dataloader \n\nYou'll have to modify this code to test a batching function, but it should look fairly similar.\n\nBelow, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.\n\nYour code should return something like the following (likely in a different order, if you shuffled your data):\n\n```\ntorch.Size([10, 5])\ntensor([[ 28,  29,  30,  31,  32],\n        [ 21,  22,  23,  24,  25],\n        [ 17,  18,  19,  20,  21],\n        [ 34,  35,  36,  37,  38],\n        [ 11,  12,  13,  14,  15],\n        [ 23,  24,  25,  26,  27],\n        [  6,   7,   8,   9,  10],\n        [ 38,  39,  40,  41,  42],\n        [ 25,  26,  27,  28,  29],\n        [  7,   8,   9,  10,  11]])\n\ntorch.Size([10])\ntensor([ 33,  26,  22,  39,  16,  28,  11,  43,  30,  12])\n```\n\n### Sizes\nYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). 
\n\n### Values\n\nYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.", "_____no_output_____" ] ], [ [ "# test dataloader\n\ntest_text = list(range(50))\nt_loader = batch_data(test_text, sequence_length=5, batch_size=10)\n\ndata_iter = iter(t_loader)\nsample_x, sample_y = data_iter.next()\n\nprint(sample_x.shape)\nprint(sample_x)\nprint()\nprint(sample_y.shape)\nprint(sample_y)", "Feature Data : [[ 0 1 2 3 4]\n [ 1 2 3 4 5]\n [ 2 3 4 5 6]\n [ 3 4 5 6 7]\n [ 4 5 6 7 8]\n [ 5 6 7 8 9]\n [ 6 7 8 9 10]\n [ 7 8 9 10 11]\n [ 8 9 10 11 12]\n [ 9 10 11 12 13]\n [10 11 12 13 14]\n [11 12 13 14 15]\n [12 13 14 15 16]\n [13 14 15 16 17]\n [14 15 16 17 18]\n [15 16 17 18 19]\n [16 17 18 19 20]\n [17 18 19 20 21]\n [18 19 20 21 22]\n [19 20 21 22 23]]\nTarget Data : [ 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24]\ntorch.Size([10, 5])\ntensor([[ 23, 24, 25, 26, 27],\n [ 13, 14, 15, 16, 17],\n [ 28, 29, 30, 31, 32],\n [ 35, 36, 37, 38, 39],\n [ 6, 7, 8, 9, 10],\n [ 14, 15, 16, 17, 18],\n [ 3, 4, 5, 6, 7],\n [ 43, 44, 45, 46, 47],\n [ 25, 26, 27, 28, 29],\n [ 5, 6, 7, 8, 9]])\n\ntorch.Size([10])\ntensor([ 28, 18, 33, 40, 11, 19, 8, 48, 30, 10])\n" ] ], [ [ "---\n## Build the Neural Network\nImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:\n - `__init__` - The initialize function. \n - `init_hidden` - The initialization function for an LSTM/GRU hidden state\n - `forward` - Forward propagation function.\n \nThe initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.\n\n**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.\n\n### Hints\n\n1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`\n2. 
You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:\n\n```\n# reshape into (batch_size, seq_length, output_size)\noutput = output.view(batch_size, -1, self.output_size)\n# get last batch\nout = output[:, -1]\n```", "_____no_output_____" ] ], [ [ "import torch.nn as nn\n\nclass RNN(nn.Module):\n \n def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):\n \"\"\"\n Initialize the PyTorch RNN Module\n :param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)\n :param output_size: The number of output dimensions of the neural network\n :param embedding_dim: The size of embeddings, should you choose to use them \n :param hidden_dim: The size of the hidden layer outputs\n :param dropout: dropout to add in between LSTM/GRU layers\n \"\"\"\n super(RNN, self).__init__()\n # TODO: Implement function\n # set class variables\n self.vocab_size = vocab_size\n self.output_size = output_size\n self.embedding_dim = embedding_dim\n self.hidden_dim = hidden_dim\n self.n_layers = n_layers\n self.dropout_prob = dropout\n # define model layers\n self.embed = nn.Embedding(self.vocab_size , self.embedding_dim)\n self.lstm = nn.LSTM(self.embedding_dim , self.hidden_dim , self.n_layers , batch_first = True , dropout = self.dropout_prob)\n self.linear = nn.Linear(self.hidden_dim , self.output_size)\n \n def forward(self, nn_input, hidden):\n \"\"\"\n Forward propagation of the neural network\n :param nn_input: The input to the neural network\n :param hidden: The hidden state \n :return: Two Tensors, the output of the neural network and the latest hidden state\n \"\"\"\n # TODO: Implement function \n embed_out = self.embed(nn_input)\n lstm_out , hidden_out = self.lstm(embed_out , hidden)\n lstm_out = lstm_out.contiguous().view(-1 , self.hidden_dim)\n \n output = self.linear(lstm_out)\n output = output.view(nn_input.size(0) , -1 , self.output_size)\n output = output[: , -1]\n # return one batch of output word scores and the hidden state\n return output, hidden \n \n def init_hidden(self, batch_size):\n '''\n Initialize the hidden state of an LSTM/GRU\n :param batch_size: The batch_size of the hidden state\n :return: hidden state of dims (n_layers, batch_size, hidden_dim)\n '''\n # Implement function\n weight = next(self.parameters()).data\n # initialize hidden state with zero weights, and move to GPU if available\n if train_on_gpu:\n hidden = (weight.new(self.n_layers,batch_size,self.hidden_dim).zero_().cuda(),\n weight.new(self.n_layers,batch_size,self.hidden_dim).zero_().cuda())\n \n else:\n hidden = (weight.new(self.n_layers,batch_size,self.hidden_dim).zero_(),\n weight.new(self.n_layers,batch_size,self.hidden_dim).zero_())\n return hidden\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_rnn(RNN, train_on_gpu)", "Tests Passed\n" ] ], [ [ "### Define forward and backpropagation\n\nUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:\n```\nloss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)\n```\n\nAnd it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. 
Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.\n\n**If a GPU is available, you should move your data to that GPU device, here.**", "_____no_output_____" ] ], [ [ "def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):\n \"\"\"\n Forward and backward propagation on the neural network\n :param decoder: The PyTorch Module that holds the neural network\n :param decoder_optimizer: The PyTorch optimizer for the neural network\n :param criterion: The PyTorch loss function\n :param inp: A batch of input to the neural network\n :param target: The target output for the batch of input\n :return: The loss and the latest hidden state Tensor\n \"\"\"\n \n # TODO: Implement Function\n # move data to GPU, if available\n if train_on_gpu:\n rnn.cuda()\n inp = inp.cuda()\n target = target.cuda()\n # perform backpropagation and optimization\n h = tuple([w.data for w in hidden])\n optimizer.zero_grad()\n out , h = rnn(inp , h)\n \n loss = criterion(out , target)\n loss.backward()\n nn.utils.clip_grad_norm_(rnn.parameters() , 5)\n optimizer.step()\n\n # return the loss over a batch and the hidden state produced by our model\n return loss.item(), h\n\n# Note that these tests aren't completely extensive.\n# they are here to act as general checks on the expected outputs of your functions\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)", "Tests Passed\n" ] ], [ [ "## Neural Network Training\n\nWith the structure of the network complete and data ready to be fed in the neural network, it's time to train it.\n\n### Train Loop\n\nThe training loop is implemented for you in the `train_decoder` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. 
You'll set this parameter along with other parameters in the next section.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n\ndef train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):\n    batch_losses = []\n    \n    rnn.train()\n\n    print(\"Training for %d epoch(s)...\" % n_epochs)\n    for epoch_i in range(1, n_epochs + 1):\n        \n        # initialize hidden state\n        hidden = rnn.init_hidden(batch_size)\n        \n        for batch_i, (inputs, labels) in enumerate(train_loader, 1):\n            \n            # make sure you iterate over completely full batches, only\n            n_batches = len(train_loader.dataset)//batch_size\n            if(batch_i > n_batches):\n                break\n            \n            # forward, back prop\n            loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)          \n            # record loss\n            batch_losses.append(loss)\n\n            # printing loss stats\n            if batch_i % show_every_n_batches == 0:\n                print('Epoch: {:>4}/{:<4}  Loss: {}\\n'.format(\n                    epoch_i, n_epochs, np.average(batch_losses)))\n                batch_losses = []\n\n    # returns a trained rnn\n    return rnn", "_____no_output_____" ] ], [ [ "### Hyperparameters\n\nSet and train the neural network with the following parameters:\n- Set `sequence_length` to the length of a sequence.\n- Set `batch_size` to the batch size.\n- Set `num_epochs` to the number of epochs to train for.\n- Set `learning_rate` to the learning rate for an Adam optimizer.\n- Set `vocab_size` to the number of unique tokens in our vocabulary.\n- Set `output_size` to the desired size of the output.\n- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.\n- Set `hidden_dim` to the hidden dimension of your RNN.\n- Set `n_layers` to the number of layers/cells in your RNN.\n- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.\n\nIf the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.", "_____no_output_____" ] ], [ [ "# Data params\n# Sequence Length\nsequence_length = 10  # of words in a sequence\n# Batch Size\nbatch_size = 256\n\n# data loader - do not change\ntrain_loader = batch_data(int_text, sequence_length, batch_size)", "Feature Data :  [[ 0  1  2  3  3  3  4  2  1  5]\n [ 1  2  3  3  3  4  2  1  5  6]\n [ 2  3  3  3  4  2  1  5  6  7]\n [ 3  3  3  4  2  1  5  6  7  8]\n [ 3  3  4  2  1  5  6  7  8  9]\n [ 3  4  2  1  5  6  7  8  9 10]\n [ 4  2  1  5  6  7  8  9 10 11]\n [ 2  1  5  6  7  8  9 10 11  6]\n [ 1  5  6  7  8  9 10 11  6 12]\n [ 5  6  7  8  9 10 11  6 12  3]\n [ 6  7  8  9 10 11  6 12  3 13]\n [ 7  8  9 10 11  6 12  3 13  3]\n [ 8  9 10 11  6 12  3 13  3  3]\n [ 9 10 11  6 12  3 13  3  3  3]\n [10 11  6 12  3 13  3  3  3 14]\n [11  6 12  3 13  3  3  3 14 15]\n [ 6 12  3 13  3  3  3 14 15 16]\n [12  3 13  3  3  3 14 15 16 17]\n [ 3 13  3  3  3 14 15 16 17 13]\n [13  3  3  3 14 15 16 17 13 18]]\nTarget Data :  [ 6  7  8  9 10 11  6 12  3 13  3  3  3 14 15 16 17 13 18 19]\n" ], [ "# Training parameters\n# Number of Epochs\nnum_epochs = 15\n# Learning Rate\nlearning_rate = 3e-3 \n\n# Model parameters\n# Vocab size\nvocab_size = len(vocab_to_int)\n# Output size\noutput_size = vocab_size\n# Embedding Dimension\nembedding_dim = 512\n# Hidden Dimension\nhidden_dim = 256\n# Number of RNN Layers\nn_layers = 3\n\n# Show stats for every n number of batches\nshow_every_n_batches = 2000", "_____no_output_____" ] ], [ [ "### Train\nIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. 
In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. \n> **You should aim for a loss less than 3.5.** \n\nYou should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n\n# create model and move to gpu if available\nrnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)\nif train_on_gpu:\n    rnn.cuda()\n\n# defining loss and optimization functions for training\noptimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)\ncriterion = nn.CrossEntropyLoss()\n\n# training the model\ntrained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)\n\n# saving the trained model\nhelper.save_model('./save/trained_rnn', trained_rnn)\nprint('Model Trained and Saved')", "Training for 15 epoch(s)...\nEpoch: 1/15 Loss: 5.912395806312561\n\nEpoch: 2/15 Loss: 5.475835382887046\n\nEpoch: 3/15 Loss: 4.373096769854764\n\nEpoch: 4/15 Loss: 4.092630126782269\n\nEpoch: 5/15 Loss: 3.922724954910125\n\nEpoch: 6/15 Loss: 3.8150751286801543\n\nEpoch: 7/15 Loss: 3.7299998814848143\n\nEpoch: 8/15 Loss: 3.6643555359425575\n\nEpoch: 9/15 Loss: 3.61033548488721\n\nEpoch: 10/15 Loss: 3.5618147862357996\n\nEpoch: 11/15 Loss: 3.523452976672019\n\nEpoch: 12/15 Loss: 3.4942494557227617\n\nEpoch: 13/15 Loss: 3.457214878442607\n\nEpoch: 14/15 Loss: 3.4303908989479446\n\nEpoch: 15/15 Loss: 3.4079759650075707\n\nModel Trained and Saved\n" ] ], [ [ "### Question: How did you decide on your model hyperparameters? \nFor example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?", "_____no_output_____" ], [ "**Answer:** After training I observed that if the sequence_length is large the model converges faster. Also, taking bigger batches improves the model. Coming to the hidden_dim, I chose it to be 256, deciding the value in relation to the embedding dim of 512. Finally, I chose n_layers to be 3, as 2 or 3 is the usual choice. ", "_____no_output_____" ], [ "---\n# Checkpoint\n\nAfter running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport torch\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\ntrained_rnn = helper.load_model('./save/trained_rnn')", "_____no_output_____" ] ], [ [ "## Generate TV Script\nWith the network trained and saved, you'll use it to generate a new, \"fake\" Seinfeld TV script in this section.\n\n### Generate Text\nTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. 
Also note that it uses top-k sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nimport torch.nn.functional as F\n\ndef generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):\n    \"\"\"\n    Generate text using the neural network\n    :param rnn: The PyTorch Module that holds the trained neural network\n    :param prime_id: The word id to start the first prediction\n    :param int_to_vocab: Dict of word id keys to word values\n    :param token_dict: Dict of punctuation tokens keys to punctuation values\n    :param pad_value: The value used to pad a sequence\n    :param predict_len: The length of text to generate\n    :return: The generated text\n    \"\"\"\n    rnn.eval()\n    \n    # create a sequence (batch_size=1) with the prime_id\n    current_seq = np.full((1, sequence_length), pad_value)\n    current_seq[-1][-1] = prime_id\n    predicted = [int_to_vocab[prime_id]]\n    \n    for _ in range(predict_len):\n        if train_on_gpu:\n            current_seq = torch.LongTensor(current_seq).cuda()\n        else:\n            current_seq = torch.LongTensor(current_seq)\n        \n        # initialize the hidden state\n        hidden = rnn.init_hidden(current_seq.size(0))\n        \n        # get the output of the rnn\n        output, _ = rnn(current_seq, hidden)\n        \n        # get the next word probabilities\n        p = F.softmax(output, dim=1).data\n        if(train_on_gpu):\n            p = p.cpu() # move to cpu\n        \n        # use top_k sampling to get the index of the next word\n        top_k = 5\n        p, top_i = p.topk(top_k)\n        top_i = top_i.numpy().squeeze()\n        \n        # select the likely next word index with some element of randomness\n        p = p.numpy().squeeze()\n        word_i = np.random.choice(top_i, p=p/p.sum())\n        \n        # retrieve that word from the dictionary\n        word = int_to_vocab[word_i]\n        predicted.append(word)     \n        \n        # the generated word becomes the next \"current sequence\" and the cycle can continue\n        current_seq = np.roll(current_seq, -1, 1)\n        current_seq[-1][-1] = word_i\n    \n    gen_sentences = ' '.join(predicted)\n    \n    # Replace punctuation tokens\n    for key, token in token_dict.items():\n        ending = ' ' if key in ['\\n', '(', '\"'] else ''\n        gen_sentences = gen_sentences.replace(' ' + token.lower(), key)\n    gen_sentences = gen_sentences.replace('\\n ', '\\n')\n    gen_sentences = gen_sentences.replace('( ', '(')\n    \n    # return all the sentences\n    return gen_sentences", "_____no_output_____" ] ], [ [ "### Generate a New Script\nIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:\n- \"jerry\"\n- \"elaine\"\n- \"george\"\n- \"kramer\"\n\nYou can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)", "_____no_output_____" ] ], [ [ "# run the cell multiple times to get different results!\ngen_length = 400 # modify the length to your preference\nprime_word = 'jerry' # name for starting the script\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\npad_word = helper.SPECIAL_WORDS['PADDING']\ngenerated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)\nprint(generated_script)", "/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:37: UserWarning: RNN module weights are not part of single contiguous chunk of memory. 
This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().\n" ] ], [ [ "#### Save your favorite scripts\n\nOnce you have a script that you like (or find interesting), save it to a text file!", "_____no_output_____" ] ], [ [ "# save script to a text file\nf = open(\"generated_script_1.txt\",\"w\")\nf.write(generated_script)\nf.close()", "_____no_output_____" ] ], [ [ "# The TV Script is Not Perfect\nIt's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue, here is one such example of a few generated lines.\n\n### Example generated script\n\n>jerry: what about me?\n>\n>jerry: i don't have to wait.\n>\n>kramer:(to the sales table)\n>\n>elaine:(to jerry) hey, look at this, i'm a good doctor.\n>\n>newman:(to elaine) you think i have no idea of this...\n>\n>elaine: oh, you better take the phone, and he was a little nervous.\n>\n>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.\n>\n>jerry: oh, yeah. i don't even know, i know.\n>\n>jerry:(to the phone) oh, i know.\n>\n>kramer:(laughing) you know...(to jerry) you don't know.\n\nYou can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally. \n\n# Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save another copy as an HTML file by clicking \"File\" -> \"Download as..\"->\"html\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission. Once you download these files, compress them into one zip file for submission.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
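The `generate` function in the notebook above picks the next word with top-k sampling rather than a plain argmax. Below is a minimal NumPy sketch of that idea on its own; the `scores` array is a made-up stand-in for the softmaxed word scores, and `sample_top_k` is a hypothetical helper, not part of the notebook:

```python
import numpy as np

def sample_top_k(word_scores, top_k=5):
    """Pick the next word id from the k highest-scoring candidates.

    `word_scores` is a 1-D array of non-negative scores over the vocabulary
    (e.g. softmax probabilities); sampling only among the top k keeps the
    output varied without letting very unlikely words through.
    """
    top_ids = np.argsort(word_scores)[-top_k:]                  # indices of the k best words
    top_p = word_scores[top_ids] / word_scores[top_ids].sum()   # renormalize to a distribution
    return np.random.choice(top_ids, p=top_p)

# Example with a toy "vocabulary" of 8 words
scores = np.array([0.01, 0.2, 0.05, 0.3, 0.1, 0.15, 0.09, 0.1])
print(sample_top_k(scores, top_k=3))
```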
4a47dd9445a26dfe461cf736f7408a9bb4ab8881
2,792
ipynb
Jupyter Notebook
00-Python Object and Data Structure Basics/07-Booleans.ipynb
dglynn/python-bootcamp-udemy
649b73eefa1bedcbd40b387076bfe5fc6e15cb27
[ "MIT" ]
null
null
null
00-Python Object and Data Structure Basics/07-Booleans.ipynb
dglynn/python-bootcamp-udemy
649b73eefa1bedcbd40b387076bfe5fc6e15cb27
[ "MIT" ]
null
null
null
00-Python Object and Data Structure Basics/07-Booleans.ipynb
dglynn/python-bootcamp-udemy
649b73eefa1bedcbd40b387076bfe5fc6e15cb27
[ "MIT" ]
null
null
null
16.045977
81
0.454871
[ [ [ "## Booleans\n\nare values that convey true or false statements. \nThese are very important later on when we deal with control flow and logic!", "_____no_output_____" ] ], [ [ "True", "_____no_output_____" ], [ "False", "_____no_output_____" ], [ "type(True)", "_____no_output_____" ], [ "1 > 2", "_____no_output_____" ], [ "1 == 1", "_____no_output_____" ], [ "# None keyword as a placeholder for an object we don't want to assign yet", "_____no_output_____" ], [ "b", "_____no_output_____" ], [ "type(b)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
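The last two cells of the notebook above evaluate a variable `b` that is never assigned in the visible cells. A minimal sketch of the `None`-placeholder pattern the comment refers to (the assignments here are illustrative, not from the original file):

```python
# Use None as a placeholder for an object we don't want to assign yet
b = None
print(type(b))     # <class 'NoneType'>

# ...later, bind the real (boolean) value:
b = 1 > 2
print(b, type(b))  # False <class 'bool'>
```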
4a47e898c7ff26153bc32ca81e1c838f6d0fc017
41,547
ipynb
Jupyter Notebook
05_reg_NN.ipynb
henokyemam/MarchMadness2021
01632281edcc99ee85c7cd364ca9c4272548bf68
[ "MIT" ]
1
2021-03-20T16:20:59.000Z
2021-03-20T16:20:59.000Z
05_reg_NN.ipynb
henokyemam/MarchMadness2021
01632281edcc99ee85c7cd364ca9c4272548bf68
[ "MIT" ]
null
null
null
05_reg_NN.ipynb
henokyemam/MarchMadness2021
01632281edcc99ee85c7cd364ca9c4272548bf68
[ "MIT" ]
null
null
null
36.253927
1,095
0.427973
[ [ [ "import pandas as pd\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.wrappers.scikit_learn import KerasRegressor\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import cross_validate, cross_val_predict\nfrom keras import models\nfrom keras import layers", "_____no_output_____" ], [ "model_data = pd.read_csv('write_data/stage_1/lr_modeling.csv')", "_____no_output_____" ], [ "model_data.head()", "_____no_output_____" ], [ "training = model_data[(model_data['Season'] < 2020) & (model_data['Target_clf'] >0)]", "_____no_output_____" ], [ "training.Target_clf.value_counts()", "_____no_output_____" ], [ "y = training['Target'].values\nX = training.drop(columns=['WTeamID', 'LTeamID', 'Season', 'Target', 'Target_clf']).values", "_____no_output_____" ], [ "# baseline neural networks\ndef baseline_model():\n # create model\n model = Sequential()\n model.add(Dense(15, input_dim=32, kernel_initializer='normal', activation='relu'))\n model.add(Dense(1, kernel_initializer='normal'))\n # Compile model\n model.compile(loss='mean_squared_error', optimizer='adam')\n return model", "_____no_output_____" ], [ "estimator = KerasRegressor(build_fn=baseline_model, epochs=100, batch_size=50, verbose=0)\nkfold = KFold(n_splits=10)\nresults = cross_val_score(estimator, X, y, cv=kfold, scoring='neg_mean_squared_error')\nprint(\"Baseline: %.2f (%.2f) MSE\" % (np.sqrt(-1 * results.mean()), results.std()))", "Baseline: 9.94 (99.65) MSE\n" ], [ "estimators = []\nestimators.append(('std', StandardScaler()))\nestimators.append(('mlp', KerasRegressor(build_fn=baseline_model, epochs=100, batch_size=50, verbose=0)))\npipeline = Pipeline(estimators)\nkfold = KFold(n_splits=10)\nresults = cross_val_score(pipeline, X, y, cv=kfold, scoring='neg_mean_squared_error')\nprint(\"Baseline: %.2f (%.2f) MSE\" % (np.sqrt(-1 * results.mean()), np.sqrt(results.std())))", "Baseline: 9.68 (9.97) MSE\n" ], [ "# baseline neural networks\ndef develop_model():\n model = Sequential()\n model.add(Dense(15, input_dim=32, kernel_initializer='normal', activation='relu'))\n model.add(Dense(5, kernel_initializer='normal', activation='relu'))\n model.add(Dense(1, kernel_initializer='normal'))\n # Compile model\n model.compile(loss='mean_squared_error', optimizer='adam')\n return model", "_____no_output_____" ], [ "def eval_nn(model):\n estimators = []\n estimators.append(('std', StandardScaler()))\n estimators.append(('mlp', KerasRegressor(build_fn=model, epochs=100, batch_size=50, verbose=0)))\n pipeline = Pipeline(estimators)\n kfold = KFold(n_splits=10)\n pipeline.fit(X, y)\n# results = cross_val_score(pipeline, X, y, cv=kfold, scoring='neg_mean_squared_error')\n# mod = cross_val_predict(pipeline, X, y, cv=kfold)\n\n# print('RMSE: ', np.round(np.sqrt(-1 * results.mean())))\n return pipeline\n# return [np.round(np.sqrt(-1 * results.mean())), np.round(np.sqrt(results.std()))], pipeline", "_____no_output_____" ], [ "# data", "_____no_output_____" ], [ "k = 5\nnum_val_samples = len(X_train) // k\nnum_epochs = 100\nmse = []\nrmse = []", "_____no_output_____" ], [ "X_train_ = StandardScaler().fit_transform(X_train)", "_____no_output_____" ], [ "# from keras.metrics import \nfrom keras.metrics import RootMeanSquaredError", "_____no_output_____" ], [ "\ndef build_model():\n model = 
models.Sequential()\n model.add(layers.Dense(15, activation='relu', input_shape=(X_train_.shape[1],)))\n model.add(layers.Dense(5, activation='relu'))\n model.add(layers.Dense(1))\n model.compile(optimizer='adam'\n , loss=\"mean_squared_error\"\n , metrics=[RootMeanSquaredError(name=\"root_mean_squared_error\"\n , dtype=None)])\n return model", "_____no_output_____" ], [ "for i in range(k):\n print('processing fold #', i)\n val_data = X_train_[i * num_val_samples: (i + 1) * num_val_samples]\n val_targets = y_train[i * num_val_samples: (i + 1) * num_val_samples]\n\n partial_train_data = np.concatenate([X_train_[:i * num_val_samples]\n , X_train_[(i + 1) * num_val_samples:]]\n ,axis=0)\n partial_train_targets = np.concatenate([y_train[:i * num_val_samples]\n ,y_train[(i + 1) * num_val_samples:]]\n , axis=0)\n model = build_model()\n model.fit(partial_train_data, partial_train_targets,\n epochs=num_epochs, batch_size=1, verbose=0)\n val_mse, val_rmse = model.evaluate(val_data, val_targets, verbose=0)\n print('mse is: ', val_mse)\n print('rmse is: ', val_rmse)\n mse.append(val_mse)\n rmse.append(val_rmse)", "processing fold # 0\nmse is: 100.31160953700943\nrmse is: 10.015567779541016\nprocessing fold # 1\nmse is: 123.5098450935927\nrmse is: 11.113497734069824\nprocessing fold # 2\nmse is: 97.11131173972315\nrmse is: 9.854507446289062\nprocessing fold # 3\nmse is: 173.0737163364487\nrmse is: 13.155747413635254\nprocessing fold # 4\nmse is: 109.29969685189675\nrmse is: 10.454648971557617\n" ], [ "np.array(rmse).mean()", "_____no_output_____" ], [ "data_test = pd.read_csv('write_data/stage_1/submission_1.csv')\ndata_test = data_test.drop(columns=['WTeamID', 'LTeamID'])\ndata_test.head()", "_____no_output_____" ], [ "data_lr = data_test[['ID']].copy()\ndata_lr['pred_nn'] = model.predict(data_test.drop(columns=['Season', 'ID']))\ndata_lr.head()", "_____no_output_____" ], [ "df_sub = load_dataframe('write_data/stage_1/01_spread_pred.csv')\ndata_linear_predict = df_sub\\\n .withColumn('Season', split(df_sub.ID, '_').getItem(0)) \\\n .withColumn('WTeamID', split(df_sub.ID, '_').getItem(1)) \\\n .withColumn('LTeamID', split(df_sub.ID, '_').getItem(2)) \\\n .toPandas() ", "_____no_output_____" ], [ "compare =data_linear_predict.join(data_lr, on='ID', how='inner')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
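One caveat in the notebook above: the network is trained on features standardized with `StandardScaler` (`X_train_`), but `model.predict` is later called on the raw, unscaled test frame. A hedged sketch of applying the same training-set statistics at prediction time; it reuses the notebook's `X_train`, `data_test`, `data_lr`, and `model` names, and since `X_train` itself comes from an elided train/test-split cell, this is an assumption about that setup rather than the author's exact code:

```python
from sklearn.preprocessing import StandardScaler

# Keep a handle on the fitted scaler instead of calling fit_transform inline
scaler = StandardScaler()
X_train_ = scaler.fit_transform(X_train)

# ... train `model` on X_train_ exactly as in the notebook ...

# Reuse the *training* mean/std when preparing prediction features
X_pred = data_test.drop(columns=['Season', 'ID']).values
data_lr['pred_nn'] = model.predict(scaler.transform(X_pred))
```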
4a48019eb48488506ffa0723f6afd4a89e581034
892,496
ipynb
Jupyter Notebook
Chapter05/CIFAR-10 dataset samples.ipynb
yanak/tensor-flow-deep-learning-tutorial
541bfe8154e4d12a3aeeb12f3d3e0dea634fbee5
[ "Apache-2.0" ]
null
null
null
Chapter05/CIFAR-10 dataset samples.ipynb
yanak/tensor-flow-deep-learning-tutorial
541bfe8154e4d12a3aeeb12f3d3e0dea634fbee5
[ "Apache-2.0" ]
null
null
null
Chapter05/CIFAR-10 dataset samples.ipynb
yanak/tensor-flow-deep-learning-tutorial
541bfe8154e4d12a3aeeb12f3d3e0dea634fbee5
[ "Apache-2.0" ]
null
null
null
2,010.126126
480,618
0.944381
[ [ [ "**[CDS-01]** Import the required modules and set the random seeds.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(20160704)\ntf.set_random_seed(20160704)", "_____no_output_____" ] ], [ [ "**[CDS-02]** Download the CIFAR-10 dataset. The download takes a little while to complete.", "_____no_output_____" ] ], [ [ "%%bash\nmkdir -p /tmp/cifar10_data\ncd /tmp/cifar10_data\ncurl -OL http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz\ntar xzf cifar-10-binary.tar.gz", " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n[... curl progress output trimmed ...]\r100 162M 100 162M 0 0 1357k 0 0:02:02 0:02:02 --:--:-- 1051k\n" ] ], [ [ "**[CDS-03]** Check the downloaded data. Here we use the test set file test_batch.bin.", "_____no_output_____" ] ], [ [ "!ls -lR /tmp/cifar10_data", "/tmp/cifar10_data:\r\ntotal 166072\r\ndrwxr-xr-x. 2 2156 1103 4096 Jun 4 2009 cifar-10-batches-bin\r\n-rw-r--r--. 1 root root 170052171 Jul 3 01:22 cifar-10-binary.tar.gz\r\n\r\n/tmp/cifar10_data/cifar-10-batches-bin:\r\ntotal 180080\r\n-rw-r--r--. 1 2156 1103 61 Jun 4 2009 batches.meta.txt\r\n-rw-r--r--. 1 2156 1103 30730000 Jun 4 2009 data_batch_1.bin\r\n-rw-r--r--. 1 2156 1103 30730000 Jun 4 2009 data_batch_2.bin\r\n-rw-r--r--. 1 2156 1103 30730000 Jun 4 2009 data_batch_3.bin\r\n-rw-r--r--. 1 2156 1103 30730000 Jun 4 2009 data_batch_4.bin\r\n-rw-r--r--. 1 2156 1103 30730000 Jun 4 2009 data_batch_5.bin\r\n-rw-r--r--. 1 2156 1103 88 Jun 4 2009 readme.html\r\n-rw-r--r--. 
1 2156 1103 30730000 Jun 4 2009 test_batch.bin\r\n" ] ], [ [ "**[CDS-04]** Prepare a function that reads the image data and labels from the data files.", "_____no_output_____" ] ], [ [ "def read_cifar10(filename_queue):\n    class CIFAR10Record(object):\n        pass\n    \n    result = CIFAR10Record()\n    label_bytes = 1\n    result.height = 32\n    result.width = 32\n    result.depth = 3\n    image_bytes = result.height * result.width * result.depth\n    record_bytes = label_bytes + image_bytes\n    reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)\n    result.key, value = reader.read(filename_queue)\n    record_bytes = tf.decode_raw(value, tf.uint8)\n    result.label = tf.cast(\n        tf.slice(record_bytes, [0], [label_bytes]), tf.int32)\n    depth_major = tf.reshape(tf.slice(record_bytes, [label_bytes], [image_bytes]),\n                             [result.depth, result.height, result.width])\n    # Convert from [depth, height, width] to [height, width, depth].\n    result.uint8image = tf.transpose(depth_major, [1, 2, 0])\n\n    return result", "_____no_output_____" ] ], [ [ "**[CDS-04]** Display eight sample images for each label.", "_____no_output_____" ] ], [ [ "sess = tf.InteractiveSession()\nfilename = '/tmp/cifar10_data/cifar-10-batches-bin/test_batch.bin'\nq = tf.FIFOQueue(99, [tf.string], shapes=())\nq.enqueue([filename]).run(session=sess)\nq.close().run(session=sess)\nresult = read_cifar10(q)\n\nsamples = [[] for l in range(10)]\nwhile(True):\n    label, image = sess.run([result.label, result.uint8image])\n    label = label[0]\n    if len(samples[label]) < 8:\n        samples[label].append(image)\n    if all([len(samples[l]) >= 8 for l in range(10)]):\n        break\n    \nfig = plt.figure(figsize=(8,10))\nfor l in range(10):\n    for c in range(8):\n        subplot = fig.add_subplot(10, 8, l*8+c+1)\n        subplot.set_xticks([])\n        subplot.set_yticks([])\n        image = samples[l][c]\n        subplot.imshow(image.astype(np.uint8))\n        \nsess.close()", "_____no_output_____" ] ], [ [ "**[CDS-05]** Prepare a function that generates preprocessed versions of an image.", "_____no_output_____" ] ], [ [ "def distorted_samples(image):\n\n    reshaped_image = tf.cast(image, tf.float32)\n    width, height = 24, 24\n    float_images = []\n\n    resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image,\n                                                           width, height)\n    float_image = tf.image.per_image_whitening(resized_image)\n    float_images.append(float_image)\n\n    for _ in range(6):\n        distorted_image = tf.random_crop(reshaped_image, [height, width, 3])\n        distorted_image = tf.image.random_flip_left_right(distorted_image)\n        distorted_image = tf.image.random_brightness(distorted_image,\n                                                     max_delta=63)\n        distorted_image = tf.image.random_contrast(distorted_image,\n                                                   lower=0.2, upper=1.8)\n        float_image = tf.image.per_image_whitening(distorted_image)\n        float_images.append(float_image)\n\n    return tf.concat(0,float_images)", "_____no_output_____" ] ], [ [ "**[CDS-06]** For each label, display the original image together with its preprocessed versions.", "_____no_output_____" ] ], [ [ "sess = tf.InteractiveSession()\nfilename = '/tmp/cifar10_data/cifar-10-batches-bin/test_batch.bin'\nq = tf.FIFOQueue(99, [tf.string], shapes=())\nq.enqueue([filename]).run(session=sess)\nq.close().run(session=sess)\nresult = read_cifar10(q)\n\nfig = plt.figure(figsize=(8,10))\nc = 0\noriginal = {}\nmodified = {}\n\nwhile len(original.keys()) < 10:\n    label, orig, dists = sess.run([result.label,\n                                   result.uint8image,\n                                   distorted_samples(result.uint8image)])\n    label = label[0]\n    if not label in original.keys():\n        original[label] = orig\n        modified[label] = dists\n\nfor l in range(10):\n    orig, dists = original[l], modified[l]\n    c += 1\n    subplot = fig.add_subplot(10, 8, c)\n    subplot.set_xticks([])\n    subplot.set_yticks([])\n    
subplot.imshow(orig.astype(np.uint8))\n\n for i in range(7):\n c += 1\n subplot = fig.add_subplot(10, 8, c)\n subplot.set_xticks([])\n subplot.set_yticks([])\n pos = i*24\n image = dists[pos:pos+24]*40+120\n subplot.imshow(image.astype(np.uint8))\n \nsess.close()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
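`tf.image.per_image_whitening`, used in the notebook above, comes from an old TensorFlow release (later versions renamed it `per_image_standardization`). A NumPy sketch of what the operation does, assuming TensorFlow's documented adjusted-stddev formula:

```python
import numpy as np

def per_image_standardization(image):
    """Scale one image to zero mean and (roughly) unit variance.

    The stddev is floored at 1/sqrt(N) so a constant image does not
    cause a division by zero -- mirroring TF's documented behavior.
    """
    image = image.astype(np.float64)
    adjusted_std = max(image.std(), 1.0 / np.sqrt(image.size))
    return (image - image.mean()) / adjusted_std

img = np.random.randint(0, 256, size=(24, 24, 3))
out = per_image_standardization(img)
print(round(out.mean(), 6), round(out.std(), 6))  # ~0.0 and ~1.0
```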
4a481f4b2fa9c274d4edf4b1588fbe92eed11c33
272,696
ipynb
Jupyter Notebook
.ipynb_checkpoints/CoreML Conversion MacOS test-checkpoint.ipynb
jb-apps/U-2-Net
c8016e05202853b0203d32254f49e44de908ea60
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/CoreML Conversion MacOS test-checkpoint.ipynb
jb-apps/U-2-Net
c8016e05202853b0203d32254f49e44de908ea60
[ "Apache-2.0" ]
null
null
null
.ipynb_checkpoints/CoreML Conversion MacOS test-checkpoint.ipynb
jb-apps/U-2-Net
c8016e05202853b0203d32254f49e44de908ea60
[ "Apache-2.0" ]
1
2021-05-19T03:46:00.000Z
2021-05-19T03:46:00.000Z
1,490.142077
184,280
0.959739
[ [ [ "# Initialise packages \nfrom model import U2NET\nimport coremltools as ct\nfrom coremltools.proto import FeatureTypes_pb2 as ft\nimport torch\nimport os\nfrom PIL import Image\nfrom torchvision import transforms\nfrom skimage import io, transform", "_____no_output_____" ], [ "# Re-open model for modification and append new output layers.\nmodel = ct.models.MLModel(\"updated_model.mlmodel\")", "_____no_output_____" ], [ "# Create a test input.\n\n# Specify an image as input here.\n# Load with PIL: the resize((320,320)) call and the .size attribute used\n# below (and the printed output) rely on a PIL Image, not a skimage array.\noriginal_image = Image.open(\"test_data/test_images/0002-01.jpg\")\n# original_image = io.imread(\"test_data/test_images/0002-01.jpg\")\ninput_image = original_image.resize((320,320))\nprint(input_image)\n\ndisplay(input_image)", "<PIL.Image.Image image mode=RGB size=320x320 at 0x7FDB81DE6090>\n" ], [ "# Test our model.\nout_dict = model.predict({'in_0': input_image})\nprint(len(out_dict))", "7\n" ], [ "im = out_dict['out_p0']\nim", "_____no_output_____" ], [ "imo = im.resize((original_image.size[0],original_image.size[1]),resample=Image.BILINEAR)\nprint(original_image.size, imo.size)\nimo", "(800, 657) (800, 657)\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
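The `model.predict` call above relies on the converted model exposing an input named `'in_0'` and outputs like `'out_p0'`. A small sketch for confirming those feature names by inspecting the model spec, assuming the same `updated_model.mlmodel` file is on disk:

```python
import coremltools as ct

model = ct.models.MLModel("updated_model.mlmodel")
spec = model.get_spec()

# Print the feature names the Core ML runtime expects
for inp in spec.description.input:
    print("input: ", inp.name)
for out in spec.description.output:
    print("output:", out.name)
```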
4a482e97877c552eef78ecd9805d75b6ea47887c
6,829
ipynb
Jupyter Notebook
python/figure_notebooks/suppfig6_7.ipynb
shirleyswirley/skipjack-bigeye-separation
91fcda6a0721262c72e90a00e9b732b31e31ae9f
[ "MIT" ]
2
2019-11-26T23:00:11.000Z
2020-02-08T01:22:03.000Z
python/figure_notebooks/suppfig6_7.ipynb
shirleyswirley/skipjack-bigeye-separation
91fcda6a0721262c72e90a00e9b732b31e31ae9f
[ "MIT" ]
null
null
null
python/figure_notebooks/suppfig6_7.ipynb
shirleyswirley/skipjack-bigeye-separation
91fcda6a0721262c72e90a00e9b732b31e31ae9f
[ "MIT" ]
null
null
null
39.935673
87
0.524235
[ [ [ "# - Decide which map to plot\n# in main notebook code\n#mapvarnow = 'skj' # choose: skj, bet", "_____no_output_____" ], [ "# - Define constant plot params\nstipsizenow = 10; stipmarknow = 'o'\nstipfacecolnow = 'none'\nstipedgeltcolnow = 'whitesmoke'\nstipewnow = 0.8 # marker edge width\n\neezfcnow = 'none'; eezlcnow = 'lightgray' #'silver'\neezlsnow = '-'; eezlwnow = 0.9", "_____no_output_____" ], [ "# - Define subplot-variable plot params\nif mapvarnow=='skj':\n fignamenow = 'S6_fig'\n unitsnow = 8*['[metric tons/set]']\n mapsnow = [skj_cp_tot_seas.sel(season='DJF'),\n skj_cp_tot_seas.sel(season='DJF')-skj_cp_tot_mean,\n skj_cp_tot_seas.sel(season='MAM'),\n skj_cp_tot_seas.sel(season='MAM')-skj_cp_tot_mean,\n skj_cp_tot_seas.sel(season='JJA'),\n skj_cp_tot_seas.sel(season='JJA')-skj_cp_tot_mean,\n skj_cp_tot_seas.sel(season='SON'),\n skj_cp_tot_seas.sel(season='SON')-skj_cp_tot_mean]\n vmaxsnow = 4*[60, 12]\n vminsnow = 4*[0, -12]\n pvsnow = 8*[skj_cp_tot_seas_kw_pval]\n ptfsnow = 8*[skj_cp_tot_seas_kw_ptf]\n titlesnow = ['SKJ CPUE - Winter','SKJ CPUE - Winter minus mean',\n 'SKJ CPUE - Spring','SKJ CPUE - Spring minus mean',\n 'SKJ CPUE - Summer','SKJ CPUE - Summer minus mean',\n 'SKJ CPUE - Fall','SKJ CPUE - Fall minus mean']\nelif mapvarnow=='bet':\n fignamenow = 'S7_fig'\n unitsnow = 8*['[metric tons/set]']\n mapsnow = [bet_cp_tot_seas.sel(season='DJF'),\n bet_cp_tot_seas.sel(season='DJF')-bet_cp_tot_mean,\n bet_cp_tot_seas.sel(season='MAM'),\n bet_cp_tot_seas.sel(season='MAM')-bet_cp_tot_mean,\n bet_cp_tot_seas.sel(season='JJA'),\n bet_cp_tot_seas.sel(season='JJA')-bet_cp_tot_mean,\n bet_cp_tot_seas.sel(season='SON'),\n bet_cp_tot_seas.sel(season='SON')-bet_cp_tot_mean]\n vmaxsnow = 4*[15, 4]\n vminsnow = 4*[0, -4]\n pvsnow = 8*[bet_cp_tot_seas_kw_pval]\n ptfsnow = 8*[bet_cp_tot_seas_kw_ptf]\n titlesnow = ['BET CPUE - Winter','BET CPUE - Winter minus mean',\n 'BET CPUE - Spring','BET CPUE - Spring minus mean',\n 'BET CPUE - Summer','BET CPUE - Summer minus mean',\n 'BET CPUE - Fall','BET CPUE - Fall minus mean']\n \nstipltdkcosnow = 0.5*np.asarray(vmaxsnow) # light/dark stip cutoff value\nstipedgedkcolsnow = 4*['lightgray', 'darkslategray', None]\nsignifstipsnow = 4*[0, 1, None]\nploteezsnow = 4*[1, 0, None]\n\ncmseqnow = plt.cm.get_cmap('viridis',11)\ncmdivnow = plt.cm.get_cmap('PuOr',11)\ncmsnow = 4*[cmseqnow, cmdivnow]\n\nstipltdkcosnow = 0.5*np.asarray(vmaxsnow) # light/dark stip cutoff value\nstipedgedkcolsnow = 4*['lightgray', 'darkslategray']\n\nsignifstipsnow = 4*[0,1]\nploteezsnow = 4*[1,0]", "_____no_output_____" ], [ "# - Set proj and define axes\nfig,axes = plt.subplots(nrows=4, ncols=2, figsize=(12,10),\n subplot_kw={'projection': ccrs.PlateCarree(central_longitude=200)})\n\n# - Make maps pretty + plot\nisp = 0\nfor irow in range(4):\n for icol in range(2):\n ax = axes[irow][icol]\n exec(open('helper_scripts/create_map_bgs.py').read())\n ax.text(-0.08, 1.08, string.ascii_uppercase[isp],\n transform=ax.transAxes, size=16, weight='bold') \n \n mapsnow[isp].plot(\n ax=ax, transform=ccrs.PlateCarree(), cmap=cmsnow[isp],\n vmin=vminsnow[isp], vmax=vmaxsnow[isp],\n cbar_kwargs={'pad': 0.02, 'label': unitsnow[isp]})\n \n if ploteezsnow[isp]==1:\n nueezs.plot(ax=ax, transform=ccrs.PlateCarree(),\n color=eezfcnow, edgecolor=eezlcnow, linewidth=eezlwnow)\n \n if signifstipsnow[isp]==1:\n [ltcol_signiflonnow,ltcol_signiflatnow]=find_where_pval_small(\n pvsnow[isp].where(abs(mapsnow[isp])>stipltdkcosnow[isp]),\n ptfsnow[isp])\n 
[dkcol_signiflonnow,dkcol_signiflatnow]=find_where_pval_small(\n pvsnow[isp].where(abs(mapsnow[isp])<=stipltdkcosnow[isp]),\n ptfsnow[isp])\n ax.scatter(ltcol_signiflonnow, ltcol_signiflatnow,\n marker=stipmarknow, linewidths=stipewnow,\n facecolors=stipfacecolnow, edgecolors=stipedgeltcolnow,\n s=stipsizenow, transform=ccrs.PlateCarree())\n ax.scatter(dkcol_signiflonnow, dkcol_signiflatnow,\n marker=stipmarknow, linewidths=stipewnow,\n facecolors=stipfacecolnow, edgecolors=stipedgedkcolnow,\n s=stipsizenow, transform=ccrs.PlateCarree())\n \n ax.set_xlabel(''); ax.set_ylabel('')\n ax.set_title(titlesnow[isp])\n isp = isp + 1\n \n\n# - Save fig\nfig.savefig(figpath + fignamenow + '.pdf',\n bbox_inches='tight', pad_inches = 0, dpi = 300) \nfig.savefig(figpath + fignamenow + '.png',\n bbox_inches='tight', pad_inches = 0, dpi = 300) ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
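`find_where_pval_small`, called in the plotting script above, is defined elsewhere in the repository. A hedged sketch of what such a helper plausibly does -- return the lon/lat positions of grid cells whose p-value clears the cutoff -- assuming `pval` is a 2-D xarray DataArray with `lat`/`lon` coordinates (the signature and coordinate names are guesses, not the repo's actual code):

```python
import numpy as np

def find_where_pval_small(pval, ptf):
    """Return (lons, lats) of grid cells where pval < ptf; NaNs never pass."""
    lon2d, lat2d = np.meshgrid(pval['lon'].values, pval['lat'].values)
    vals = pval.values
    mask = ~np.isnan(vals) & (vals < ptf)
    return lon2d[mask], lat2d[mask]
```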
4a483923b75398e649865f22009835c59b8ec171
248,551
ipynb
Jupyter Notebook
python/Extras/Fisica1/simulacion.ipynb
LTGiardino/talleresfifabsas
a711b4425b0811478f21e6c405eeb4a52e889844
[ "MIT" ]
17
2015-10-23T17:14:34.000Z
2021-12-31T02:18:29.000Z
python/Extras/Fisica1/simulacion.ipynb
Fifabsas/TayeresFifabsas
a711b4425b0811478f21e6c405eeb4a52e889844
[ "MIT" ]
5
2016-04-03T23:39:11.000Z
2020-04-03T02:09:02.000Z
python/Extras/Fisica1/simulacion.ipynb
Fifabsas/TayeresFifabsas
a711b4425b0811478f21e6c405eeb4a52e889844
[ "MIT" ]
29
2015-10-16T04:16:01.000Z
2021-09-18T16:55:48.000Z
193.12432
83,416
0.86752
[ [ [ "![fifa](./logo_fifa.png)\n\n# Ejemplo de simulación numérica", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom scipy.integrate import odeint\nfrom matplotlib import rc\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nrc(\"text\", usetex=True)\nrc(\"font\", size=18)\nrc(\"figure\", figsize=(6,4))\nrc(\"axes\", grid=True)", "_____no_output_____" ] ], [ [ "## Problema físico\n\n![esquema](esquema.png)\n\nDefinimos un SR con el origen en el orificio donde el hilo atravieza el plano, la coordenada $\\hat{z}$ apuntando hacia abajo. Con esto sacamos, de la segunda ley de Newton para las particulas:\n\n$$\n\\begin{align}\n\\text{Masa 1)}\\quad&\\vec{F}_1 = m_1 \\vec{a}_1 \\\\\n&-T \\hat{r} = m_1 \\vec{a}_1 \\\\\n&-T \\hat{r} = m_1 \\left\\{ \\left(\\ddot{r} - r \\dot{\\theta}^2\\right) \\hat{r} + \\left(r\\ddot{\\theta} + 2\\dot{r}\\dot{\\theta}\\right)\\hat{\\theta} \\right\\} \\\\\n&\\begin{cases}\n\\hat{r})\\ - T = m_1\\left( \\ddot{r} - r\\, \\dot{\\theta}^2\\right)\\\\\n\\hat{\\theta})\\ 0 = m_1 \\left(r \\ddot{\\theta} + 2 \\dot{r}\\dot{\\theta}\\right)\\\\\n\\end{cases}\\\\\n\\\\\n\\text{Masa 2)}\\quad&\\vec{F}_2 = m_2 \\vec{a}_2 \\\\\n&-T \\hat{z} + m_2 g \\hat{z} = m_2 \\ddot{z} \\hat{z} \\\\\n\\implies & \\boxed{T = m_2 \\left( g - \\ddot{z} \\right)}\\\\\n\\end{align}\n$$\n\nAhora reemplazando este resultado para la tension (que es igual en ambas expresiones) y entendiendo que $\\ddot{z} = -\\ddot{r}$ pues la soga es ideal y de largo constante, podemos rescribir las ecuaciones obtenidas para la masa 1 como:\n\n$$\n\\begin{cases}\n\\hat{r})\\quad - m_2 \\left( g + \\ddot{r} \\right) = m_1\\left( \\ddot{r} - r\\, \\dot{\\theta}^2\\right)\\\\\n\\\\\n\\hat{\\theta})\\quad 0 = m_1 \\left(r \\ddot{\\theta} + 2 \\dot{r}\\dot{\\theta}\\right)\n\\end{cases}\n\\implies\n\\begin{cases}\n\\hat{r})\\quad \\ddot{r} = \\dfrac{- m_2 g + m_1 r \\dot{\\theta}^2}{m_1 + m_2}\\\\\n\\\\\n\\hat{\\theta})\\quad \\ddot{\\theta} = -2 \\dfrac{\\dot{r}\\dot{\\theta}}{r}\\\\\n\\end{cases}\n$$\n", "_____no_output_____" ], [ "La gracia de estos métodos es lograr encontrar una expresión de la forma $y'(x) = f(x,t)$ donde x será la solución buscada, aca como estamos en un sistema de segundo orden en dos variables diferentes ($r$ y $\\theta$) sabemos que nuestra solución va a tener que involucrar 4 componentes. 
It is like the harmonic oscillator, where you have to define an initial position and velocity in order to fully determine the system, only that here we have two of them for $r$ and two for $\\theta$.\n\nYou can see, then, that we are going to need a solution of the form:\n$$\\mathbf{X} = \\begin{pmatrix} r \\\\ \\dot{r}\\\\ \\theta \\\\ \\dot{\\theta} \\end{pmatrix} $$\nAnd therefore\n$$\n\\dot{\\mathbf{X}} = \n\\begin{pmatrix} \\dot{r} \\\\ \\ddot{r}\\\\ \\dot{\\theta} \\\\ \\ddot{\\theta} \\end{pmatrix} =\n\\begin{pmatrix} \\dot{r} \\\\ \\dfrac{-m_2 g + m_1 r \\dot{\\theta}^2}{m_1 + m_2} \\\\ \\dot{\\theta} \\\\ -2 \\dfrac{\\dot{r}\\dot{\\theta}}{r} \\end{pmatrix} =\n\\mathbf{f}(\\mathbf{X}, t)\n$$\n\n---", "_____no_output_____" ], [ "If you like, the evolution of the system can also be written in a neat way, which is nothing more than our dear Taylor expansion to linear order.\n\n$$\n\\begin{align}\n r(t+dt) &= r(t) + \\dot{r}(t)\\cdot dt \\\\\n \\dot{r}(t+dt) &= \\dot{r}(t) + \\ddot{r}(t)\\cdot dt \\\\\n \\theta(t+dt) &= \\theta(t) + \\dot{\\theta}(t)\\cdot dt \\\\\n \\dot{\\theta}(t+dt) &= \\dot{\\theta}(t) + \\ddot{\\theta}(t)\\cdot dt\n\\end{align}\n\\implies\n\\begin{pmatrix}\n r\\\\\n \\dot{r}\\\\\n \\theta\\\\\n \\dot{\\theta}\n\\end{pmatrix}(t + dt) = 
\n\\begin{pmatrix}\n r\\\\\n \\dot{r}\\\\\n \\theta\\\\\n \\dot{\\theta}\n\\end{pmatrix}(t) + \n\\begin{pmatrix}\n \\dot{r}\\\\\n \\ddot{r}\\\\\n \\dot{\\theta}\\\\\n \\ddot{\\theta}\n\\end{pmatrix}(t) \\cdot dt\n$$\n\nHere we have to remember that the computer cannot do continuous things, because that would mean infinitely many calculations, so we necessarily have to discretize time and use a finite time step!\n\n$$\n\\begin{pmatrix}\nr\\\\\n\\dot{r}\\\\\n\\theta\\\\\n\\dot{\\theta}\n\\end{pmatrix}_{i+1} = \n\\begin{pmatrix}\nr\\\\\n\\dot{r}\\\\\n\\theta\\\\\n\\dot{\\theta}\n\\end{pmatrix}_i + \n\\begin{pmatrix}\n\\dot{r}\\\\\n\\ddot{r}\\\\\n\\dot{\\theta}\\\\\n\\ddot{\\theta}\n\\end{pmatrix}_i \\cdot dt\n$$\n\nIf we then decide to call this column vector $\\mathbf{X}$, the system is written as:\n\n$$\n\\mathbf{X}_{i+1} = \\mathbf{X}_i + \\dot{\\mathbf{X}}_i\\ dt\n$$\n\nwhere, once again, $\\dot{\\mathbf{X}}$ is what is written above.\n\nThat is, to find any value we only need to know the previous vector and the derivative, and we already have the derivatives (that is all the physics work we did before)!!\n\n---\n---\n\n\nHowever you think about it, hopefully it is clear that once we have the initial conditions and the differential equations we can already solve (also called *integrate*) the system.", "_____no_output_____" ] ], [ [ "# Problem constants:\nM1 = 3\nM2 = 3\ng = 9.81\n\n# Initial conditions of the problem:\nr0 = 2\nr_punto0 = 0\ntita0 = 0\ntita_punto0 = 1\n\nC1 = (M2*g)/(M1+M2) # Define useful constants\nC2 = (M1)/(M1+M2)\ncond_iniciales = [r0, r_punto0, tita0, tita_punto0]\n\ndef derivada(X, t, c1, c2): # this would be the f in { x' = f(x,t) }\n r, r_punto, tita, tita_punto = X\n deriv = [0, 0, 0, 0] # like the column vector above, but laid out as a row\n \n deriv[0] = r_punto # derivative of r\n deriv[1] = -c1 + c2*r*(tita_punto)**2 # r double-dot\n deriv[2] = tita_punto # derivative of tita\n deriv[3] = -2*r_punto*tita_punto/r\n return deriv\n\n\ndef resuelvo_sistema(m1, m2, tmax = 20):\n t0 = 0\n c1 = (m2*g)/(m1+m2) # Define useful constants\n c2 = (m1)/(m1+m2)\n t = np.arange(t0, tmax, 0.001)\n \n # here we could define our own integration algorithm,\n # or use the ready-made one from scipy. \n # Careful, it is not perfect, eh; sometimes it is better to write your own\n out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))\n\n return [t, out.T]\n\nt, (r, rp, tita, titap) = resuelvo_sistema(M1, M2, tmax=10)\n\nplt.figure()\nplt.plot(t, r/r0, 'r')\nplt.ylabel(r\"$r / r_0$\")\nplt.xlabel(r\"tiempo\")\n# plt.savefig(\"directorio/r_vs_t.pdf\", dpi=300)\n\nplt.figure()\nplt.plot(t, tita-tita0, 'b')\nplt.ylabel(r\"$\\theta - \\theta_0$\")\nplt.xlabel(r\"tiempo\")\n# plt.savefig(\"directorio/tita_vs_t.pdf\", dpi=300)\n\n\nplt.figure()\nplt.plot(r*np.cos(tita-tita0)/r0, r*np.sin(tita-tita0)/r0, 'g')\nplt.ylabel(r\"$r/r_0\\ \\sin\\left(\\theta - \\theta_0\\right)$\")\nplt.xlabel(r\"$r/r_0\\ \\cos\\left(\\theta - \\theta_0\\right)$\")\n# plt.savefig(\"directorio/trayectoria.pdf\", dpi=300)", "_____no_output_____" ] ], [ [ "All very nice!!\n\nBut how can we check whether this is actually working? So far we only know that the result looks reasonable, and eyeballing it is not a quantitative measure.\n\nOne option to check that the algorithm behaves well (and that there are no numerical errors, and that we chose an appropriate integrator, **careful with this one, eh... I am looking at you, Runge-Kutta**) is to check whether the energy is conserved.\n\nRemember that the kinetic energy of the system is $K = \\frac{1}{2} m_1 \\left|\\vec{v}_1 \\right|^2 + \\frac{1}{2} m_2 \\left|\\vec{v}_2 \\right|^2$, be careful with how each velocity is written, and that the potential energy of the system depends only on the height of the hanging ball.\nDo we need to know the length $L$ of the string to check whether the total mechanical energy is conserved? (Spoiler: no. But think about why.)\n\nVerifying that is left to you as an exercise, and you can also experiment with different integration methods to see what happens with each one; below we leave you a little help so you can try them out.", "_____no_output_____" ] ], [ [ "from scipy.integrate import solve_ivp\n\ndef resuelvo_sistema(m1, m2, tmax = 20, metodo='RK45'):\n t0 = 0\n c1 = (m2*g)/(m1+m2) # Define useful constants\n c2 = (m1)/(m1+m2)\n t = np.arange(t0, tmax, 0.001)\n \n # here I use a lambda function, just so we can reuse \n # the same function we defined before. But since now\n # I am going to use another integration routine (not odeint)\n # that expects the function in a different form: instead of\n # f(x, t) this one asks for f(t, x), so we simply have to swap\n # the arguments and nothing more...\n \n deriv_bis = lambda t, x: derivada(x, t, c1, c2)\n out = solve_ivp(fun=deriv_bis, t_span=(t0, tmax), y0=cond_iniciales,\\\n method=metodo, t_eval=t)\n\n return out\n\n# Here I build two lists: one with the available methods and another with colors\nall_metodos = ['RK45', 'RK23', 'Radau', 'BDF', 'LSODA']\nall_colores = ['r', 'b', 'm', 'g', 'c']\n\n# Here is the neat way to loop over two arrays in parallel\nfor met, col in zip(all_metodos, all_colores):\n result = resuelvo_sistema(M1, M2, tmax=30, metodo=met)\n t = result.t\n r, rp, tita, titap = result.y\n plt.plot(t, r/r0, col, label=met)\n \nplt.xlabel(\"tiempo\")\nplt.ylabel(r\"$r / r_0$\")\nplt.legend(loc=3)", "_____no_output_____" ] ], [ [ "Notice how the different methods modify the $r(t)$ curve more and more as the integration steps go by. Your homework is to run the same code with the energy-conservation check.\n\nWhich one is better, why, and how to know it are questions you will have to ask yourselves and investigate if you ever work with this.\n\nFor example, you can look up \"Symplectic Integrator\" on Wikipedia and see what it is about.", "_____no_output_____" ], [ "### Below we also leave you the simulation of the ball's trajectory", "_____no_output_____" ] ], [ [ "from matplotlib import animation\n%matplotlib notebook\n\nresult = resuelvo_sistema(M1, M2, tmax=30, metodo='Radau')\nt = result.t\nr, rp, tita, titap = result.y\n\nfig, ax = plt.subplots()\nax.set_xlim([-1, 1])\nax.set_ylim([-1, 1])\nax.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0, 'm', lw=0.2)\nline, = ax.plot([], [], 'ko', ms=5)\n\nN_SKIP = 50\nN_FRAMES = int(len(r)/N_SKIP)\n\ndef animate(frame_no):\n i = frame_no*N_SKIP\n r_i = r[i]/r0\n tita_i = tita[i]\n line.set_data(r_i*np.cos(tita_i), r_i*np.sin(tita_i))\n return line,\n \nanim = animation.FuncAnimation(fig, animate, frames=N_FRAMES,\n interval=50, blit=False)", "_____no_output_____" ] ], [ [ "Remember that this animation will not stop on its own, eh; we know that watching it puts you in a kind of mystical trance, but remember to stop it once enough time has passed.", "_____no_output_____" ], [ "# Interactive Animation\n\nUsing `ipywidgets` we can add sliders to the animation, to modify the values of the little masses.", "_____no_output_____" ] ], [ [ "from ipywidgets import interactive, interact, FloatProgress\nfrom IPython.display import clear_output, display\n%matplotlib inline\n\n@interact(m1=(0,5,0.5), m2=(0,5,0.5), tmax=(0.01,20,0.5)) # Lets you change the equation's parameters\ndef resuelvo_sistema(m1, m2, tmax = 20):\n t0 = 0\n c1 = (m2*g)/(m1+m2) # Define useful constants\n c2 = (m1)/(m1+m2)\n t = np.arange(t0, tmax, 0.05)\n# out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))\n r, rp, tita, titap = odeint(derivada, cond_iniciales, t, args=(c1, c2,)).T\n plt.xlim((-1,1))\n plt.ylim((-1,1))\n plt.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0,'b-')\n \n# plt.xlabel(\"tiempo\")\n# plt.ylabel(r\"$r / r_0$\")\n# plt.show()\n", "_____no_output_____" ] ] ]
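A quick sketch of the energy check suggested above; this block is an addition, not the author's solution. It assumes the geometry implied by the equations of motion (mass 1 on the table with $\left|\vec{v}_1\right|^2 = \dot{r}^2 + r^2\dot{\theta}^2$, mass 2 hanging and moving with speed $\dot{r}$ at a height that differs from $r$ only by the constant $L$), so up to an additive constant $E = \frac{1}{2}(m_1+m_2)\dot{r}^2 + \frac{1}{2}m_1 r^2\dot{\theta}^2 + m_2 g r$, which is also why $L$ is not needed. It reuses the `solve_ivp` version of `resuelvo_sistema` and the constants `M1`, `M2`, `g` from the notebook:

```python
# Energy-conservation check (sketch, assumes the geometry described above)
result = resuelvo_sistema(M1, M2, tmax=30, metodo='RK45')
t = result.t
r, rp, tita, titap = result.y

K = 0.5 * (M1 + M2) * rp**2 + 0.5 * M1 * (r * titap)**2  # total kinetic energy
U = M2 * g * r                                           # potential energy, up to a constant
E = K + U

# Relative drift: a trustworthy integrator keeps this close to zero
plt.plot(t, (E - E[0]) / abs(E[0]))
plt.xlabel("tiempo")
plt.ylabel("relative energy drift")
```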
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
4a483ae3f1266de9a03b19248a10fdb8f7348d37
188,841
ipynb
Jupyter Notebook
Chapter04.ipynb
takanory/python-machine-learning
6c3a2d7ff02366e10cd5e5ac44c062c4f10e2304
[ "MIT" ]
2
2016-11-04T01:12:54.000Z
2018-03-12T01:43:49.000Z
Chapter04.ipynb
takanory/python-machine-learning
6c3a2d7ff02366e10cd5e5ac44c062c4f10e2304
[ "MIT" ]
null
null
null
Chapter04.ipynb
takanory/python-machine-learning
6c3a2d7ff02366e10cd5e5ac44c062c4f10e2304
[ "MIT" ]
null
null
null
103.815833
83,056
0.808823
[ [ [ "# 4 データ前処理\n\n## 4.1 欠損データへの対処", "_____no_output_____" ] ], [ [ "from IPython.core.display import display\nimport pandas as pd\nfrom io import StringIO\ncsv_data = '''A,B,C,D\n1.0,2.0,3.0,4.0\n5.0,6.0,,8.0\n10.0,11.0,12.0,'''\ndf = pd.read_csv(StringIO(csv_data))\ndf", "_____no_output_____" ], [ "# 各特徴量の欠測値をカウント\ndf.isnull().sum()", "_____no_output_____" ], [ "df.values", "_____no_output_____" ] ], [ [ "### 4.1.1 欠測値を持つサンプル/特徴量を取り除く", "_____no_output_____" ] ], [ [ "# 欠測値を含む行を削除\ndf.dropna()", "_____no_output_____" ], [ "# 欠測値を含む列を削除\ndf.dropna(axis=1)", "_____no_output_____" ], [ "# すべての列がNaNである行だけを削除\ndf.dropna(how='all')\n# 非NaN値が4つ未満の行を削除\ndf.dropna(thresh=4)\n# 特定の列にNaNが含まれている行だけを削除\ndf.dropna(subset=['C'])", "_____no_output_____" ] ], [ [ "### 4.1.2 欠測値を補完する", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import Imputer\n# 欠測値補完のインスタンスを生成(平均値補完)\n# median: 中央値、most_frequent: 最頻値\nimr = Imputer(missing_values='NaN', strategy='mean', axis=0)\n# データを適合\nimr = imr.fit(df)\n# 補完を実行\nimputed_data = imr.transform(df.values)\nimputed_data", "_____no_output_____" ] ], [ [ "## 4.2 カテゴリデータの処理", "_____no_output_____" ] ], [ [ "import pandas as pd\n# サンプルデータを生成\ndf = pd.DataFrame([\n ['green', 'M', 10.1, 'class1'],\n ['red', 'L', 13.5, 'class2'],\n ['blue', 'XL', 15.3, 'class1'],\n])\n# 列名を設定\ndf.columns = ['color', 'size', 'price', 'classlabel']\ndf", "_____no_output_____" ] ], [ [ "### 4.2.1 順序特徴量のマッピング", "_____no_output_____" ] ], [ [ "# Tシャツのサイズと整数を対応させるディクショナリを生成\nsize_mapping = {'XL': 3, 'L': 2, 'M': 1}\n# Tシャツのサイズを整数に変換\ndf['size'] = df['size'].map(size_mapping)\ndf", "_____no_output_____" ], [ "# Tシャツのサイズを文字列に戻す辞書\ninv_size_mapping = {v: k for k, v in size_mapping.items()}\ninv_size_mapping", "_____no_output_____" ] ], [ [ "### 4.2.2 クラスラベルのエンコーディング", "_____no_output_____" ] ], [ [ "import numpy as np\n# クラスラベルと整数を対応させる辞書\nclass_mapping = {label: i for i, label in enumerate(np.unique(df['classlabel']))}\nclass_mapping", "_____no_output_____" ], [ "# クラスラベルを整数に変換\ndf['classlabel'] = df['classlabel'].map(class_mapping)\ndf", "_____no_output_____" ], [ "inv_class_mapping = {v: k for k, v in class_mapping.items()}\n# 整数からクラスラベルに変換\ndf['classlabel'] = df['classlabel'].map(inv_class_mapping)\ndf", "_____no_output_____" ], [ "from sklearn.preprocessing import LabelEncoder\nclass_le = LabelEncoder()\ny = class_le.fit_transform(df['classlabel'].values)\ny", "_____no_output_____" ], [ "class_le.inverse_transform(y)", "_____no_output_____" ] ], [ [ "### 4.2.3 名義特徴量での one-hot エンコーディング", "_____no_output_____" ] ], [ [ "# Tシャツの色、サイズ、価格を抽出\nX = df[['color', 'size', 'price']].values\ncolor_le = LabelEncoder()\nX[:, 0] = color_le.fit_transform(X[:, 0])\nX", "_____no_output_____" ], [ "from sklearn.preprocessing import OneHotEncoder\n# one-hot エンコーダの生成\nohe = OneHotEncoder(categorical_features=[0])\n# one-hot エンコーディングを実行\nohe.fit_transform(X).toarray()", "_____no_output_____" ], [ "# one-hot エンコーディングを実行\npd.get_dummies(df[['price', 'color', 'size']])", "_____no_output_____" ] ], [ [ "## 4.3 データセットをトレーニングデータセットとテストデータセットに分割する", "_____no_output_____" ] ], [ [ "# http://archive.ics.uci.edu/ml/datasets/Wine\ndf_wine = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)\ndisplay(df_wine.head())\n# 列名を設定\ndf_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols',\n 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue',\n 'OD280/OD315 of diluted wines', 
'Proline']\ndisplay(df_wine.head())\nprint('Class labels', np.unique(df_wine['Class label']))", "_____no_output_____" ], [ "from sklearn.cross_validation import train_test_split\n# 特徴量とクラスラベルを別々に抽出\nX, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values\n# 全体の30%をテストデータにする\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)", "/Users/takanori/Private/python-machine-learning/venv/lib/python3.5/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n" ] ], [ [ "## 4.4 特徴量の尺度を揃える", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import MinMaxScaler\n# min-max スケーリングのインスタンスを生成\nmms = MinMaxScaler()\n# トレーニングデータをスケーリング\nX_train_norm = mms.fit_transform(X_train)\n# テストデータをスケーリング\nX_test_norm = mms.transform(X_test)\nX_train, X_train_norm", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\n# 標準化のインスタンスを生成\nstdsc = StandardScaler()\nX_train_std = stdsc.fit_transform(X_train)\nX_test_std = stdsc.transform(X_test)\nX_train_std", "_____no_output_____" ] ], [ [ "## 4.5 有益な特徴量の選択\n\n### 4.5.1 L1 正則化による疎な解", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\n# L1正則化ロジスティック回帰のインスタンスを生成\nLogisticRegression(penalty='l1')\n# L1正則化ロジスティック回帰のインスタンスを生成(逆正則化パラメータ C=0.1)\nlr = LogisticRegression(penalty='l1', C=0.1)\nlr.fit(X_train_std, y_train)\nprint('Training accuracy:', lr.score(X_train_std, y_train))\nprint('Test accuracy:', lr.score(X_test_std, y_test))", "Training accuracy: 0.983870967742\nTest accuracy: 0.981481481481\n" ], [ "# 切片の表示\nlr.intercept_", "_____no_output_____" ], [ "# 重み係数の表示\nlr.coef_", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nfig = plt.figure()\nax = plt.subplot(111)\ncolors = ['blue', 'green', 'red', 'cyan', 'magenta', 'yellow', 'black',\n 'pink', 'lightgreen', 'lightblue', 'gray', 'indigo', 'orange']\n# 空のリストを生成(重み係数、逆正則化パラメータ\nweights, params = [], []\n# 逆正則化パラメータの値ごとに処理\nfor c in np.arange(-4, 6):\n # print(c) # -4~5 \n lr = LogisticRegression(penalty='l1', C=10 ** c, random_state=0)\n lr.fit(X_train_std, y_train)\n weights.append(lr.coef_[1])\n params.append(10 ** c)\n \n# 重み係数をNumPy配列に変換\nweights = np.array(weights)\n# 各重み係数をプロット\n# print(weights.shape[1]) # -> 13\nfor column, color in zip(range(weights.shape[1]), colors):\n plt.plot(params, weights[:, column], label=df_wine.columns[column + 1], color=color)\n \n# y=0 に黒い破線を引く\nplt.axhline(0, color='black', linestyle='--', linewidth=3)\nplt.xlim([10 ** (-5), 10 ** 5])\n# 軸のラベルの設定\nplt.ylabel('weight coefficient')\nplt.xlabel('C')\n# 横軸を対数スケールに設定\nplt.xscale('log')\nplt.legend(loc='upper left')\nax.legend(loc='upper center', bbox_to_anchor=(1.38, 1.03), ncol=1, fancybox=True)\nplt.show()", "_____no_output_____" ] ], [ [ "### 4.5.2 逐次特徴選択アルゴリズム", "_____no_output_____" ] ], [ [ "from sklearn.base import clone\nfrom itertools import combinations\nimport numpy as np\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import accuracy_score\n\nclass SBS():\n \"\"\"\n 逐次後退選択(sequencial backward selection)を実行するクラス\n \"\"\"\n \n def __init__(self, estimator, k_features, scoring=accuracy_score,\n test_size=0.25, random_state=1):\n 
self.scoring = scoring # 特徴量を評価する指標\n self.estimator = clone(estimator) # 推定器\n self.k_features = k_features # 選択する特徴量の個数\n self.test_size = test_size # テストデータの悪愛\n self.random_state = random_state # 乱数種を固定する random_state\n \n def fit(self, X, y):\n # トレーニングデータとテストデータに分割\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=self.test_size,\n random_state=self.random_state)\n #print(len(X_train), len(X_test), len(y_train), len(y_test))\n # 全ての特徴量の個数、列インデックス\n dim = X_train.shape[1]\n self.indices_ = tuple(range(dim))\n self.subsets_ = [self.indices_]\n #print(self.indices_)\n # 全ての特徴量を用いてスコアを算出\n score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)\n # スコアを格納\n self.scores_ = [score]\n # 指定した特徴量の個数になるまで処理を反復\n while dim > self.k_features:\n # 空のリストの生成(スコア、列インデックス)\n scores = []\n subsets = []\n # 特徴量の部分集合を表す列インデックスの組み合わせ毎に処理を反復\n for p in combinations(self.indices_, r=dim - 1):\n # スコアを算出して格納\n score = self._calc_score(X_train, y_train, X_test, y_test, p)\n scores.append(score)\n # 特徴量の部分集合を表す列インデックスのリストを格納\n subsets.append(p)\n \n # 最良のスコアのインデックスを抽出\n best = np.argmax(scores)\n # 最良のスコアとなる列インデックスを抽出して格納\n self.indices_ = subsets[best]\n self.subsets_.append(self.indices_)\n # 特徴量の個数を1つだけ減らして次のステップへ\n dim -= 1\n \n # スコアを格納\n self.scores_.append(scores[best])\n \n # 最後に格納したスコア\n self.k_score_ = self.scores_[-1]\n \n return self\n \n def transform(self, X):\n # 抽出した特徴量を返す\n return X[:, self.indices_]\n \n def _calc_score(self, X_train, y_train, X_test, y_test, indices):\n # 指定された列番号 indices の特徴量を抽出してモデルに適合\n self.estimator.fit(X_train[:, indices], y_train)\n # テストデータを用いてクラスラベルを予測\n y_pred = self.estimator.predict(X_test[:, indices])\n # 真のクラスラベルと予測値を用いてスコアを算出\n score = self.scoring(y_test, y_pred)\n return score", "_____no_output_____" ], [ "from sklearn.neighbors import KNeighborsClassifier\nimport matplotlib.pyplot as plt\nknn = KNeighborsClassifier(n_neighbors=2)\nsbs = SBS(knn, k_features=1)\nsbs.fit(X_train_std, y_train)", "93 31 93 31\n(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)\n" ], [ "# 近傍点の個数のリスト\nk_feat = [len(k) for k in sbs.subsets_]\ndisplay(k_feat)\n# 横軸を近傍店の個数、縦軸をスコアとした折れ線グラフのプロット\nplt.plot(k_feat, sbs.scores_, marker='o')\nplt.ylim([0.7, 1.1])\nplt.ylabel('Accuracy')\nplt.xlabel('Number of features')\nplt.grid()\nplt.show()", "_____no_output_____" ], [ "k5 = list(sbs.subsets_[8])\nprint(k5)\nprint(df_wine.columns[1:][k5])", "[0, 1, 3, 10, 12]\nIndex(['Alcohol', 'Malic acid', 'Alcalinity of ash', 'Hue', 'Proline'], dtype='object')\n" ], [ "# 13個全ての特徴量を用いてモデルに適合\nknn.fit(X_train_std, y_train)\n# トレーニングの正解率を出力\nprint('Training accuracy:', knn.score(X_train_std, y_train))\n# テストの正解率を出力\nprint('Test accuracy:', knn.score(X_test_std, y_test))", "Training accuracy: 0.983870967742\nTest accuracy: 0.944444444444\n" ], [ "# 5個の特徴量を用いてモデルに適合\nknn.fit(X_train_std[:, k5], y_train)\n# トレーニングの正解率を出力\nprint('Training accuracy:', knn.score(X_train_std[:, k5], y_train))\n# テストの正解率を出力\nprint('Test accuracy:', knn.score(X_test_std[:, k5], y_test))", "Training accuracy: 0.959677419355\nTest accuracy: 0.962962962963\n" ] ], [ [ "## 4.6 ランダムフォレストで特徴量の重要度にアクセスする", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestClassifier\n# Wine データセットの特徴量の名所\nfeat_labels = df_wine.columns[1:]\n# ランダムフォレストオブジェクトの生成\n# (木の個数=10,000、すべての怖を用いて並列計算を実行\nforest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)\n# モデルに適合\nforest.fit(X_train, y_train)\n# 特徴量の重要度を抽出\nimportances = forest.feature_importances_\n# 重要度の降順で特徴量のインデックスを抽出\nindices 
= np.argsort(importances)[::-1]\n# 重要度の降順で特徴量の名称、重要度を表示\nfor f in range(X_train.shape[1]):\n print(\"{:2d}) {:<30} {:f}\".format(f + 1, feat_labels[indices[f]], importances[indices[f]]))\n\nplt.title('Feature Importances')\nplt.bar(range(X_train.shape[1]), importances[indices], color='lightblue', align='center')\nplt.xticks(range(X_train.shape[1]), feat_labels[indices], rotation=90)\nplt.xlim([-1, X_train.shape[1]])\nplt.tight_layout()\nplt.show()", " 1) Color intensity 0.182483\n 2) Proline 0.158610\n 3) Flavanoids 0.150948\n 4) OD280/OD315 of diluted wines 0.131987\n 5) Alcohol 0.106589\n 6) Hue 0.078243\n 7) Total phenols 0.060718\n 8) Alcalinity of ash 0.032033\n 9) Malic acid 0.025400\n10) Proanthocyanins 0.022351\n11) Magnesium 0.022078\n12) Nonflavanoid phenols 0.014645\n13) Ash 0.013916\n" ], [ "from sklearn.feature_selection import SelectFromModel\n# 特徴選択オブジェクトの生成(重要度のしきい値を0.15に設定)\nsfm = SelectFromModel(forest, prefit=True, threshold=0.15)\n# 特徴量を抽出\nX_selected = sfm.transform(X_train)\nX_selected.shape", "_____no_output_____" ], [ "for f in range(X_selected.shape[1]):\n print(\"{:2d}) {:<30} {:f}\".format(f + 1, feat_labels[indices[f]], importances[indices[f]]))\n", " 1) Color intensity 0.182483\n 2) Proline 0.158610\n 3) Flavanoids 0.150948\n" ] ] ]
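For reference (this note is an addition, not part of the translated chapter): the two rescalings applied in section 4.4 by `MinMaxScaler` and `StandardScaler` are, per feature,

$$x^{(i)}_{norm} = \frac{x^{(i)} - x_{min}}{x_{max} - x_{min}}, \qquad x^{(i)}_{std} = \frac{x^{(i)} - \mu_x}{\sigma_x}$$

where $x_{min}$, $x_{max}$, $\mu_x$ and $\sigma_x$ are estimated on the training data only, which is why the code above calls `fit_transform` on `X_train` but plain `transform` on `X_test`.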
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a483b23b5476dca1cdae0e3ca8167497c862e93
76,978
ipynb
Jupyter Notebook
assignments/assignment_3/assignment_3_Jason_Wang.ipynb
hua-mike/CYPLAN255
03493a41f51bcd56f587cc088468eccbeafadc05
[ "CNRI-Python" ]
null
null
null
assignments/assignment_3/assignment_3_Jason_Wang.ipynb
hua-mike/CYPLAN255
03493a41f51bcd56f587cc088468eccbeafadc05
[ "CNRI-Python" ]
null
null
null
assignments/assignment_3/assignment_3_Jason_Wang.ipynb
hua-mike/CYPLAN255
03493a41f51bcd56f587cc088468eccbeafadc05
[ "CNRI-Python" ]
null
null
null
30.414066
397
0.38411
[ [ [ "REMEMBER: FIRST CREATE A COPY OF THIS FILE WITH A UNIQUE NAME AND DO YOUR WORK THERE. AND MAKE SURE YOU COMMIT YOUR CHANGES TO THE `hw3_submissions` BRANCH.", "_____no_output_____" ], [ "# Assignment 3 | Cleaning and Exploring Data with Pandas\n\n", "_____no_output_____" ], [ "<img src=\"data/scoreCard.jpg\" width=250>\n\nIn this assignment, you will investigate restaurant food safety scores for restaurants in San Francisco. Above is a sample score card for a restaurant. The scores and violation information have been made available by the San Francisco Department of Public Health. ", "_____no_output_____" ], [ "## Loading Food Safety Data\n\n\nThere are 2 files in the data directory:\n1. business.csv containing food establishments in San Francisco\n1. inspections.csv containing retaurant inspections records\n\nLet's start by loading them into Pandas dataframes. One of the files, business.csv, has encoding (ISO-8859-1), so you will need to account for that when reading it.", "_____no_output_____" ], [ "### Question 1\n\n#### Question 1a\nRead the two files noted above into two pandas dataframes named `bus` and `ins`, respectively. Print the first 5 rows of each to inspect them.\n", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "bus = pd.read_csv('data/businesses.csv', encoding='ISO-8859-1')\nins = pd.read_csv('data/inspections.csv')", "_____no_output_____" ], [ "bus.head()", "_____no_output_____" ], [ "ins.head()", "_____no_output_____" ] ], [ [ "## Examining the Business data\n\nFrom its name alone, we expect the `businesses.csv` file to contain information about the restaurants. Let's investigate this dataset.", "_____no_output_____" ], [ "### Question 2\n\n#### Question 2a: How many records are there?", "_____no_output_____" ], [ "<font color='Red'>There are 6315 Records</font>", "_____no_output_____" ] ], [ [ "len(bus)", "_____no_output_____" ] ], [ [ "#### Question 2b: How many unique business IDs are there? ", "_____no_output_____" ], [ "<font color='Red'>There are 6315 Unique Business ID's</font>", "_____no_output_____" ] ], [ [ "len(bus['business_id'].unique())", "_____no_output_____" ] ], [ [ "#### Question 2c: What are the 5 most common businesses by name, and how many are there in San Francisco?", "_____no_output_____" ], [ "<font color='Red'>The 5 most common business names are: Starbucks Coffee:72, Peet's Coffee & Tea:24, McDonalds:12, San Francisco Soup Company:11, Walgreens:11</font>", "_____no_output_____" ] ], [ [ "bus['name'].value_counts().to_frame().head(5)", "_____no_output_____" ] ], [ [ "## Zip code\n\nNext, let's explore some of the variables in the business table. We begin by examining the postal code.\n\n### Question 3\n\n#### Question 3a\nHow are the zip code values stored in python (i.e. data type)?\n\nTo answer this you might want to examine a particular entry.", "_____no_output_____" ], [ "<font color='Red'>Zip codes are stored as strings. This makes sense to me, because they are an identifier and should be used for any mathematical operations. This is also good to know for string manipulation purposes and also merging with other dataframes</font>", "_____no_output_____" ] ], [ [ "type(bus['postal_code'][0])", "_____no_output_____" ] ], [ [ "#### Question 3b\n\nWhat are the unique values of postal_code?", "_____no_output_____" ], [ "<font color='Red'>There are 46 unique values. 
The Unique values are shown below.</font>", "_____no_output_____" ] ], [ [ "bus['postal_code'].value_counts().to_frame()", "_____no_output_____" ] ], [ [ "#### Question 3c\n\nLet's say we decide to exclude the businesses that have no zipcode for our analysis (which might include food trucks for example). Use the list of valid 5-digit zip codes below to create a new dataframe called bus_valid, with only businesses whose postal_codes show up in this list of valid zipcodes. How many businesses are there in this new dataframe?", "_____no_output_____" ], [ "<font color='Red'>There are 5999 businesses in the new dataframe</font>", "_____no_output_____" ] ], [ [ "validZip = [\"94102\", \"94103\", \"94104\", \"94105\", \"94107\", \"94108\",\n \"94109\", \"94110\", \"94111\", \"94112\", \"94114\", \"94115\",\n \"94116\", \"94117\", \"94118\", \"94121\", \"94122\", \"94123\", \n \"94124\", \"94127\", \"94131\", \"94132\", \"94133\", \"94134\"]", "_____no_output_____" ], [ "bus_valid = bus[bus['postal_code'].isin(validZip)]", "_____no_output_____" ], [ "len(bus_valid)", "_____no_output_____" ] ], [ [ "## Latitude and Longitude\n\nAnother aspect of the data we want to consider is the prevalence of missing values. If many records have missing values then we might be concerned about whether the nonmissing values are representative of the population.\n\n### Question 4\n \nConsider the longitude and latitude in the business DataFrame. \n\n#### Question 4a\n\nHow many businesses are missing longitude values, working with only the businesses that are in the list of valid zipcodes?", "_____no_output_____" ], [ "<font color='Red'>There are 2483 records with missing longitude values</font>", "_____no_output_____" ] ], [ [ "bus_valid[pd.isnull(bus_valid['longitude'])]", "_____no_output_____" ], [ "sum(pd.isnull(bus_valid['longitude']))", "_____no_output_____" ] ], [ [ "#### Question 4b\n\nCreate a new dataframe with one row for each valid zipcode. The dataframe should include the following three columns:\n\n1. `postal_code`: Contains the zip codes in the `validZip` variable above.\n2. `null_lon`: The number of businesses in that zipcode with missing `longitude` values.\n3. `not_null_lon`: The number of businesses without missing `longitude` values.", "_____no_output_____" ] ], [ [ "#There's gotta be an easier way - pls excuse my OVERCOMPLICATED CODE\n\n#initialize dataframe with postal codes\npostal_code = list(bus_valid['postal_code'].value_counts().to_frame().index)\npostaldf = pd.DataFrame(postal_code)\npostaldf = postaldf.rename(columns={0:'postal_code'})\n\n\n#how many null/not null values in each postal code?\nnull_counts = []\nnot_null_counts = []\nfor code in postal_code:\n zipdf = bus_valid[bus_valid['postal_code'] == code]\n null_counts.append(sum(pd.isnull(zipdf['longitude'])))\n not_null_counts.append(sum(pd.notnull(zipdf['longitude'])))\npostaldf['null_lon'] = null_counts\npostaldf['not_null_lon'] = not_null_counts\n\npostaldf.head()", "_____no_output_____" ] ], [ [ "#### 4c. Do any zip codes appear to have more than their 'fair share' of missing longitude? 
\n\nTo answer this, you will want to compute the proportion of missing longitude values for each zip code, and print the proportion missing longitude, and print the top five zipcodes in descending order of proportion missing postal_code.\n", "_____no_output_____" ], [ "<font color='Red'>Zip code 94107 has the most null with 0.55 fraction null</font>", "_____no_output_____" ] ], [ [ "postaldf['missing_lon_frac'] = postaldf['null_lon'] / (postaldf['null_lon'] + postaldf['not_null_lon'])\npostaldf_frac_sorted = postaldf.sort_values('missing_lon_frac', ascending=False)\npostaldf_frac_sorted.head()", "_____no_output_____" ] ], [ [ "# Investigate the inspection data\n\nLet's now turn to the inspection DataFrame. Earlier, we found that `ins` has 4 columns, these are named `business_id`, `score`, `date` and `type`. In this section, we determine the granularity of `ins` and investigate the kinds of information provided for the inspections. \n\n### Question 5\n\n#### Question 5a\nAs with the business data, assess whether there is one inspection record for each business, by counting how many rows are in the data and how many unique businesses there are in the data. If they are exactly the same number, it means there is only one inspection per business, clearly.", "_____no_output_____" ], [ "<font color='Red'>Since there are more inspections than businesses, there is more than one inspection record for each business</font>", "_____no_output_____" ] ], [ [ "print(len(bus))\nprint(len(ins))", "6315\n15430\n" ] ], [ [ "#### Question 5b\n\nWhat values does `type` take on? How many occurrences of each value is in the DataFrame? Create a new dataframe named `ins2` by copying `ins` and keeping only records with values of `type` that occur more than 10 times in the original table. In other words, eliminate records that have values of `type` that occur rarely (< 10 times). Check the result to make sure rare types are eliminated.", "_____no_output_____" ], [ "<font color='Red'>type takes on \"routine\" and \"complaint\". \"complaint\" has only 1 record while \"routine\" has 15429 records. I have eliminated complaints by subsetting only routine inspections.</font>", "_____no_output_____" ] ], [ [ "ins['type'].value_counts().to_frame()", "_____no_output_____" ], [ "ins2 = ins[ins['type'] == 'routine']\nins2", "_____no_output_____" ] ], [ [ "#### Question 5c\n\nSince the data was stored in a .csv file, the dates are formatted as strings such as `20160503`. Once we read in the data, we would like to have dates in an appropriate format for analysis. Add a new column called `year` by capturing the first four characters of the date column. \n\nHint: we have seen multiple ways of doing this in class, includings `str` operations, `lambda` functions, `datetime` operations, and others. 
Choose the method that works best for you :)", "_____no_output_____" ] ], [ [ "ins2['year'] = ins2['date']\nyear_only = lambda x: str(x)[0:4]\nins2['year'] = ins2['year'].apply(year_only)\nins2", "C:\\Users\\wangj\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\nC:\\Users\\wangj\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n This is separate from the ipykernel package so we can avoid doing imports until\n" ] ], [ [ "#### Question 5d\n\nWhat range of years is covered in this data set? Are there roughly same number of inspections each year? Try dropping records for any years with less than 50 inspections and store the result in a new dataframe named `ins3`.", "_____no_output_____" ], [ "<font color='Red'>2013 only has 38 records whereas 2015-2016 has thousands of records. ins3 is created without records of 2013.</font>", "_____no_output_____" ] ], [ [ "ins2['year'].value_counts().to_frame()", "_____no_output_____" ], [ "ins3 = ins2[ins2['year'] != '2013']\nins3", "_____no_output_____" ] ], [ [ "Let's examine only the inspections for one year: 2016. This puts businesses on a more equal footing because [inspection guidelines](https://www.sfdph.org/dph/eh/Food/Inspections.asp) generally refer to how many inspections should occur in a given year.", "_____no_output_____" ], [ "### Question 6\n\n#### Question 6a\n\nMerge the business and 2016 inspections data, keeping all businesses regardless of whether they show up in the inspections file. Show the first several rows of the resulting dataframe.", "_____no_output_____" ] ], [ [ "ins3_2016 = ins3[ins3['year']=='2016']\nRatings_2016 = pd.merge(ins3_2016, bus_valid, how=\"inner\", left_on ='business_id', right_on ='business_id')", "_____no_output_____" ], [ "Ratings_2016.head(7)", "_____no_output_____" ] ], [ [ "#### Question 6b\nPrint the 20 lowest rated businesses names, their addresses, and their ratings.", "_____no_output_____" ] ], [ [ "Ratings_2016_sorted = Ratings_2016.sort_values('score')\nRatings_2016_sorted.head(20)[['name', 'address', 'score']]", "_____no_output_____" ] ], [ [ "## Done!\n\nNow commit this notebook to your `hw3_submissions` branch, push it to your GitHub repo, and open a PR!", "_____no_output_____" ], [ "NICE!", "_____no_output_____" ] ] ]
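On the "there's gotta be an easier way" comment in Question 4b, and on the Question 5c hint: a more idiomatic sketch (not part of the graded submission; variable names follow the notebook) could be:

```python
# Question 4b with groupby instead of an explicit loop over zip codes
null_lon = bus_valid['longitude'].isnull().groupby(bus_valid['postal_code']).sum()
not_null_lon = bus_valid['longitude'].notnull().groupby(bus_valid['postal_code']).sum()
postaldf = pd.DataFrame({'null_lon': null_lon,
                         'not_null_lon': not_null_lon}).reset_index()

# Question 5c with the vectorized .str accessor instead of a lambda;
# copying first also avoids the SettingWithCopyWarning seen above
ins2 = ins[ins['type'] == 'routine'].copy()
ins2['year'] = ins2['date'].astype(str).str[:4]
```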
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
4a4843ec62990b5822c549525809fd5e0d64711c
77,404
ipynb
Jupyter Notebook
Bitcoin_Prediction_with_the_use_of_an_LSTM.ipynb
Ipal23/LSTM-Neural-Network-Bitcoin-Stock-prediction
b14305c99fd9fc618a98eab50e6deca66ccebbc1
[ "MIT" ]
null
null
null
Bitcoin_Prediction_with_the_use_of_an_LSTM.ipynb
Ipal23/LSTM-Neural-Network-Bitcoin-Stock-prediction
b14305c99fd9fc618a98eab50e6deca66ccebbc1
[ "MIT" ]
null
null
null
Bitcoin_Prediction_with_the_use_of_an_LSTM.ipynb
Ipal23/LSTM-Neural-Network-Bitcoin-Stock-prediction
b14305c99fd9fc618a98eab50e6deca66ccebbc1
[ "MIT" ]
null
null
null
43.681716
14,514
0.510542
[ [ [ "<a href=\"https://colab.research.google.com/github/Ipal23/LSTM-Neural-Network-Bitcoin-Stock-prediction/blob/main/Bitcoin_Prediction_with_the_use_of_an_LSTM.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Project 1. The prediction of the Bitcoin Stock Price with the use of an Long-Short-Term-Memory Neural Network.\r\n\r\nIn case you track Artificial Intelligence, you have certainly read about Neural Networks. They have gained traction due to the fact that they perform extraordinarily well on a lot of different problems. They also have great ability to handle incomplete data. As such Deep learning is an important topic nowadays. \r\n\r\nIndividuals can be empowered to make decisions based on statistical analysis and may lean on the intuition the prediction provides. \r\nLong Short Term Memory networks were introduced by Hochreiter & Schmidhuber(1997).\r\nThe stock market industry is producing huge amounts of data which need to be mined to discover hidden information for effective decision making in terms of shareholder purchases and sales. \r\nLSTMs are designed to avoid issues such as the long-term dependency problem. Recalling information for long periods of time refer to their default behavior. \r\nThe preparation of independent variables was one of the major challenges faced.\r\n\r\nInput Data\r\n\r\nChoosing the proper set of independent variables is of utmost importance for accurate forecasting. The data used in this paper work were historical daily stock prices. \r\nIn this study the closing price is chosen to be modeled and predicted. \r\nIn terms of Time Series and econometric methods, MSE is considered an acceptable measure of performance. \r\n\r\n\r\n\r\n\r\n\r\n", "_____no_output_____" ] ], [ [ "# Author Iliana Paliari\r\n\r\n#The below code depicts the creation of an LSTM Neural Network for the prediction of the Bitcoin Stock prices.\r\n\r\n\r\nimport pandas as pd\r\nfrom google.colab import files\r\n\r\nimport pandas as pd\r\nfrom matplotlib import pyplot as plt\r\nimport numpy as np\r\nimport math\r\n\r\nfrom sklearn.preprocessing import normalize, MinMaxScaler\r\nfrom sklearn.metrics import mean_squared_error\r\nfrom keras import backend as K\r\n\r\nimport keras\r\nfrom keras.models import Sequential\r\nfrom keras.layers import GRU, Dense\r\nfrom keras.layers import LSTM\r\nfrom keras import callbacks\r\nfrom keras.utils import np_utils\r\nfrom keras.layers.core import Dense, Dropout, Activation\r\n\r\nfrom keras import optimizers\r\nimport tensorflow as tf\r\n\r\n# Dataset is now stored in a Pandas Dataframe\r\n\r\nuploaded = files.upload()\r\n\r\n\r\nimport io\r\ndf = pd.read_csv(io.BytesIO(uploaded['BCHAIN-MKPRU.csv']))", "_____no_output_____" ], [ "#Display first values\r\ndf.head()", "_____no_output_____" ], [ "#Convert df to Dataframe\r\ndf = pd.DataFrame(df, columns=['Date', 'Value'])\r\nprint(df)", " Date Value\n0 2021-03-06 48861.38\n1 2021-03-05 48448.91\n2 2021-03-04 50477.70\n3 2021-03-03 48356.04\n4 2021-03-02 49618.43\n... ... 
...\n4442 2009-01-06 0.00\n4443 2009-01-05 0.00\n4444 2009-01-04 0.00\n4445 2009-01-03 0.00\n4446 2009-01-02 0.00\n\n[4447 rows x 2 columns]\n" ], [ "#Data preparation\r\ncolumns_to_view = ['Value']\r\ndf = df[columns_to_view]\r\ndf.index.names = ['Date']\r\ndf.sort_index(inplace=True)\r\nprint('Total rows: {}'.format(len(df)))\r\ndf.head()", "Total rows: 4447\n" ], [ "# Display plot diagram\r\ndf.plot()", "_____no_output_____" ], [ "#Additional Data checks\r\ndf.isnull().sum()\r\nnull_columns=df.columns[df.isnull().any()]\r\ndf[null_columns].isnull().sum()", "_____no_output_____" ], [ "#Additional Data checks\r\nnull_columns=df.columns[df.isnull().any()]", "_____no_output_____" ], [ "#Additional Data checks\r\ndf.isnull().sum()", "_____no_output_____" ], [ "print(df[df.isnull().any(axis=1)][null_columns].head())\r\ndf.dropna(inplace=True)", "Empty DataFrame\nColumns: []\nIndex: []\n" ], [ "#Print the Min and the Max value\r\nprint('Min', np.min(df))\r\nprint('Max', np.max(df))", "Min Value 0.0\ndtype: float64\nMax Value 57487.86\ndtype: float64\n" ], [ "df.values.tolist()", "_____no_output_____" ], [ "df.dtypes", "_____no_output_____" ], [ "df['Value'] = pd.to_numeric(df['Value'],errors='coerce')", "_____no_output_____" ], [ "dataset = df.astype('float64')", "_____no_output_____" ], [ "def create_dataset(dataset, look_back=1):\r\n print(len(dataset), look_back)\r\n dataX, dataY = [], []\r\n for i in range(len(dataset)-look_back-1):\r\n #The target is always the next value. And the lookback are the previous prices\r\n a = dataset[i:(i+look_back), 0] \r\n print(i)\r\n print('X {} to {}'.format(i, i+look_back))\r\n print(a)\r\n print('Y {}'.format(i + look_back))\r\n print(dataset[i + look_back, 0])\r\n dataset[i + look_back, 0]\r\n dataX.append(a)\r\n dataY.append(dataset[i + look_back, :][0])#Isolate the target with [0] it must be 1st\r\n \r\n return np.array(dataX), np.array(dataY)", "_____no_output_____" ], [ "#Normalize the dataset\r\nscaler = MinMaxScaler(feature_range=(0, 1))\r\nscaled = scaler.fit_transform(dataset)\r\nprint('Min', np.min(scaled))\r\nprint('Max', np.max(scaled))", "Min 0.0\nMax 1.0\n" ], [ "df.dropna(inplace=True)", "_____no_output_____" ], [ "df.hist(bins=10)", "_____no_output_____" ], [ "len(df[df['Value'] == 0])", "_____no_output_____" ], [ "print(scaled[:10])", "[[0.84994258]\n [0.84276767]\n [0.87805843]\n [0.8411522 ]\n [0.86311145]\n [0.7847556 ]\n [0.8028803 ]\n [0.80608862]\n [0.81409223]\n [0.88061793]]\n" ], [ "#split into train and test sets\r\ntrain_size = int(len(scaled) * 0.70)\r\ntest_size = len(scaled - train_size)\r\ntrain, test = scaled[0:train_size, :], scaled[train_size: len(scaled), :]\r\nprint('train: {}\\ntest: {}'.format(len(train), len(test)))\r\n", "train: 3112\ntest: 1335\n" ] ] ]
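The excerpt of this notebook stops right after the train/test split, so the following is only a hedged sketch of where the pipeline would typically go next; the `look_back` value and the layer sizes are placeholders, not the author's actual choices:

```python
look_back = 60  # hypothetical window length
# Note: create_dataset() above prints verbose debug output for every window
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# Keras recurrent layers expect input shaped [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))

model = Sequential()
model.add(LSTM(50, input_shape=(look_back, 1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=10, batch_size=32, verbose=1)
```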
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a484441946a6d92935333e38bf2b7ac4bae332a
39,538
ipynb
Jupyter Notebook
notebooks/network_mapping/CLX_Supervised_Asset_Classification.ipynb
VibhuJawa/clx
678c539474bff4f58b32af346f3975040b3a9ecd
[ "Apache-2.0" ]
143
2019-11-06T16:08:50.000Z
2022-03-22T12:14:59.000Z
notebooks/network_mapping/CLX_Supervised_Asset_Classification.ipynb
VibhuJawa/clx
678c539474bff4f58b32af346f3975040b3a9ecd
[ "Apache-2.0" ]
361
2019-11-06T20:33:24.000Z
2022-03-31T19:59:12.000Z
notebooks/network_mapping/CLX_Supervised_Asset_Classification.ipynb
VibhuJawa/clx
678c539474bff4f58b32af346f3975040b3a9ecd
[ "Apache-2.0" ]
82
2019-11-06T17:36:42.000Z
2022-03-17T07:03:04.000Z
33.851027
630
0.391724
[ [ [ "# CLX Asset Classification (Supervised)", "_____no_output_____" ], [ "## Authors\n- Eli Fajardo (NVIDIA)\n- Görkem Batmaz (NVIDIA)\n- Bhargav Suryadevara (NVIDIA)\n", "_____no_output_____" ], [ "## Table of Contents \n* Introduction\n* Dataset\n* Reading in the datasets\n* Training and inference\n* References", "_____no_output_____" ], [ "# Introduction", "_____no_output_____" ], [ "In this notebook, we will show how to predict the function of a server with Windows Event Logs using cudf, cuml and pytorch. The machines are labeled as DC, SQL, WEB, DHCP, MAIL and SAP. The dependent variable will be the type of the machine. The features are selected from Windows Event Logs which is in a tabular format. This is a first step to learn the behaviours of certain types of machines in data-centres by classifying them probabilistically. It could help to detect unusual behaviour in a data-centre. For example, some compromised computers might be acting as web/database servers but with their original tag. \n\nThis work could be expanded by using different log types or different events from the machines as features to improve accuracy. Various labels can be selected to cover different types of machines or data-centres.", "_____no_output_____" ], [ "## Library imports", "_____no_output_____" ] ], [ [ "from clx.analytics.asset_classification import AssetClassification\nimport cudf\nfrom cuml.preprocessing import train_test_split\nfrom cuml.preprocessing import LabelEncoder\nimport torch\nfrom sklearn.metrics import accuracy_score, f1_score, confusion_matrix\nimport pandas as pd\nfrom os import path\nimport s3fs", "_____no_output_____" ] ], [ [ "## Initialize variables", "_____no_output_____" ], [ "10000 is chosen as the batch size to optimise the performance for this dataset. It can be changed depending on the data loading mechanism or the setup used. \n\nEPOCH should also be adjusted depending on convergence for a specific dataset. \n\nlabel_col indicates the total number of features used plus the dependent variable. Feature names are listed below.", "_____no_output_____" ] ], [ [ "batch_size = 10000\nlabel_col = '19'\nepochs = 15", "_____no_output_____" ], [ "ac = AssetClassification()", "_____no_output_____" ] ], [ [ "## Read the dataset into a GPU dataframe with `cudf.read_csv()` \n\nThe original data had many other fields. Many of them were either static or mostly blank. After filtering those, there were 18 meaningful columns left. In this notebook we use a fake continuous feature to show the inclusion of continuous features too. 
When you are using raw data the cell below need to be uncommented", "_____no_output_____" ] ], [ [ "# win_events_gdf = cudf.read_csv(\"raw_features_and_labels.csv\")", "_____no_output_____" ] ], [ [ "```\nwin_events_gdf.dtypes\n\neventcode int64\nkeywords object\nprivileges object\nmessage object\nsourcename object\ntaskcategory object\naccount_for_which_logon_failed_account_domain object\ndetailed_authentication_information_authentication_package object\ndetailed_authentication_information_key_length float64\ndetailed_authentication_information_logon_process object\ndetailed_authentication_information_package_name_ntlm_only object\nlogon_type float64\nnetwork_information_workstation_name object\nnew_logon_security_id object\nimpersonation_level object\nnetwork_information_protocol float64\nnetwork_information_direction object\nfilter_information_layer_name object\ncont1 int64\nlabel object\ndtype: object\n```", "_____no_output_____" ], [ "### Define categorical and continuous feature columns.", "_____no_output_____" ] ], [ [ "cat_cols = [\n \"eventcode\",\n \"keywords\",\n \"privileges\",\n \"message\",\n \"sourcename\",\n \"taskcategory\",\n \"account_for_which_logon_failed_account_domain\",\n \"detailed_authentication_information_authentication_package\",\n \"detailed_authentication_information_key_length\",\n \"detailed_authentication_information_logon_process\",\n \"detailed_authentication_information_package_name_ntlm_only\",\n \"logon_type\",\n \"network_information_workstation_name\",\n \"new_logon_security_id\",\n \"impersonation_level\",\n \"network_information_protocol\",\n \"network_information_direction\",\n \"filter_information_layer_name\",\n \"label\"\n]", "_____no_output_____" ], [ "cont_cols = [\n \"cont1\"\n]", "_____no_output_____" ] ], [ [ "The following are functions used to preprocess categorical and continuous feature columns. This can very depending on what best fits your application and data.", "_____no_output_____" ] ], [ [ "def categorize_columns(cat_gdf):\n for col in cat_gdf.columns:\n cat_gdf[col] = cat_gdf[col].astype('str')\n cat_gdf[col] = cat_gdf[col].fillna(\"NA\")\n cat_gdf[col] = LabelEncoder().fit_transform(cat_gdf[col])\n cat_gdf[col] = cat_gdf[col].astype('int16')\n \n return cat_gdf", "_____no_output_____" ], [ "def normalize_conts(cont_gdf):\n means, stds = (cont_gdf.mean(0), cont_gdf.std(ddof=0))\n cont_gdf = (cont_gdf - means) / stds\n \n return cont_gdf", "_____no_output_____" ] ], [ [ "Preprocessing steps below are not executed in this notebook, because we release already preprocessed data.", "_____no_output_____" ] ], [ [ "#win_events_gdf[cat_cols] = categorize_columns(win_events_gdf[cat_cols])", "_____no_output_____" ], [ "#win_events_gdf[cont_cols] = normalize_conts(win_events_gdf[cont_cols])", "_____no_output_____" ] ], [ [ "Read Windows Event data already preprocessed by above steps", "_____no_output_____" ] ], [ [ "S3_BASE_PATH = \"rapidsai-data/cyber/clx\"\nWINEVT_PREPROC_CSV = \"win_events_features_preproc.csv\"\n\n# Download Zeek conn log\nif not path.exists(WINEVT_PREPROC_CSV):\n fs = s3fs.S3FileSystem(anon=True)\n fs.get(S3_BASE_PATH + \"/\" + WINEVT_PREPROC_CSV, WINEVT_PREPROC_CSV)\n\nwin_events_gdf = cudf.read_csv(\"win_events_features_preproc.csv\")", "_____no_output_____" ], [ "win_events_gdf.head()", "_____no_output_____" ] ], [ [ "### Split the dataset into training and test sets using cuML `train_test_split` function\nColumn 19 contains the ground truth about each machine's function that the logs come from. i.e. 
DC, SQL, WEB, DHCP, MAIL and SAP. Hence it will be used as a label.", "_____no_output_____" ] ], [ [ "X_train, X_test, Y_train, Y_test = train_test_split(win_events_gdf, \"label\", train_size=0.9)\nX_train[\"label\"] = Y_train", "_____no_output_____" ], [ "X_train.head()", "_____no_output_____" ], [ "Y_train.unique()", "_____no_output_____" ] ], [ [ "### Print Labels\nMaking sure the test set contains all labels", "_____no_output_____" ] ], [ [ "Y_test.unique()", "_____no_output_____" ] ], [ [ "## Training \n\nAsset Classification training uses the fastai tabular model. More details can be found at https://github.com/fastai/fastai/blob/master/fastai/tabular/models.py#L6\n\nFeature columns will be embedded so that they can be used as categorical values. The limit can be changed depending on the accuracy of the dataset.", "_____no_output_____" ], [ "Adam is the optimizer used in the training process; it is popular because it produces good results in various tasks. In its paper, computing the first and the second moment estimates and updating the parameters are summarized as follows", "_____no_output_____" ], [ "$$\\alpha_{t}=\\alpha \\cdot \\sqrt{1-\\beta_{2}^{t}} /\\left(1-\\beta_{1}^{t}\\right)$$", "_____no_output_____" ], [ "More detailson Adam can be found at https://arxiv.org/pdf/1412.6980.pdf", "_____no_output_____" ], [ "We have found that the way we partition the dataframes with a 10000 batch size gives us the optimum data loading capability. The **batch_size** argument can be adjusted for different sizes of datasets.", "_____no_output_____" ] ], [ [ "cat_cols.remove(\"label\")\nac.train_model(X_train, cat_cols, cont_cols, \"label\", batch_size, epochs, lr=0.01, wd=0.0)", "/opt/conda/envs/rapids/lib/python3.7/site-packages/cudf/io/dlpack.py:74: UserWarning: WARNING: cuDF to_dlpack() produces column-major (Fortran order) output. If the output tensor needs to be row major, transpose the output of this function.\n return libdlpack.to_dlpack(gdf_cols)\n" ] ], [ [ "## Evaluation", "_____no_output_____" ] ], [ [ "pred_results = ac.predict(X_test, cat_cols, cont_cols).to_array()\ntrue_results = Y_test.to_array()", "_____no_output_____" ], [ "f1_score_ = f1_score(pred_results, true_results, average='micro')\nprint('micro F1 score: %s'%(f1_score_))", "micro F1 score: 0.9171640881958826\n" ], [ "torch.cuda.empty_cache()", "_____no_output_____" ], [ "labels = [\"DC\",\"DHCP\",\"MAIL\",\"SAP\",\"SQL\",\"WEB\"]\na = confusion_matrix(true_results, pred_results)", "_____no_output_____" ], [ "pd.DataFrame(a, index=labels, columns=labels)", "_____no_output_____" ] ], [ [ "The confusion matrix shows that some machines' function can be predicted really well, whereas some of them need more tuning or more features. This work can be improved and expanded to cover individual data-centres to create a realistic map of the network using ML by not just relying on the naming conventions. It could also help to detect more prominent scale anomalies like multiple machines, not acting per their tag.", "_____no_output_____" ], [ "## References:\n* https://github.com/fastai/fastai/blob/master/fastai/tabular/models.py#L6\n* https://jovian.ml/aakashns/04-feedforward-nn\n* https://www.kaggle.com/dienhoa/reverse-tabular-module-of-fast-ai-v1\n* https://github.com/fastai/fastai/blob/master/fastai/layers.py#L44", "_____no_output_____" ] ] ]
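For completeness, since the notebook quotes only the bias-corrected step size $\alpha_{t}$: the full Adam update from Kingma & Ba (2014) is

$$m_{t} = \beta_{1} m_{t-1} + (1-\beta_{1})\, g_{t}, \qquad v_{t} = \beta_{2} v_{t-1} + (1-\beta_{2})\, g_{t}^{2}$$

$$\hat{m}_{t} = \frac{m_{t}}{1-\beta_{1}^{t}}, \qquad \hat{v}_{t} = \frac{v_{t}}{1-\beta_{2}^{t}}, \qquad \theta_{t} = \theta_{t-1} - \alpha\, \frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}} + \epsilon}$$

where $g_{t}$ is the gradient at step $t$; the quoted $\alpha_{t}$ simply folds the two bias corrections into the step size for an efficient implementation.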
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
4a484d3c425ae8240c69c81eb704acde821731e4
10,508
ipynb
Jupyter Notebook
imagenet/Load URLs.ipynb
bstell-ml001/ml
7e1801e8accf6be77fa93cb6d7a02cabc7b97700
[ "Apache-2.0" ]
null
null
null
imagenet/Load URLs.ipynb
bstell-ml001/ml
7e1801e8accf6be77fa93cb6d7a02cabc7b97700
[ "Apache-2.0" ]
null
null
null
imagenet/Load URLs.ipynb
bstell-ml001/ml
7e1801e8accf6be77fa93cb6d7a02cabc7b97700
[ "Apache-2.0" ]
null
null
null
43.421488
166
0.539399
[ [ [ "from glob import glob\nimport os\nimport inspect\nimport requests\n# import urllib.request\nfrom urlparse import urlparse\nimport wget", "_____no_output_____" ], [ "STARTING_DIR = os.getcwd()", "_____no_output_____" ], [ "URLS_START_RANGE = 1\nURLS_END_RANGE = 101", "_____no_output_____" ], [ "def fetch_urls(path):\n os.chdir(path)\n urls = [line.strip() for line in open('urls')]\n for url in urls[URLS_START_RANGE:URLS_END_RANGE]:\n u = url_parts = urlparse(url)\n #filename = os.path.basename(url_parts.path)\n hostname = u.hostname #.replace('/', '_')\n if u.port:\n hostname += '_' + str(u.port)\n path = u.path.replace('/', '_').replace('.JPG', '.jpg')\n filename = '%s_%s%s' % (u.scheme, hostname, path)\n #print('filename: %s' % filename)\n extension = os.path.splitext(filename)[1]\n if not extension.lower() == '.jpg':\n print('not downloading ' + url)\n continue\n if os.path.exists(filename):\n #print('already downloaded: ' + url)\n continue\n try:\n r = requests.get(url, timeout=1)\n print('fetched ' + url)\n with open(filename, 'wb') as f: \n f.write(r.content)\n f.close()\n except:\n print('failed to load ' + url)\n\n os.chdir(STARTING_DIR)\n\n", "_____no_output_____" ], [ "PATH_TO_IMAGE_DIRS = 'produce'\nIMAGE_DIRS = glob(os.path.join(STARTING_DIR, PATH_TO_IMAGE_DIRS, '*'))\nIMAGE_DIRS.sort()\nfor dir in IMAGE_DIRS:\n print('\\n====================================')\n print(dir)\n print('====================================')\n fetch_urls(dir)\n #break", "\n====================================\n/home/bstell_ml001/ml/imagenet/produce/apple\n====================================\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/banana\n====================================\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/bell_pepper\n====================================\nfailed to load http://www.jnhfoods.com/1005%20Bell%20Pepper%20yellow.JPG\nfailed to load http://www.freeclipartnow.com/d/20801-2/bell-pepper-red.jpg\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/broccoli\n====================================\nfailed to load http://www.freeclipartnow.com/d/20513-2/broccoli.jpg\nfailed to load http://healthandfitnessreporter.com/Images/broccoli.jpg\nfailed to load http://guide.physiciansfitnesscoach.com/Image/Nutrition/SpinachBroccoli.jpg\nfailed to load http://aura.gaia.com/photos/28/275197/medium/broccoli.jpg\nnot downloading http://www.victoriapacking.com/images/veggy/broccoli.gif\nfailed to load http://www.pulsodepuertorico.com/Fotos%20a%20usarse/Broccoli.jpg\nfailed to load http://www.ilgirasoledimarika.com/images/144df132ceec6a.jpg\nnot downloading http://ecollegey.com/images/broccoli.gif\nnot downloading http://tracykrell.com/broccoli.png\nfailed to load http://www.titefood.com/images/broccoli.jpg\nfailed to load http://stg64.kraftcanada.com/SiteCollectionImages/ImageRepository/2/Broccoli.jpg\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/carrot\n====================================\nfailed to load http://www.wsu.edu:8080/~wsherb/edpages/delicious/images/carrot2.jpg\nfailed to load http://p2.iecool.com/photo/n/1%20(3)/sc2_010.jpg\nfailed to load http://kepu.gzst.net.cn/News/UploadImages/News/9294314281.jpg\nfailed to load http://happyfarming.com/images/siamese_twin_carrot.jpg\nfailed to load http://www.wulanchabu.gov.cn/pic/2005325103950.jpg\nfailed to load http://www.eatba.com/img/20081025021429921.jpg\nfailed to load 
http://jx.ganzhou.com/Jiankang/UploadFiles_4377/200801/20080130113011473.jpg\nfailed to load http://www.cocinaya.com/files/cocinaya.com/imagecache/full/files/cocinaya.com/fotos_articulos/zanahoria.jpg\nfailed to load http://kminter.coa.gov.tw/site/coa/public/data/harvest/country/960836.1.jpg\nfailed to load http://www.yt315.gov.cn/1/UploadFile/20086554056471.jpg\nfailed to load http://www.kidsturncentral.com/games/sliders/slider31.jpg\nfailed to load http://www.wymaengineering.co.nz/files/carrot-sample-3.jpg\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/cucumber\n====================================\nfailed to load http://images.suite101.com/88452_cukefreeimage.jpg\nfailed to load http://www.universalenergycenter.com/images/cucumber.jpg\nfailed to load http://blogs1.marthastewart.com/homegrown/images/2008/02/12/cucumber_picklebush.jpg\nfailed to load http://www.plantcare.com/oldSite/httpdocs/images/namedImages/cucumber.jpg\nfailed to load http://www.oldshawfarm.com/archives/upload/2008/05/IMG_0813a.jpg\nfailed to load http://teenage-secret.myfirstpost.net/1217886397_ogurec.jpg\nfailed to load http://www.coopext.colostate.edu/4dmg/images/diva.jpg\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/grape\n====================================\nfailed to load http://xholiday.cn/image/xholiday_article_2104.jpg\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/lemon\n====================================\nfailed to load http://images.inmagine.com/img/photoalto/paa276/paa276000020.jpg\nfailed to load http://www.artificialplants.co.uk/lemonfruit_small.jpg\nfailed to load http://www.simplestockshots.com/imageLibrary/processedImages/thumbnail/fav02113-03.jpg\nfailed to load http://www.uni-graz.ac.at/~katzer/pictures/citr_43.jpg\nnot downloading http://designbyrussell.com/fruit_photos/lemon.gif\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/orange\n====================================\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/peach\n====================================\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/pear\n====================================\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/pineapple\n====================================\nfailed to load http://www.furlongphoto.com/pictures/Fiji2/thumbnails/pineapple-box.jpg\nfailed to load http://www.shutterandpupil.com/images/pineapple.jpg\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/potato\n====================================\nnot downloading http://www.inerboristeria.com/files/imagecache/resize/files/patate-succo.png\nfailed to load http://www.ecofriend.org/images/earthshell_corp_makes_compostable_plates_and_bowls_from_potato_and_corn_starches_and_lime_from_limestone.jpg\nnot downloading http://www.dicoma.it/patata.gif\nfailed to load http://allergyadvisor.com/Educational/images/Potato1.jpg\nfailed to load http://prodottitipici.provincia.cuneo.it/_images/prodotti/foto/2124.jpg\nfailed to load http://www.eladerezo.com/wp-content/uploads/2007/01/WindowsLiveWriter/Enbuscadelapatataperfecta_971E/patata%5B3%5D.jpg\nfailed to load http://thenewspointer.blogdns.com/thenewspointer/wp-content/uploads/2008/04/800px-patates1.jpg\nfailed to load http://z.about.com/d/americanfood/1/0/Y/0/-/-/potato_maesejose.jpg\nfetched 
http://www.foodieobsessed.com/wp-content/uploads/2008/04/potato-free-image.jpg\nfailed to load http://www.ricette-italia.net/wp-content/uploads/2008/07/patate-thumb.jpg\nfailed to load http://files.myopera.com/EspenAO/albums/141474/food.jpg\nfailed to load http://zt.tibet.cn/web/linzhixian/lzx/..%5Clzx/pic/2008011406.jpg\nfailed to load http://z.about.com/d/americanfood/1/I/d/1/-/-/potatosal.jpg\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/strawberry\n====================================\n\n====================================\n/home/bstell_ml001/ml/imagenet/produce/tomato\n====================================\n" ] ] ]
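One portability note on the downloader script above: `from urlparse import urlparse` exists only on Python 2. A version-agnostic sketch of the same import (everything else in the notebook unchanged) would be:

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2
```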
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a4862c2425a88a8ba8925a5a9034385464b8f16
338,826
ipynb
Jupyter Notebook
chapters/chapter8-wrangling-basics.ipynb
unverciftci/python-programming-for-data-science
61b355a44015f0c3f1c9b0f7a82f04b4907c1ff9
[ "CC0-1.0" ]
32
2020-12-24T17:12:21.000Z
2022-03-25T22:51:47.000Z
chapters/chapter8-wrangling-basics.ipynb
unverciftci/python-programming-for-data-science
61b355a44015f0c3f1c9b0f7a82f04b4907c1ff9
[ "CC0-1.0" ]
1
2022-03-31T06:42:57.000Z
2022-03-31T06:42:57.000Z
chapters/chapter8-wrangling-basics.ipynb
unverciftci/python-programming-for-data-science
61b355a44015f0c3f1c9b0f7a82f04b4907c1ff9
[ "CC0-1.0" ]
26
2020-12-24T23:23:11.000Z
2022-03-25T22:52:02.000Z
35.173466
18,612
0.416524
[ [ [ "![](../docs/banner.png)", "_____no_output_____" ], [ "# Chapter 8: Basic Data Wrangling With Pandas", "_____no_output_____" ], [ "<h2>Chapter Outline<span class=\"tocSkip\"></span></h2>\n<hr>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#1.-DataFrame-Characteristics\" data-toc-modified-id=\"1.-DataFrame-Characteristics-2\">1. DataFrame Characteristics</a></span></li><li><span><a href=\"#2.-Basic-DataFrame-Manipulations\" data-toc-modified-id=\"2.-Basic-DataFrame-Manipulations-3\">2. Basic DataFrame Manipulations</a></span></li><li><span><a href=\"#3.-DataFrame-Reshaping\" data-toc-modified-id=\"3.-DataFrame-Reshaping-4\">3. DataFrame Reshaping</a></span></li><li><span><a href=\"#4.-Working-with-Multiple-DataFrames\" data-toc-modified-id=\"4.-Working-with-Multiple-DataFrames-5\">4. Working with Multiple DataFrames</a></span></li><li><span><a href=\"#5.-More-DataFrame-Operations\" data-toc-modified-id=\"5.-More-DataFrame-Operations-6\">5. More DataFrame Operations</a></span></li></ul></div>", "_____no_output_____" ], [ "## Chapter Learning Objectives\n<hr>", "_____no_output_____" ], [ "- Inspect a dataframe with `df.head()`, `df.tail()`, `df.info()`, `df.describe()`.\n- Obtain dataframe summaries with `df.info()` and `df.describe()`.\n- Manipulate how a dataframe displays in Jupyter by modifying Pandas configuration options such as `pd.set_option(\"display.max_rows\", n)`.\n- Rename columns of a dataframe using the `df.rename()` function or by accessing the `df.columns` attribute.\n- Modify the index name and index values of a dataframe using `.set_index()`, `.reset_index()` , `df.index.name`, `.index`.\n- Use `df.melt()` and `df.pivot()` to reshape dataframes, specifically to make tidy dataframes.\n- Combine dataframes using `df.merge()` and `pd.concat()` and know when to use these different methods.\n- Apply functions to a dataframe `df.apply()` and `df.applymap()`\n- Perform grouping and aggregating operations using `df.groupby()` and `df.agg()`.\n- Perform aggregating methods on grouped or ungrouped objects such as finding the minimum, maximum and sum of values in a dataframe using `df.agg()`.\n- Remove or fill missing values in a dataframe with `df.dropna()` and `df.fillna()`.", "_____no_output_____" ], [ "## 1. DataFrame Characteristics\n<hr>", "_____no_output_____" ], [ "Last chapter we looked at how we can create dataframes. Let's now look at some helpful ways we can view our dataframe.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ] ], [ [ "### Head/Tail", "_____no_output_____" ], [ "The `.head()` and `.tail()` methods allow you to view the top/bottom *n* (default 5) rows of a dataframe. Let's load in the cycling data set from last chapter and try them out:", "_____no_output_____" ] ], [ [ "df = pd.read_csv('data/cycling_data.csv')\ndf.head()", "_____no_output_____" ] ], [ [ "The default return value is 5 rows, but we can pass in any number we like. For example, let's take a look at the top 10 rows:", "_____no_output_____" ] ], [ [ "df.head(10)", "_____no_output_____" ] ], [ [ "Or the bottom 5 rows:", "_____no_output_____" ] ], [ [ "df.tail()", "_____no_output_____" ] ], [ [ "### DataFrame Summaries", "_____no_output_____" ], [ "Three very helpful attributes/functions for getting high-level summaries of your dataframe are:\n- `.shape`\n- `.info()`\n- `.describe()`", "_____no_output_____" ], [ "`.shape` is just like the ndarray attribute we've seen previously. 
It gives the shape (rows, cols) of your dataframe:", "_____no_output_____" ] ], [ [ "df.shape", "_____no_output_____" ] ], [ [ "`.info()` prints information about the dataframe itself, such as dtypes, memory usage, non-null values, etc:", "_____no_output_____" ] ], [ [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 33 entries, 0 to 32\nData columns (total 6 columns):\n #   Column    Non-Null Count  Dtype  \n---  ------    --------------  -----  \n 0   Date      33 non-null     object \n 1   Name      33 non-null     object \n 2   Type      33 non-null     object \n 3   Time      33 non-null     int64  \n 4   Distance  31 non-null     float64\n 5   Comments  33 non-null     object \ndtypes: float64(1), int64(1), object(4)\nmemory usage: 1.7+ KB\n" ] ], [ [ "`.describe()` provides summary statistics of the values within a dataframe:", "_____no_output_____" ] ], [ [ "df.describe()", "_____no_output_____" ] ], [ [ "By default, `.describe()` only prints summaries of numeric features. We can force it to give summaries on all features using the argument `include='all'` (although they may not make sense!):", "_____no_output_____" ] ], [ [ "df.describe(include='all')", "_____no_output_____" ] ], [ [ "### Displaying DataFrames", "_____no_output_____" ], [ "Displaying your dataframes effectively can be an important part of your workflow. If a dataframe has more than 60 rows, Pandas will only display the first 5 and last 5 rows:", "_____no_output_____" ] ], [ [ "pd.DataFrame(np.random.rand(100))", "_____no_output_____" ] ], [ [ "For dataframes of fewer than 60 rows, Pandas will print the whole dataframe:", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ] ], [ [ "I find the 60 row threshold to be a little too much; I prefer something more like 20. You can change the setting using `pd.set_option(\"display.max_rows\", 20)` so that anything with more than 20 rows will be summarised by the first and last 5 rows as before:", "_____no_output_____" ] ], [ [ "pd.set_option(\"display.max_rows\", 20)\ndf", "_____no_output_____" ] ], [ [ "There are also other display options you can change, such as how many columns are shown, how numbers are formatted, etc. See the [official documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html#options-and-settings) for more.\n\nOne display option I will point out is that Pandas allows you to style your tables, for example by highlighting negative values, or adding conditional colour maps to your dataframe. Below I'll style values based on their value ranging from negative (purple) to positive (yellow) but you can see the [styling documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html#Styling) for more examples.", "_____no_output_____" ] ], [ [ "test = pd.DataFrame(np.random.randn(5, 5),\n                    index = [f\"row_{_}\" for _ in range(5)],\n                    columns = [f\"feature_{_}\" for _ in range(5)])\ntest.style.background_gradient(cmap='plasma')", "_____no_output_____" ] ], [ [ "### Views vs Copies", "_____no_output_____" ], [ "In previous chapters we've discussed views (\"looking\" at a part of an existing object) and copies (making a new copy of the object in memory).
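\n\nAs a quick refresher, here's a minimal sketch of the idea in plain NumPy (my own toy example, separate from the cycling data):\n\n```python\nimport numpy as np\n\narr = np.arange(5)\nview = arr[1:3]          # a view: shares memory with arr\nview[0] = 99             # this also changes arr\ncopy = arr[1:3].copy()   # an independent copy\ncopy[0] = -1             # arr is unaffected\n```\n\n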
These things get a little abstract with Pandas and \"...it’s very hard to predict whether it will return a view or a copy\" (that's a quote straight [from a dedicated section in the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy)).\n\nBasically, it depends on the operation you are trying to perform, your dataframe's structure and the memory layout of the underlying array. But don't worry, let me tell you all you need to know. Firstly, the most common warning you'll encounter in Pandas is the `SettingWithCopy`; Pandas raises it as a warning that you might not be doing what you think you're doing. Let's see an example. You may recall there is one outlier `Time` in our dataframe:", "_____no_output_____" ] ], [ [ "df[df['Time'] > 4000]", "_____no_output_____" ] ], [ [ "Imagine we wanted to change this to `2000`. You'd probably do the following:", "_____no_output_____" ] ], [ [ "df[df['Time'] > 4000]['Time'] = 2000", "/opt/miniconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n  \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "Ah, there's that warning. Did our dataframe get changed?", "_____no_output_____" ] ], [ [ "df[df['Time'] > 4000]", "_____no_output_____" ] ], [ [ "No, it didn't, even though you probably thought it did. What happened above is that `df[df['Time'] > 4000]` was executed first and returned a copy of the dataframe, which we can confirm by using `id()`:", "_____no_output_____" ] ], [ [ "print(f\"The id of the original dataframe is: {id(df)}\")\nprint(f\" The id of the indexed dataframe is: {id(df[df['Time'] > 4000])}\")", "The id of the original dataframe is: 5762156560\n The id of the indexed dataframe is: 5781171152\n" ] ], [ [ "We then tried to set a value on this new object by appending `['Time'] = 2000`. Pandas is warning us that we are doing that operation on a copy of the original dataframe, which is probably not what we want. To fix this, you need to index in a single go, using `.loc[]` for example:", "_____no_output_____" ] ], [ [ "df.loc[df['Time'] > 4000, 'Time'] = 2000", "_____no_output_____" ] ], [ [ "No error this time! And let's confirm the change:", "_____no_output_____" ] ], [ [ "df[df['Time'] > 4000]", "_____no_output_____" ] ], [ [ "The second thing you need to know is that if you're ever in doubt about whether something is a view or a copy, you can just use the `.copy()` method to force a copy of a dataframe. Just like this:", "_____no_output_____" ] ], [ [ "df2 = df[df['Time'] > 4000].copy()", "_____no_output_____" ] ], [ [ "That way, you're guaranteed a copy that you can modify as you wish.", "_____no_output_____" ], [ "## 2. Basic DataFrame Manipulations\n<hr>", "_____no_output_____" ], [ "### Renaming Columns", "_____no_output_____" ], [ "We can rename columns two ways:\n1. Using `.rename()` (to selectively change column names)\n2. By setting the `.columns` attribute (to change all column names at once)", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ] ], [ [ "Let's give it a go:", "_____no_output_____" ] ], [ [ "df.rename(columns={\"Date\": \"Datetime\",\n                   \"Comments\": \"Notes\"})\ndf", "_____no_output_____" ] ], [ [ "Wait? What happened? Nothing changed? In the code above we did actually rename columns of our dataframe but we didn't modify the dataframe in place, we made a copy of it. There are generally two options for making permanent dataframe changes:\n- 1. Use the argument `inplace=True`, e.g., `df.rename(..., inplace=True)`, available in most functions/methods\n- 2. Re-assign, e.g., `df = df.rename(...)`\nThe Pandas team recommends **Method 2 (re-assign)**, for a [few reasons](https://www.youtube.com/watch?v=hK6o_TDXXN8&t=700) (mostly to do with how memory is allocated under the hood).
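\n\nRe-assignment also plays nicely with method chaining, a common Pandas style. A quick illustrative sketch (this simply chains two methods from this chapter; it isn't part of the original example):\n\n```python\n# each method returns a new dataframe, so calls can be chained and re-assigned\ndf = (df.rename(columns={\"Date\": \"Datetime\"})\n        .reset_index(drop=True))\n```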
", "_____no_output_____" ] ], [ [ "df = df.rename(columns={\"Date\": \"Datetime\",\n                        \"Comments\": \"Notes\"})\ndf", "_____no_output_____" ] ], [ [ "If you wish to change all of the columns of a dataframe, you can do so by setting the `.columns` attribute:", "_____no_output_____" ] ], [ [ "df.columns = [f\"Column {_}\" for _ in range(1, 7)]\ndf", "_____no_output_____" ] ], [ [ "### Changing the Index", "_____no_output_____" ], [ "You can change the index labels of a dataframe in 4 main ways:\n1. `.set_index()` to make one of the columns of the dataframe the index\n2. Directly modify `df.index.name` to change the index name\n3. `.reset_index()` to move the current index as a column and to reset the index with integer labels starting from 0\n4. Directly modify the `.index` attribute", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ] ], [ [ "Below I will set the index as `Column 1` and rename the index to \"New Index\":", "_____no_output_____" ] ], [ [ "df = df.set_index(\"Column 1\")\ndf.index.name = \"New Index\"\ndf", "_____no_output_____" ] ], [ [ "I can send the index back to a column and have a default integer index using `.reset_index()`:", "_____no_output_____" ] ], [ [ "df = df.reset_index()\ndf", "_____no_output_____" ] ], [ [ "Like with column names, we can also modify the index directly, but I can't remember ever doing this; usually I'll use `.set_index()`:", "_____no_output_____" ] ], [ [ "df.index", "_____no_output_____" ], [ "df.index = range(100, 133, 1)\ndf", "_____no_output_____" ] ], [ [ "### Adding/Removing Columns", "_____no_output_____" ], [ "There are two main ways to add/remove columns of a dataframe:\n1. Use `[]` to add columns\n2. Use `.drop()` to drop columns\n\nLet's re-read in a fresh copy of the cycling dataset.", "_____no_output_____" ] ], [ [ "df = pd.read_csv('data/cycling_data.csv')\ndf", "_____no_output_____" ] ], [ [ "We can add a new column to a dataframe by simply using `[]` with a new column name and value(s):", "_____no_output_____" ] ], [ [ "df['Rider'] = 'Tom Beuzen'\ndf['Avg Speed'] = df['Distance'] * 1000 / df['Time']  # avg. speed in m/s\ndf", "_____no_output_____" ], [ "df = df.drop(columns=['Rider', 'Avg Speed'])\ndf", "_____no_output_____" ] ], [ [ "### Adding/Removing Rows", "_____no_output_____" ], [ "You won't often be adding rows to a dataframe manually (you'll usually add rows through concatenating/joining - that's coming up next). You can add/remove rows of a dataframe in two ways:\n1. Use `.append()` to add rows\n2. Use `.drop()` to drop rows
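\n\nOne caveat worth flagging: in newer versions of Pandas (1.4+), `.append()` is deprecated in favour of `pd.concat()`. A sketch of the modern equivalent, reusing the same example row as below:\n\n```python\nnew_row = pd.DataFrame([[\"12 Oct 2019, 00:10:57\", \"Morning Ride\", \"Ride\",\n                         2331, 12.67, \"Washed and oiled bike last night\"]],\n                       columns=df.columns)\ndf_extended = pd.concat([df, new_row], ignore_index=True)\n```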
", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ] ], [ [ "Let's add a new row to the bottom of this dataframe:", "_____no_output_____" ] ], [ [ "another_row = pd.DataFrame([[\"12 Oct 2019, 00:10:57\", \"Morning Ride\", \"Ride\",\n                             2331, 12.67, \"Washed and oiled bike last night\"]],\n                           columns = df.columns,\n                           index = [33])\ndf = df.append(another_row)\ndf", "_____no_output_____" ] ], [ [ "We can drop all rows above index 30 using `.drop()`:", "_____no_output_____" ] ], [ [ "df.drop(index=range(30, 34))", "_____no_output_____" ] ], [ [ "## 3. DataFrame Reshaping\n<hr>", "_____no_output_____" ], [ "[Tidy data](https://vita.had.co.nz/papers/tidy-data.pdf) is about \"linking the structure of a dataset with its semantics (its meaning)\". It is defined by:\n1. Each variable forms a column\n2. Each observation forms a row\n3. Each type of observational unit forms a table\n\nOften you'll need to reshape a dataframe to make it tidy (or for some other purpose).\n \n![](img/chapter8/tidy.png)\n\nSource: [r4ds](https://r4ds.had.co.nz/tidy-data.html#fig:tidy-structure)", "_____no_output_____" ], [ "### Melt and Pivot", "_____no_output_____" ], [ "Pandas `.melt()`, `.pivot()` and `.pivot_table()` can help reshape dataframes:\n- `.melt()`: make wide data long.\n- `.pivot()`: make long data wide.\n- `.pivot_table()`: same as `.pivot()` but can handle multiple indexes.\n \n![](img/chapter8/melt_pivot.gif)\n\nSource: [Garrick Aden-Buie's GitHub](https://github.com/gadenbuie/tidyexplain#spread-and-gather)", "_____no_output_____" ], [ "The below data shows how many courses different instructors taught across different years. If the question you want to answer is something like: \"Does the number of courses taught vary depending on year?\" then the below would probably not be considered tidy because there are multiple observations of courses taught in a year per row (i.e., there is data for 2018, 2019 and 2020 in a single row):", "_____no_output_____" ] ], [ [ "df = pd.DataFrame({\"Name\": [\"Tom\", \"Mike\", \"Tiffany\", \"Varada\", \"Joel\"],\n                   \"2018\": [1, 3, 4, 5, 3],\n                   \"2019\": [2, 4, 3, 2, 1],\n                   \"2020\": [5, 2, 4, 4, 3]})\ndf", "_____no_output_____" ] ], [ [ "Let's make it tidy with `.melt()`. `.melt()` takes a few arguments; the most important is `id_vars`, which indicates which column should be the \"identifier\".", "_____no_output_____" ] ], [ [ "df_melt = df.melt(id_vars=\"Name\",\n                  var_name=\"Year\",\n                  value_name=\"Courses\")\ndf_melt", "_____no_output_____" ] ], [ [ "The `value_vars` argument allows us to select which specific variables we want to \"melt\" (if you don't specify `value_vars`, all non-identifier columns will be used). For example, below I'm omitting the `2018` column:", "_____no_output_____" ] ], [ [ "df.melt(id_vars=\"Name\",\n        value_vars=[\"2019\", \"2020\"],\n        var_name=\"Year\",\n        value_name=\"Courses\")", "_____no_output_____" ] ], [ [ "Sometimes, you want to make long data wide, which we can do with `.pivot()`. When using `.pivot()` we need to specify the `index` to pivot on, and the `columns` that will be used to make the new columns of the wider dataframe:", "_____no_output_____" ] ], [ [ "df_pivot = df_melt.pivot(index=\"Name\",\n                         columns=\"Year\",\n                         values=\"Courses\")\ndf_pivot", "_____no_output_____" ] ], [ [ "You'll notice that Pandas set our specified `index` as the index of the new dataframe and preserved the label of the columns.
We can easily remove these names and reset the index to make our dataframe look like it originally did:", "_____no_output_____" ] ], [ [ "df_pivot = df_pivot.reset_index()\ndf_pivot.columns.name = None\ndf_pivot", "_____no_output_____" ] ], [ [ "`.pivot()` will often get you what you want, but it won't work if you want to:\n- Use multiple indexes (next chapter), or\n- Have duplicate index/column labels\n\nIn these cases you'll have to use `.pivot_table()`. I won't focus on it too much here because I'd rather you learn about `pivot()` first.", "_____no_output_____" ] ], [ [ "df = pd.DataFrame({\"Name\": [\"Tom\", \"Tom\", \"Mike\", \"Mike\"],\n                   \"Department\": [\"CS\", \"STATS\", \"CS\", \"STATS\"],\n                   \"2018\": [1, 2, 3, 1],\n                   \"2019\": [2, 3, 4, 2],\n                   \"2020\": [5, 1, 2, 2]}).melt(id_vars=[\"Name\", \"Department\"], var_name=\"Year\", value_name=\"Courses\")\ndf", "_____no_output_____" ] ], [ [ "In the above case, we have duplicates in `Name`, so `pivot()` won't work. It will throw us a `ValueError: Index contains duplicate entries, cannot reshape`:", "_____no_output_____" ] ], [ [ "df.pivot(index=\"Name\",\n         columns=\"Year\",\n         values=\"Courses\")", "_____no_output_____" ] ], [ [ "In such a case, we'd use `.pivot_table()`. It will apply an aggregation function to our duplicates; in this case, we'll `sum()` them up:", "_____no_output_____" ] ], [ [ "df.pivot_table(index=\"Name\", columns='Year', values='Courses', aggfunc='sum')", "_____no_output_____" ] ], [ [ "If we wanted to keep the numbers per department, we could specify both `Name` and `Department` as multiple indexes:", "_____no_output_____" ] ], [ [ "df.pivot_table(index=[\"Name\", \"Department\"], columns='Year', values='Courses')", "_____no_output_____" ] ], [ [ "The result above is a multi-index or \"hierarchically indexed\" dataframe (more on those next chapter). If you ever have a need to use it, you can read more about `pivot_table()` in the [documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html#pivot-tables).", "_____no_output_____" ], [ "## 4. Working with Multiple DataFrames\n<hr>", "_____no_output_____" ], [ "Often you'll work with multiple dataframes that you want to stick together or merge. `df.merge()` and `pd.concat()` are all you need to know for combining dataframes. The Pandas [documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html) is very helpful for these functions, but they are pretty easy to grasp.\n\n```{note}\nThe example joins shown in this section are inspired by [Chapter 15](https://stat545.com/join-cheatsheet.html) of Jenny Bryan's STAT 545 materials.\n```", "_____no_output_____" ], [ "### Sticking DataFrames Together with `pd.concat()`", "_____no_output_____" ], [ "You can use `pd.concat()` to stick dataframes together:\n- Vertically: if they have the same **columns**, OR\n- Horizontally: if they have the same **rows**", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({'A': [1, 3, 5],\n                    'B': [2, 4, 6]})\ndf2 = pd.DataFrame({'A': [7, 9, 11],\n                    'B': [8, 10, 12]})", "_____no_output_____" ], [ "df1", "_____no_output_____" ], [ "df2", "_____no_output_____" ], [ "pd.concat((df1, df2), axis=0)  # axis=0 sticks the dataframes together vertically (stacking rows, aligned on columns)", "_____no_output_____" ] ], [ [ "Notice that the indexes were simply joined together? This may or may not be what you want. To reset the index, you can specify the argument `ignore_index=True`:", "_____no_output_____" ] ], [ [ "pd.concat((df1, df2), axis=0, ignore_index=True)", "_____no_output_____" ] ], [ [ "
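Relatedly, if you want to keep track of which original frame each row came from, `pd.concat()` accepts a `keys` argument that builds a hierarchical index. A quick sketch (the labels are arbitrary, purely for illustration):\n\n```python\npd.concat((df1, df2), axis=0, keys=['first', 'second'])\n```\n\n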
Use `axis=1` to stick together horizontally:", "_____no_output_____" ] ], [ [ "pd.concat((df1, df2), axis=1, ignore_index=True)", "_____no_output_____" ] ], [ [ "You are not limited to just two dataframes, you can concatenate as many as you want:", "_____no_output_____" ] ], [ [ "pd.concat((df1, df2, df1, df2), axis=0, ignore_index=True)", "_____no_output_____" ] ], [ [ "### Joining DataFrames with `pd.merge()`", "_____no_output_____" ], [ "`pd.merge()` gives you the ability to \"join\" dataframes using different rules (just like with SQL if you're familiar with it). You can use `df.merge()` to join dataframes based on shared `key` columns. Methods include:\n- \"inner join\"\n- \"outer join\"\n- \"left join\"\n- \"right join\"\n\nSee this great [cheat sheet](https://pandas.pydata.org/pandas-docs/stable/getting_started/comparison/comparison_with_sql.html#compare-with-sql-join) and [these great animations](https://github.com/gadenbuie/tidyexplain) for more insights.", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({\"name\": ['Magneto', 'Storm', 'Mystique', 'Batman', 'Joker', 'Catwoman', 'Hellboy'],\n                    'alignment': ['bad', 'good', 'bad', 'good', 'bad', 'bad', 'good'],\n                    'gender': ['male', 'female', 'female', 'male', 'male', 'female', 'male'],\n                    'publisher': ['Marvel', 'Marvel', 'Marvel', 'DC', 'DC', 'DC', 'Dark Horse Comics']})\ndf2 = pd.DataFrame({'publisher': ['DC', 'Marvel', 'Image'],\n                    'year_founded': [1934, 1939, 1992]})", "_____no_output_____" ] ], [ [ "![](img/chapter8/join.png)", "_____no_output_____" ], [ "An \"inner\" join will return all rows of `df1` where matching values for \"publisher\" are found in `df2`:", "_____no_output_____" ] ], [ [ "pd.merge(df1, df2, how=\"inner\", on=\"publisher\")", "_____no_output_____" ] ], [ [ "![](img/chapter8/inner_join.png)", "_____no_output_____" ], [ "An \"outer\" join will return all rows of `df1` and `df2`, placing NaNs where information is unavailable:", "_____no_output_____" ] ], [ [ "pd.merge(df1, df2, how=\"outer\", on=\"publisher\")", "_____no_output_____" ] ], [ [ "![](img/chapter8/outer_join.png)", "_____no_output_____" ], [ "A \"left\" join will return all rows from `df1` and all columns of `df1` and `df2`, populated where matches occur:", "_____no_output_____" ] ], [ [ "pd.merge(df1, df2, how=\"left\", on=\"publisher\")", "_____no_output_____" ] ], [ [ "![](img/chapter8/left_join.png)\n\nA \"right\" join is the mirror image, keeping all rows of `df2`:", "_____no_output_____" ] ], [ [ "pd.merge(df1, df2, how=\"right\", on=\"publisher\")", "_____no_output_____" ] ], [ [ "There are many ways to specify the `key` to join dataframes on; you can join on index values, on differently named columns, etc. Another helpful argument is the `indicator` argument, which will add a column to the result telling you where matches were found in the dataframes:", "_____no_output_____" ] ], [ [ "pd.merge(df1, df2, how=\"outer\", on=\"publisher\", indicator=True)", "_____no_output_____" ] ], [ [ "By the way, you can use `pd.concat()` to do a simple \"inner\" or \"outer\" join on multiple dataframes at once. It's less flexible than merge, but can be useful sometimes.
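\n\nOne more `merge()` pattern worth knowing: the key columns don't need to share a name; `left_on`/`right_on` let you join on differently named columns. A sketch (here I rename `publisher` to a hypothetical `pub` purely for illustration):\n\n```python\ndf1_alt = df1.rename(columns={\"publisher\": \"pub\"})\npd.merge(df1_alt, df2, how=\"inner\", left_on=\"pub\", right_on=\"publisher\")\n```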
", "_____no_output_____" ], [ "## 5. More DataFrame Operations\n<hr>", "_____no_output_____" ], [ "### Applying Custom Functions", "_____no_output_____" ], [ "There will be times when you want to apply a function that is not built-in to Pandas. For this, we also have methods:\n- `df.apply()`, applies a function column-wise or row-wise across a dataframe (the function must be able to accept/return an array)\n- `df.applymap()`, applies a function element-wise (for functions that accept/return single values at a time)\n- `series.apply()`/`series.map()`, same as above but for Pandas series", "_____no_output_____" ], [ "For example, say you want to use a numpy function on a column in your dataframe:", "_____no_output_____" ] ], [ [ "df = pd.read_csv('data/cycling_data.csv')\ndf[['Time', 'Distance']].apply(np.sin)", "_____no_output_____" ] ], [ [ "Or you may want to apply your own custom function:", "_____no_output_____" ] ], [ [ "def seconds_to_hours(x):\n    return x / 3600\n\ndf[['Time']].apply(seconds_to_hours)", "_____no_output_____" ] ], [ [ "This may have been better as a lambda function...", "_____no_output_____" ] ], [ [ "df[['Time']].apply(lambda x: x / 3600)", "_____no_output_____" ] ], [ [ "You can even use functions that require additional arguments. Just specify the arguments in `.apply()`:", "_____no_output_____" ] ], [ [ "def convert_seconds(x, to=\"hours\"):\n    if to == \"hours\":\n        return x / 3600\n    elif to == \"minutes\":\n        return x / 60\n\ndf[['Time']].apply(convert_seconds, to=\"minutes\")", "_____no_output_____" ] ], [ [ "Some functions only accept/return a scalar:", "_____no_output_____" ] ], [ [ "int(3.141)", "_____no_output_____" ], [ "float([3.141, 10.345])", "_____no_output_____" ] ], [ [ "For these, we need `.applymap()`:", "_____no_output_____" ] ], [ [ "df[['Time']].applymap(int)", "_____no_output_____" ] ], [ [ "However, there are often \"vectorized\" versions of common functions like this already available, which are much faster. In the case above, we can use `.astype()` to change the dtype of a whole column quickly:", "_____no_output_____" ] ], [ [ "time_applymap = %timeit -q -o -r 3 df[['Time']].applymap(float)\ntime_builtin = %timeit -q -o -r 3 df[['Time']].astype(float)\nprint(f\"'astype' is {time_applymap.average / time_builtin.average:.2f} times faster than 'applymap'!\")", "'astype' is 1.98 times faster than 'applymap'!\n" ] ], [ [ "### Grouping", "_____no_output_____" ], [ "Often we are interested in examining specific groups in our data. `df.groupby()` allows us to group our data based on one or more variables.", "_____no_output_____" ] ], [ [ "df = pd.read_csv('data/cycling_data.csv')\ndf", "_____no_output_____" ] ], [ [ "Let's group this dataframe on the column `Name`:", "_____no_output_____" ] ], [ [ "dfg = df.groupby(by='Name')\ndfg", "_____no_output_____" ] ], [ [ "What is a `DataFrameGroupBy` object? It contains information about the groups of the dataframe:\n\n![](img/chapter8/groupby_1.png)", "_____no_output_____" ], [ "The groupby object is really just a dictionary of index-mappings, which we could look at if we wanted to:", "_____no_output_____" ] ], [ [ "dfg.groups", "_____no_output_____" ] ], [ [ "
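You can also iterate over a groupby object directly; each pass yields a group name and the corresponding sub-dataframe. A small sketch:\n\n```python\nfor name, group in dfg:\n    print(name, group.shape)\n```\n\n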
We can also access a group using the `.get_group()` method:", "_____no_output_____" ] ], [ [ "dfg.get_group('Afternoon Ride')", "_____no_output_____" ] ], [ [ "The usual thing to do, however, is to apply aggregate functions to the groupby object:\n\n![](img/chapter8/groupby_2.png)", "_____no_output_____" ] ], [ [ "dfg.mean()", "_____no_output_____" ] ], [ [ "We can apply multiple functions using `.aggregate()`:", "_____no_output_____" ] ], [ [ "dfg.aggregate(['mean', 'sum', 'count'])", "_____no_output_____" ] ], [ [ "And even apply different functions to different columns:", "_____no_output_____" ] ], [ [ "def num_range(x):\n    return x.max() - x.min()\n\ndfg.aggregate({\"Time\": ['max', 'min', 'mean', num_range], \n               \"Distance\": ['sum']})", "_____no_output_____" ] ], [ [ "By the way, you can use aggregate for non-grouped dataframes too. This is pretty much what `df.describe()` does under the hood:", "_____no_output_____" ] ], [ [ "df.agg(['mean', 'min', 'count', num_range])", "_____no_output_____" ] ], [ [ "### Dealing with Missing Values", "_____no_output_____" ], [ "Missing values are typically denoted with `NaN`. We can use `df.isnull()` to find missing values in a dataframe. It returns a boolean for each element in the dataframe:", "_____no_output_____" ] ], [ [ "df.isnull()", "_____no_output_____" ] ], [ [ "But it's usually more helpful to get this information by row or by column using the `.any()` or `.info()` method:", "_____no_output_____" ] ], [ [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 33 entries, 0 to 32\nData columns (total 6 columns):\n #   Column    Non-Null Count  Dtype  \n---  ------    --------------  -----  \n 0   Date      33 non-null     object \n 1   Name      33 non-null     object \n 2   Type      33 non-null     object \n 3   Time      33 non-null     int64  \n 4   Distance  31 non-null     float64\n 5   Comments  33 non-null     object \ndtypes: float64(1), int64(1), object(4)\nmemory usage: 1.7+ KB\n" ], [ "df[df.isnull().any(axis=1)]", "_____no_output_____" ] ], [ [ "When you have missing values, you'll usually either drop them or impute them. You can drop missing values with `df.dropna()`:", "_____no_output_____" ] ], [ [ "df.dropna()", "_____no_output_____" ] ], [ [ "Or you can impute (\"fill\") them using `.fillna()`. This method has various options for filling; you can use a fixed value, the mean of the column, the previous non-nan value, etc:", "_____no_output_____" ] ], [ [ "df = pd.DataFrame([[np.nan, 2, np.nan, 0],\n                   [3, 4, np.nan, 1],\n                   [np.nan, np.nan, np.nan, 5],\n                   [np.nan, 3, np.nan, 4]],\n                  columns=list('ABCD'))\ndf", "_____no_output_____" ], [ "df.fillna(0)  # fill with 0", "_____no_output_____" ], [ "df.fillna(df.mean())  # fill with the mean", "_____no_output_____" ], [ "df.fillna(method='bfill')  # backward (upwards) fill from non-nan values", "_____no_output_____" ], [ "df.fillna(method='ffill')  # forward (downward) fill from non-nan values", "_____no_output_____" ] ], [ [ "Finally, sometimes I use visualizations to help identify (patterns in) missing values. One thing I often do is plot a heatmap of my dataframe to get a feel for where my missing values are.
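\n\nBefore reaching for a plot, though, a quick per-column count is often all you need. A one-line sketch:\n\n```python\ndf.isnull().sum()  # number of missing values in each column\n```\n\n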
If you want to run the heatmap code below, you may need to install `seaborn`:\n\n```sh\nconda install seaborn\n```", "_____no_output_____" ] ], [ [ "import seaborn as sns\nsns.set(rc={'figure.figsize':(7, 7)})", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "sns.heatmap(df.isnull(), cmap='viridis', cbar=False);", "_____no_output_____" ], [ "# Generate a larger synthetic dataset for demonstration\nnp.random.seed(2020)\nnpx = np.zeros((100,20))\nmask = np.random.choice([True, False], npx.shape, p=[.1, .9])\nnpx[mask] = np.nan\nsns.heatmap(pd.DataFrame(npx).isnull(), cmap='viridis', cbar=False);", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a486369ee66673718db32731783bb828a6ba838
608,707
ipynb
Jupyter Notebook
notebooks/9-testing-baseline.ipynb
psurya1994/DeepRL
baccdc91a17432ff2f373cfcd8f12aa38d03d67f
[ "MIT" ]
null
null
null
notebooks/9-testing-baseline.ipynb
psurya1994/DeepRL
baccdc91a17432ff2f373cfcd8f12aa38d03d67f
[ "MIT" ]
null
null
null
notebooks/9-testing-baseline.ipynb
psurya1994/DeepRL
baccdc91a17432ff2f373cfcd8f12aa38d03d67f
[ "MIT" ]
1
2021-04-30T08:14:15.000Z
2021-04-30T08:14:15.000Z
997.880328
409,904
0.949467
[ [ [ "import sys,os\nsys.path.append('../')\nfrom deep_rl import *\nimport matplotlib.pyplot as plt\nimport torch\nfrom tqdm.notebook import trange, tqdm\nimport random\nimport numpy as np\nimport time\n%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2", "/home/surya/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/home/surya/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/home/surya/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/home/surya/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/home/surya/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/home/surya/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/home/surya/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n" ], [ "select_device(0)", "_____no_output_____" ], [ "def dqn_feature(hu=676,**kwargs):\n generate_tag(kwargs)\n kwargs.setdefault('log_level', 0)\n config = Config()\n config.merge(kwargs)\n\n config.task_fn = lambda: Task(config.game)\n config.eval_env = config.task_fn()\n\n config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)\n config.network_fn = lambda: VanillaNet(config.action_dim, FCBody(config.state_dim, \n hidden_units=(hu,)))\n # config.network_fn = lambda: DuelingNet(config.action_dim, FCBody(config.state_dim))\n # config.replay_fn = lambda: Replay(memory_size=int(1e4), batch_size=10)\n config.replay_fn = lambda: Replay(memory_size=int(1e4), batch_size=10)\n\n config.random_action_prob = LinearSchedule(1.0, 0.1, 3e4)\n config.discount = 0.99\n config.target_network_update_freq = 200\n config.exploration_steps = 0\n # config.double_q = True\n config.double_q = False\n config.sgd_update_frequency = 4\n config.gradient_clip = 5\n config.eval_interval = int(5e3)\n config.max_steps = 5e4\n config.async_actor = False\n agent = DQNAgent(config)\n #run_steps function below\n config = agent.config\n agent_name = agent.__class__.__name__\n t0 = time.time()\n while True:\n if config.save_interval and not agent.total_steps % config.save_interval:\n agent.save('data/%s-%s-%d' % (agent_name, config.tag, agent.total_steps))\n if config.log_interval and not agent.total_steps % config.log_interval:\n t0 = time.time()\n if config.eval_interval and not agent.total_steps % config.eval_interval:\n agent.eval_episodes()\n pass\n if config.max_steps and agent.total_steps >= config.max_steps:\n return agent\n break\n agent.step()\n agent.switch_task()\n return agent", "_____no_output_____" ], [ "start = time.time()\ngame = 'FourRoomsMatrix'\nagent = dqn_feature(game=game)\nprint(time.time()-start)", "2020-07-05 17:34:29,823 - root - INFO: steps 0, episodic_return_test -200.00(0.00)\n2020-07-05 17:34:42,654 - root - INFO: steps 5000, episodic_return_test -180.30(18.69)\n2020-07-05 17:34:52,035 - root - INFO: steps 10000, episodic_return_test -23.30(18.64)\n2020-07-05 17:35:01,332 - root - INFO: steps 15000, episodic_return_test -83.40(30.11)\n2020-07-05 17:35:13,616 - root - INFO: steps 20000, episodic_return_test -65.10(27.95)\n2020-07-05 17:35:27,603 - root - INFO: steps 25000, episodic_return_test -64.80(28.01)\n2020-07-05 17:35:39,315 - root - INFO: steps 30000, episodic_return_test -7.20(0.87)\n" ], [ "plt.figure(figsize=(18,6))\nplt.plot(np.array(agent.returns)[:,0], np.array(agent.returns)[:,1], '.-')\nplt.xlabel('timesteps'), plt.ylabel('returns')\nplt.title('DQN performance on ' + game), plt.show()", "_____no_output_____" ], [ "print(agent.network)", "VanillaNet(\n (fc_head): Linear(in_features=676, out_features=4, bias=True)\n (body): FCBody(\n (layers): ModuleList(\n (0): Linear(in_features=169, out_features=676, bias=True)\n )\n )\n)\n" ], [ "weights = list(agent.network.parameters())[2]\nbiases = list(agent.network.parameters())[3]", "_____no_output_____" ], [ "weights = weights.detach().cpu().numpy().flatten()\nbiases = biases.detach().cpu().numpy()", "_____no_output_____" ], [ "plt.figure(figsize=(12,4))\nplt.subplot(121), plt.hist(weights, bins=100)\nplt.title('weights'), plt.subplot(122)\nplt.hist(biases, bins=100)\nplt.title('biases'), plt.show()", "_____no_output_____" ], [ "print(weights.shape, biases.shape)\n# random 
shuffling\nnp.random.shuffle(biases)\nnp.random.shuffle(weights)\nweights = np.reshape(weights, (676, 169))\nprint(weights.shape, biases.shape)", "(114244,) (676,)\n(676, 169) (676,)\n" ], [ "\"\"\"\n1. Use these new weights to initialize a network.\n2. Fix these weights and fine tune the following layer.\n3. See learning performance and save plots.\n\"\"\" \n# Step 1\nimport collections\nod_weights = collections.OrderedDict()\nod_weights['layers.0.weight'] = torch.Tensor(weights)\nod_weights['layers.0.bias'] = torch.Tensor(biases)\n\nimport pickle\n# pickle.dump( od_weights, open( \"storage/layer1_noshuffle.p\", \"wb\" ) )\n# od_weights = pickle.load( open( \"save.p\", \"rb\" ) )\n\n# agent.network.load_state_dict(od_weights, strict=False)", "_____no_output_____" ], [ "import pickle\n# pickle.dump( od_weights, open( \"tmp.p\", \"wb\" ) )\nod_weights = pickle.load( open( \"storage/layer1_noshuffle.p\", \"rb\" ) )", "_____no_output_____" ], [ "# Step 2\ndef dsr_feature_init(ref,**kwargs):\n generate_tag(kwargs)\n kwargs.setdefault('log_level', 0)\n config = Config()\n config.merge(kwargs)\n\n config.task_fn = lambda: Task(config.game)\n config.eval_env = config.task_fn()\n config.c = 1\n\n config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)\n config.network_fn = lambda: SRNet(config.action_dim, SRIdentityBody(config.state_dim), config=0)\n config.replay_fn = lambda: Replay(memory_size=int(1e5), batch_size=10)\n\n config.random_action_prob = LinearSchedule(1.0, 0.1, 3e4)\n config.discount = 0.99\n config.target_network_update_freq = 200\n config.exploration_steps = 0\n # config.double_q = True\n config.double_q = False\n config.sgd_update_frequency = 4\n config.gradient_clip = 5\n config.eval_interval = int(5e3)\n config.max_steps = 5e4\n config.async_actor = False\n \n agent = DSRAgent(config)\n #run_steps function below\n config = agent.config\n agent_name = agent.__class__.__name__\n if(ref is not None):\n if(ref == -1):\n print(agent.network.load_state_dict(od_weights, strict=False))\n else:\n print(agent.network.load_state_dict(ref.network.state_dict(), strict=False))\n t0 = time.time()\n while True:\n if config.save_interval and not agent.total_steps % config.save_interval:\n agent.save('data/%s-%s-%d' % (agent_name, config.tag, agent.total_steps))\n if config.log_interval and not agent.total_steps % config.log_interval:\n# agent.logger.info('steps %d, %.2f steps/s' % (agent.total_steps, config.log_interval / (time.time() - t0)))\n t0 = time.time()\n if config.eval_interval and not agent.total_steps % config.eval_interval:\n agent.eval_episodes()\n if config.max_steps and agent.total_steps >= config.max_steps:\n return agent\n break\n# import pdb; pdb.set_trace()\n agent.step()\n agent.switch_task()\n \n return agent", "_____no_output_____" ], [ "def runNAgents(function, runs, store=False, freeze=0, ref=None,hu=676):\n r_dqn = []; t_dqn = []\n if(store):\n agents = []\n for i in range(runs): \n agent = function(game='FourRoomsMatrix', freeze=freeze, ref=ref, hu=hu)\n rewards = np.array(agent.returns)\n t_dqn.append(rewards[:,0])\n r_dqn.append(rewards[:,1])\n if(store):\n agents.append(agent)\n \n if(store):\n return agents, t_dqn, r_dqn\n \n return t_dqn, r_dqn", "_____no_output_____" ], [ "# r_shuffle = runNAgents(dsr_feature_init, runs=3, freeze=2, ref=-1)\n# r_main_676 = runNAgents(dsr_feature_init, runs=3, freeze=0, ref=None)\nr_main_16 = runNAgents(dqn_feature, runs=3, freeze=0, ref=None, hu=16)", "2020-07-05 18:16:56,697 - root - INFO: steps 0, 
episodic_return_test -200.00(0.00)\n2020-07-05 18:17:00,837 - root - INFO: steps 5000, episodic_return_test -180.00(18.97)\n2020-07-05 18:17:05,844 - root - INFO: steps 10000, episodic_return_test -121.10(30.56)\n2020-07-05 18:17:11,287 - root - INFO: steps 15000, episodic_return_test -101.50(31.15)\n2020-07-05 18:17:16,792 - root - INFO: steps 20000, episodic_return_test -141.70(28.17)\n2020-07-05 18:17:21,970 - root - INFO: steps 25000, episodic_return_test -101.20(31.25)\n2020-07-05 18:17:27,122 - root - INFO: steps 30000, episodic_return_test -82.80(30.27)\n2020-07-05 18:17:32,849 - root - INFO: steps 35000, episodic_return_test -46.30(24.33)\n2020-07-05 18:17:39,211 - root - INFO: steps 40000, episodic_return_test -6.10(0.98)\n2020-07-05 18:17:43,751 - root - INFO: steps 45000, episodic_return_test -7.20(1.08)\n2020-07-05 18:17:49,103 - root - INFO: steps 50000, episodic_return_test -9.50(1.70)\n2020-07-05 18:17:50,041 - root - INFO: steps 0, episodic_return_test -200.00(0.00)\n2020-07-05 18:17:58,115 - root - INFO: steps 5000, episodic_return_test -160.60(24.92)\n2020-07-05 18:18:03,374 - root - INFO: steps 10000, episodic_return_test -140.70(28.65)\n2020-07-05 18:18:08,508 - root - INFO: steps 15000, episodic_return_test -121.40(30.45)\n2020-07-05 18:18:17,353 - root - INFO: steps 20000, episodic_return_test -82.60(30.32)\n2020-07-05 18:18:26,036 - root - INFO: steps 25000, episodic_return_test -83.00(30.22)\n2020-07-05 18:18:32,906 - root - INFO: steps 30000, episodic_return_test -161.00(24.67)\n2020-07-05 18:18:39,496 - root - INFO: steps 35000, episodic_return_test -24.60(18.51)\n2020-07-05 18:18:46,456 - root - INFO: steps 40000, episodic_return_test -44.60(24.58)\n2020-07-05 18:18:51,302 - root - INFO: steps 45000, episodic_return_test -4.60(1.08)\n2020-07-05 18:18:57,130 - root - INFO: steps 50000, episodic_return_test -28.60(18.13)\n2020-07-05 18:18:57,949 - root - INFO: steps 0, episodic_return_test -180.30(18.69)\n2020-07-05 18:19:08,695 - root - INFO: steps 5000, episodic_return_test -200.00(0.00)\n2020-07-05 18:19:17,022 - root - INFO: steps 10000, episodic_return_test -121.40(30.44)\n2020-07-05 18:19:23,321 - root - INFO: steps 15000, episodic_return_test -141.10(28.45)\n2020-07-05 18:19:28,209 - root - INFO: steps 20000, episodic_return_test -81.10(30.70)\n2020-07-05 18:19:33,291 - root - INFO: steps 25000, episodic_return_test -102.20(30.93)\n2020-07-05 18:19:39,147 - root - INFO: steps 30000, episodic_return_test -8.30(1.33)\n2020-07-05 18:19:44,567 - root - INFO: steps 35000, episodic_return_test -142.20(27.93)\n2020-07-05 18:19:49,872 - root - INFO: steps 40000, episodic_return_test -8.30(1.45)\n2020-07-05 18:19:55,193 - root - INFO: steps 45000, episodic_return_test -3.30(0.83)\n2020-07-05 18:20:00,790 - root - INFO: steps 50000, episodic_return_test -8.40(1.02)\n" ], [ "def plot_rewards(rewards, plot_seperate=True , clip=50000, title='unnamed'):\n smooth = 5000\n \n colors = ['red', 'blue', 'green', 'm', 'k', 'y', '#999999']\n \n plt.figure(figsize=(18,6), dpi=200)\n if(plot_seperate):\n for k, v in rewards.items():\n for t, r in zip(v[0], v[1]):\n plt.plot(t, r, label=k)\n plt.legend(), plt.show()\n return\n \n for j, (k, v) in enumerate(rewards.items()):\n r_vec = np.zeros((len(v[0]), clip-smooth+1))\n for i, (t, r) in enumerate(zip(v[0], v[1])):\n r_vec[i,:] = convolve(np.interp(np.arange(clip), t, r), smooth)\n \n mean = np.mean(np.array(r_vec), axis=0)\n std = np.std(np.array(r_vec), axis=0)\n plt.plot(mean, label=k, color=colors[j])\n 
plt.fill_between(np.arange(0, len(mean)), mean+std, mean-std, facecolor=colors[j], alpha=0.3)\n \n plt.xlabel('timesteps'), plt.ylabel('episodic returns')\n plt.title(title)\n plt.legend(loc='lower right'), plt.show()", "_____no_output_____" ], [ "rewards_dict = {\n 'DQN h=(676,) - 2708 parameters': r_shuffle,\n 'DQN h=(676,) - 117628 parameters': r_main_676,\n 'DQN h=(16,) - 2708 parameters': r_main_16\n }\n# rewards_dict = {'avDSR, 1eps: 169 learnable params':r_dsr_rand,\n# 'avDSR, 1eps: 2708 learnable params':r_dsr_abs_rand[1:],\n# 'DQN, h=(676,): 117628 learnable params': r_dqn_base,\n# 'DQN, h=(16,): 2788 learnable params': r_dqn_base2[1:]}\n# plot_rewards(rewards_dict, plot_seperate=True)\nplot_rewards(rewards_dict, plot_seperate=False, title='3 runs on 3roomsh env')", "_____no_output_____" ], [ "%pwd", "_____no_output_____" ], [ "import pickle\n\nwith open(\"../storage/33-3rooms-baselines.p\", 'wb') as f:\n pickle.dump(rewards_dict, f, pickle.HIGHEST_PROTOCOL)\n# rewards_dict = pickle.load( open( \"storage/33-3rooms-baselines.p\", \"rb\" ) )", "_____no_output_____" ], [ "from deep_rl.component.fourrooms import * # CHECK\nenv = FourRoomsMatrix()\nenv.reset()\nenv.reset()\nplt.imshow(env.render())", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4873616a2de261b774c77168803eb6c497d1f7
173,397
ipynb
Jupyter Notebook
BikeShare Jupyter.ipynb
DrAfifi/Bikeshare-Project
5e0b852da1bc89f49a14ba382d37962906f7925e
[ "Apache-2.0" ]
null
null
null
BikeShare Jupyter.ipynb
DrAfifi/Bikeshare-Project
5e0b852da1bc89f49a14ba382d37962906f7925e
[ "Apache-2.0" ]
null
null
null
BikeShare Jupyter.ipynb
DrAfifi/Bikeshare-Project
5e0b852da1bc89f49a14ba382d37962906f7925e
[ "Apache-2.0" ]
null
null
null
168.674125
53,844
0.852927
[ [ [ "import time\nimport pandas as pd\nimport numpy as np\n\ncity_con = {\n 1: [\"chicago.csv\",\"Chicago\"],\n 2: [\"new_york_city.csv\",\"New York City\"],\n 3: [\"washington.csv\",\"Washington\"],\n 4:[0,\"Exit\"],\n \"NS\":[0,\"Not selected\"]\n}\n\n\nfltr_choice = {\n 1:\"Month\",\n 2:\"Day\",\n 3:\"Show all data\",\n 4:\"Exit\",\n \"NS\":\"Not selected\"\n}\n\n\ndays_con = {\n 0: \"Saturday\",\n 1: \"Sunday\",\n 2: \"Monday\",\n 3: \"Tuesday\",\n 4: \"Wednesday\",\n 5: \"Thursday\",\n 6: \"Friday\",\n \"NS\": \"Not selected\"\n}\n\n\nmonths_con = {\n 1: [\"January\",31],\n 2: [\"February\",28],\n 3: [\"March\",31],\n 4: [\"April\",30],\n 5: [\"May\",31],\n 6: [\"June\",30],\n \"NS\":[\"Not selected\",0]\n}", "_____no_output_____" ], [ "def get_filters():\n city=dy=fltr=mon=\"NS\"\n print('Hello! Let\\'s explore some US bikeshare data!')\n while True:\n city = int(input(\"Would you like to see data for Chicago Enter 1, New York City Enter 2, Washington Enter 3 ,or 4 to exit:: \"))\n if city==1 or city==2 or city==3:\n print(\"You have entered \", city_con[city][1])\n break\n elif city == 4:\n print(\"You choosed to exit\")\n break\n else:\n print(\"You have entered a wrong number, Kindly try again\")\n\n while True:\n if city==4:\n break\n fltr = int(input(\"Would you like to filter the data by month Enter 1, day Enter 2, not at all Enter 3 or 4 to exit:: \"))\n if fltr==1 or fltr==2 or fltr==3 :\n print(\"You have choosed:\", fltr_choice[fltr])\n if fltr==1:\n dy=\"NS\"\n elif fltr==2:\n mon=\"NS\"\n else:\n dy=mon=\"NS\"\n break\n elif fltr == 4:\n print(\"You choosed to exit\")\n break\n else:\n print(\"You have entered a wrong number, Kindly try again\")\n\n while True:\n if city==4 or fltr==3 or fltr==4:\n break\n elif fltr==1:\n mon = int(input(\"January Enter 1, February Enter 2, March Enter 3, April Enter 4, May Enter 5, June Enter 6 :: \")) \n break\n elif fltr==2:\n dy = int(input(\"Saturday Enter 0, Sunday Enter 1, Monday Enter 2, Tuesday Enter 3, Wednesday Enter 4, Thursday Enter 5, FridayEnter 6 :: \"))\n break\n else:\n print(\"You have entered a wrong number, Kindly try again\")\n \n \n \n print(\"\\n\")\n print('-'*40)\n return city, fltr, mon, dy\n \n", "_____no_output_____" ], [ "def day_to_letter(month,day):\n if month>1:\n for i in range(1,month):\n day=day+months_con[i][1]\n return days_con[day%7]", "_____no_output_____" ], [ "def load_data(city, fltr, mon, dy):\n if city==4 or fltr==4:\n return 0\n else:\n df = pd.read_csv(city_con[city][0])\n df['Start Time'] = pd.to_datetime(df['Start Time']) # convert the Start Time column to datetime\n df['year'] = df['Start Time'].dt.year\n df['month'] = df['Start Time'].dt.month\n df['day'] = df['Start Time'].dt.day\n df['hour'] = df['Start Time'].dt.hour\n df[\"dayletter\"] = df[[\"month\",\"day\"]].apply(lambda x: day_to_letter(*x),axis=1)\n df[\"trip\"]= \"From \" + df[\"Start Station\"] + \" to \" + df[\"End Station\"]\n \n if fltr == 1: # filter by day of week if applicable\n df = df[df['month'] == mon] # filter by day of week to create the new dataframe\n if fltr == 2: # filter by day of week if applicable\n df = df[df['dayletter'] == days_con[dy]] # filter by day of week to create the new dataframe\n\n return df", "_____no_output_____" ], [ "def time_stats(df,fltr):\n if city==4 or fltr==4:\n return 0\n else:\n if fltr != 1:\n print(\"*) The most common month is\",months_con[df[\"month\"].value_counts().index[0]][0], \", with a total bike riders of\",df[\"month\"].value_counts().iloc[0],\".\\n\")\n elif fltr != 
2:\n            print(\"*) The most common day of the week is\",df[\"dayletter\"].value_counts().index[0], \", with a total bike riders of\",df[\"dayletter\"].value_counts().iloc[0],\".\\n\")\n\n        print(\"*) The most common start hour is\",df[\"hour\"].value_counts().index[0], \":00 , with a total bike riders of\",df[\"hour\"].value_counts().iloc[0],\".\\n\")\n        print('-'*40)\n", "_____no_output_____" ], [ "def station_stats(df):\n    if city==4 or fltr==4:\n        return 0\n    else:\n        print(\"*) The most common start station is \",df['Start Station'].value_counts().index[0],\" and there are \",df['Start Station'].value_counts()[0], \" bike riders started out of there.\\n\")\n        print(\"*) The first five start stations (sorted by the total number of users) are: \", df['Start Station'].value_counts().index[0:5], \"\\n\")\n        print(\"*) The most common end station is \",df['End Station'].value_counts().index[0],\" and there are \",df['End Station'].value_counts()[0], \" bike riders ended there.\\n\")\n        print(\"*) The first five end stations (sorted by the total number of users) are: \", df['End Station'].value_counts().index[0:5], \"\\n\")\n        print(\"\\n*) The most common route (same start and end stations) is \", df[\"trip\"].value_counts().index[0],\" and there are \",df['trip'].value_counts()[0], \" riders who used it.\\n\")\n        print('-'*40)\n", "_____no_output_____" ], [ "def trip_duration_stats(df):\n    if city==4 or fltr==4:\n        return 0\n    else:\n        print(\"*) The smallest trip duration is \",df[\"Trip Duration\"].min(),\" seconds, for the trip \",df.loc[df[\"Trip Duration\"] == (df[\"Trip Duration\"].min()),\"trip\"].iloc[0], \".\\n\")\n        print(\"*) The longest trip duration is \",df[\"Trip Duration\"].max(),\" seconds, for the trip \",df.loc[df[\"Trip Duration\"] == (df[\"Trip Duration\"].max()),\"trip\"].iloc[0], \".\\n\")\n        print(\"*) The trip duration first quartile = \", df[\"Trip Duration\"].quantile(.25) ,\"\\n   The trip duration second quartile = \", df[\"Trip Duration\"].quantile(),\"\\n   The trip duration third quartile = \", df[\"Trip Duration\"].quantile(.75),\"\\n\")\n        print('-'*40)\n\n", "_____no_output_____" ], [ "def user_stats(df):\n    if city==4 or fltr==4:\n        return 0\n    else:\n        v0=df[\"User Type\"].value_counts()[0]\n        v1=df[\"User Type\"].value_counts()[1]\n        v00=((v0/(v0+v1))*100).round(2)\n        v11=((v1/(v0+v1))*100).round(2)\n        print(\"*) The \",df[\"User Type\"].value_counts().index[0],\" bike riders are \",df[\"User Type\"].value_counts()[0], \", and they are \", v00, \" % of all population\\n\")\n        print(\"*) The \",df[\"User Type\"].value_counts().index[1],\" bike riders are \",df[\"User Type\"].value_counts()[1], \", and they are \", v11, \" % of all population\\n\")\n        if city != 3:\n            # Washington (city 3) has no Gender or Birth Year columns, so skip these stats\n            g0=df[\"Gender\"].value_counts()[0]\n            g1=df[\"Gender\"].value_counts()[1]\n            g00=((g0/(g0+g1))*100).round(2)\n            g11=((g1/(g0+g1))*100).round(2)\n            print(\"*) The \",df[\"Gender\"].value_counts().index[0],\" bike riders are \",df[\"Gender\"].value_counts()[0], \", and they are \", g00, \" % of all population\\n\")\n            print(\"*) The \",df[\"Gender\"].value_counts().index[1],\" bike riders are \",df[\"Gender\"].value_counts()[1], \", and they are \", g11, \" % of all population\\n\")\n\n            print(\"*) The youngest bike rider is \", 2021-df[\"Birth Year\"].max(), \" years old.\\n\")\n            print(\"*) The oldest bike rider is \", 2021-df[\"Birth Year\"].min(), \" years old.\\n\")\n\n        print('-'*40)\n", "_____no_output_____" ], [ "def main():\n    while True:\n        city, fltr, mon, dy = get_filters()\n        df = load_data(city, fltr, mon, dy)\n        
time_stats(df,fltr)\n station_stats(df)\n trip_duration_stats(df)\n user_stats(df)\n\n restart = input('\\nWould you like to restart? Enter yes or no.\\n')\n if restart.lower() != 'yes':\n print(\"Thank you!\")\n break\n\n\nif __name__ == \"__main__\":\n\tmain()", "Hello! Let's explore some US bikeshare data!\nWould you like to see data for Chicago Enter 1, New York City Enter 2, Washington Enter 3 ,or 4 to exit:: 1\nYou have entered Chicago\nWould you like to filter the data by month Enter 1, day Enter 2, not at all Enter 3 or 4 to exit:: 3\nYou have choosed: Show all data\n\n\n----------------------------------------\n*) The most common month is June , with a total bike riders of 98081 .\n\n*) The most common start hour is 17 :00 , with a total bike riders of 35992 .\n\n----------------------------------------\n*) The most common start station is Streeter Dr & Grand Ave and there are 6911 bike riders started out of there.\n\n*) The first five start stations (sorted by the total number of users) are: Index(['Streeter Dr & Grand Ave', 'Clinton St & Washington Blvd',\n 'Lake Shore Dr & Monroe St', 'Clinton St & Madison St',\n 'Canal St & Adams St'],\n dtype='object') \n\n*) The most common end station is Streeter Dr & Grand Ave and there are 7512 bike riders ended there.\n\n*) The first five start stations (sorted by the total number of users) are: Index(['Streeter Dr & Grand Ave', 'Clinton St & Washington Blvd',\n 'Lake Shore Dr & Monroe St', 'Clinton St & Madison St',\n 'Lake Shore Dr & North Blvd'],\n dtype='object') \n\n\n*) The most common rout (same start and end stations) is From Lake Shore Dr & Monroe St to Streeter Dr & Grand Ave and there are 854 used it.\n\n----------------------------------------\n*) The smallest trip duration is 60 seconds, for the trip From State St & 19th St to State St & 19th St .\n\n*) The longest trip duration is 86224 seconds, for the trip From Central Park Ave & Ogden Ave to Central Park Ave & Ogden Ave .\n\n*) The trip duration first quartile = 393.0 \n The trip duration second quartile = 670.0 \n The trip duration third quartile = 1125.0 \n\n----------------------------------------\n*) The Subscriber bike riders are 238889 , and they are 79.63 % of all population\n\n*) The Customer bike riders are 61110 , and they are 20.37 % of all population\n\n*) The Male bike riders are 181190 , and they are 75.83 % of all population\n\n*) The Female bike riders are 57758 , and they are 24.17 % of all population\n\n*) The youngest bike rider is 5.0 years old.\n\n*) The oldest bike rider is 122.0 years old.\n\n----------------------------------------\n" ], [ "import pandas as pd\ndf = pd.read_csv(city_con[city][0])\nprint(df.head()) # start by viewing the first few rows of the dataset!", " Unnamed: 0 Start Time End Time Trip Duration \\\n0 1423854 2017-06-23 15:09:32 2017-06-23 15:14:53 321 \n1 955915 2017-05-25 18:19:03 2017-05-25 18:45:53 1610 \n2 9031 2017-01-04 08:27:49 2017-01-04 08:34:45 416 \n3 304487 2017-03-06 13:49:38 2017-03-06 13:55:28 350 \n4 45207 2017-01-17 14:53:07 2017-01-17 15:02:01 534 \n\n Start Station End Station User Type \\\n0 Wood St & Hubbard St Damen Ave & Chicago Ave Subscriber \n1 Theater on the Lake Sheffield Ave & Waveland Ave Subscriber \n2 May St & Taylor St Wood St & Taylor St Subscriber \n3 Christiana Ave & Lawrence Ave St. 
Louis Ave & Balmoral Ave Subscriber \n4 Clark St & Randolph St Desplaines St & Jackson Blvd Subscriber \n\n Gender Birth Year \n0 Male 1992.0 \n1 Female 1992.0 \n2 Male 1981.0 \n3 Male 1986.0 \n4 Male 1975.0 \n" ], [ "df['Start Time'] = pd.to_datetime(df['Start Time']) # convert the Start Time column to datetime\ndf['year'] = df['Start Time'].dt.year\ndf['month'] = df['Start Time'].dt.month\ndf['day'] = df['Start Time'].dt.day\ndf['hour'] = df['Start Time'].dt.hour\ndf['min'] = df['Start Time'].dt.minute\ndf['sec'] = df['Start Time'].dt.second\ndf[\"dayletter\"] = df[[\"month\",\"day\"]].apply(lambda x: day_to_letter(*x),axis=1)\n", "_____no_output_____" ], [ "x=df[\"dayletter\"].value_counts()\n\nimport matplotlib.pyplot as plt\n%matplotlib inline \n#only with jupyter to plot inline\n\nx.plot(kind=\"bar\",figsize=(14,8), color='red', edgecolor='black')\nplt.xlabel(x.name)\nplt.ylabel(\"No. of rider\")\nplt.title(\"%ss distribution\" % x.name)\nplt.show()\n", "_____no_output_____" ], [ "df['Start Time'] = pd.to_datetime(df['Start Time']) # convert the Start Time column to datetime\ndf['year'] = df['Start Time'].dt.year\ndf['month'] = df['Start Time'].dt.month\ndf['day'] = df['Start Time'].dt.day\ndf['hour'] = df['Start Time'].dt.hour\ndf['min'] = df['Start Time'].dt.minute\ndf['sec'] = df['Start Time'].dt.second\n\n\nmonth = int(input(\"please enter a month: \"))\nday = int(input(\"please enter a day: \"))\n\ndef day_to_letter(month,day):\n if month>1:\n for i in range(1,month):\n day=day+months_con[i][1]\n return days_con[day%7]\n\nday_to_letter(month,day)", "please enter a month: 5\nplease enter a day: 3\n" ], [ "df['Start Time'] = pd.to_datetime(df['Start Time']) # convert the Start Time column to datetime\ndf['year'] = df['Start Time'].dt.year\ndf['month'] = df['Start Time'].dt.month\ndf['day'] = df['Start Time'].dt.day\ndf['hour'] = df['Start Time'].dt.hour\ndf['min'] = df['Start Time'].dt.minute\ndf['sec'] = df['Start Time'].dt.second\n\nx= df['month']\n\n\nimport matplotlib.pyplot as plt\n%matplotlib inline \n#only with jupyter to plot inline\n\nx.hist(bins=20, figsize=(8,5), align='left', color='red', edgecolor='black')\nplt.xlabel(x.name)\nplt.ylabel(\"No. of rider\")\nplt.title(\"%ss distribution\" % x.name)\nplt.show()\nprint(x.describe())", "_____no_output_____" ], [ "x= df['Start Station'].value_counts().iloc[0:20] # the maximum 10 stations\n\nimport matplotlib.pyplot as plt\n%matplotlib inline \n#only with jupyter to plot inline\n\nx.plot(kind=\"bar\",figsize=(14,8), color='red', edgecolor='black')\nplt.xlabel(x.name)\nplt.ylabel(\"No. of rider\")\nplt.title(\"%ss distribution\" % x.name)\nplt.show()\n\n\n", "_____no_output_____" ], [ "df[\"trip\"]= \"From \" + df[\"Start Station\"] + \" to \" + df[\"End Station\"]\n#print(df[\"trip\"].mode())\nx = df[\"trip\"].value_counts().iloc[0:10] # the maximum 10 same trips\n\nimport matplotlib.pyplot as plt\n%matplotlib inline \n#only with jupyter to plot inline\n\nx.plot(kind=\"bar\",figsize=(14,8), color='red', edgecolor='black')\nplt.xlabel(x.name)\nplt.ylabel(\"No. 
of rider\")\nplt.title(\"%ss distribution\" % x.name)\nplt.show()\n\n\n\n\n", "_____no_output_____" ], [ "print(df.shape[0])\nprint(df[\"Start Station\"].unique().shape[0])\nprint(df[\"Start Station\"].unique()[0])\n\n\nprint(df.shape[0])\nprint(df[\"End Station\"].unique().shape[0])\nprint(df[\"End Station\"].unique()[0])", "300000\n568\nWood St & Hubbard St\n300000\n572\nDamen Ave & Chicago Ave\n" ], [ "y= df[df['month']==1] # for the month number 1\nprint(y.shape[0]) # number of rider at this month\n", "(21809, 17)\n" ], [ "import pandas as pd\n\nfilename = 'chicago.csv'\ndf = pd.read_csv(filename) # load data file into a dataframe\ndf['Start Time'] = pd.to_datetime(df['Start Time']) # convert the Start Time column to datetime\ndf['hour'] = df['Start Time'].dt.hour # extract hour from the Start Time column to create an hour column\npopular_hour = df['hour'].mode()[0] # find the most popular hour\nprint('Most Popular Start Hour:', popular_hour)\n", "Most Popular Start Hour: 17\n" ], [ "import pandas as pd\n\nfilename = 'chicago.csv'\ndf = pd.read_csv(filename) # load data file into a dataframe\nuser_types = df['User Type'].value_counts() # print value counts for each user type\nprint(user_types)", "Subscriber 238889\nCustomer 61110\nDependent 1\nName: User Type, dtype: int64\n" ], [ "import pandas as pd\n\nCITY_DATA = { 'chicago': 'chicago.csv',\n 'new york city': 'new_york_city.csv',\n 'washington': 'washington.csv' }\n\ndef load_data(city, month, day):\n \"\"\"\n Loads data for the specified city and filters by month and day if applicable.\n\n Args:\n (str) city - name of the city to analyze\n (str) month - name of the month to filter by, or \"all\" to apply no month filter\n (str) day - name of the day of week to filter by, or \"all\" to apply no day filter\n Returns:\n df - Pandas DataFrame containing city data filtered by month and day\n \"\"\"\n\n\n df = pd.read_csv(CITY_DATA[city]) # load data file into a dataframe\n df['Start Time'] = pd.to_datetime(df['Start Time']) # convert the Start Time column to datetime\n df['month'] = df['Start Time'].dt.month # extract month and day of week from Start Time to create new columns\n df['day_of_week'] = df['Start Time'].dt.weekday_name\n \n if month != 'all': # filter by month if applicable\n months = ['january', 'february', 'march', 'april', 'may', 'june'] # use the index of the months list to get the corresponding int\n month = months.index(month) + 1\n df = df[df['month'] == month] # filter by month to create the new dataframe\n\n if day != 'all': # filter by day of week if applicable\n df = df[df['day_of_week'] == day.title()] # filter by day of week to create the new dataframe\n\n return df", "_____no_output_____" ], [ "time_stats(df)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a488004de5f449403aad66b69eacabb0dc327f7
124,568
ipynb
Jupyter Notebook
vogado_vs_proposed.ipynb
elybin/hudaya-2020-all-segmentation
6c66756d6f39f1993ea57fabfb9972d91bb42522
[ "MIT" ]
null
null
null
vogado_vs_proposed.ipynb
elybin/hudaya-2020-all-segmentation
6c66756d6f39f1993ea57fabfb9972d91bb42522
[ "MIT" ]
null
null
null
vogado_vs_proposed.ipynb
elybin/hudaya-2020-all-segmentation
6c66756d6f39f1993ea57fabfb9972d91bb42522
[ "MIT" ]
null
null
null
146.723204
85,662
0.832911
[ [ [ "# installing required module \n!pip install fuzzy-c-means", "Requirement already satisfied: fuzzy-c-means in /usr/local/lib/python3.6/dist-packages (0.0.6)\nRequirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.6/dist-packages (from fuzzy-c-means) (1.18.5)\nRequirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from fuzzy-c-means) (1.4.1)\n" ], [ "import cv2\nimport numpy as np\nimport math\nimport bisect\nfrom google.colab.patches import cv2_imshow\nfrom skimage import morphology\nfrom sklearn.cluster import KMeans\nfrom fcmeans import FCM\n\ndef imadjust(src, tol=1, vin=[0,255], vout=(0,255)):\n # src : input one-layer image (numpy array)\n # tol : tolerance, from 0 to 100.\n # vin : src image bounds\n # vout : dst image bounds\n # return : output img\n\n dst = src.copy()\n tol = max(0, min(100, tol))\n\n if tol > 0:\n # Compute in and out limits\n # Histogram\n hist = np.zeros(256, dtype=np.int)\n for r in range(src.shape[0]):\n for c in range(src.shape[1]):\n hist[src[r,c]] += 1\n # Cumulative histogram\n cum = hist.copy()\n for i in range(1, len(hist)):\n cum[i] = cum[i - 1] + hist[i]\n\n # Compute bounds\n total = src.shape[0] * src.shape[1]\n low_bound = total * tol / 100\n upp_bound = total * (100 - tol) / 100\n vin[0] = bisect.bisect_left(cum, low_bound)\n vin[1] = bisect.bisect_left(cum, upp_bound)\n\n # Stretching\n scale = (vout[1] - vout[0]) / (vin[1] - vin[0])\n for r in range(dst.shape[0]):\n for c in range(dst.shape[1]):\n vs = max(src[r,c] - vin[0], 0)\n vd = min(int(vs * scale + 0.5) + vout[0], vout[1])\n dst[r,c] = vd\n return dst\n\ndef matlab_fspecial(typex = \"motion\", len = 9, theta = 0):\n # h = fspecial('motion',len,theta)\n if typex == 'motion':\n # Create the vertical kernel. \n kernel_v = np.zeros((len, theta)) \n\n # Fill the middle row with ones. \n kernel_v[:, int((kernel_size - 1)/2)] = np.ones(kernel_size) \n\n # Normalize. \n kernel_v /= kernel_size \n\n # Apply the vertical kernel. 
\n motion_blur = cv2.filter2D(img, -1, kernel_v) \n return motion_blur\n\n# equal to mat2gray on matlab \n# https://stackoverflow.com/questions/39808545/implement-mat2gray-in-opencv-with-python\ndef matlab_mat2gray(A, alpha = False, beta = False):\n if not alpha:\n alpha = min(A.flatten())\n else:\n alpha = 0\n \n if not beta:\n beta = max(A.flatten())\n else:\n beta = 255\n\n I = A\n cv2.normalize(A, I, alpha , beta ,cv2.NORM_MINMAX)\n I = np.uint8(I)\n return I\n\ndef matlab_strel_disk(r1):\n from skimage.morphology import disk\n\n mask = disk(r1)\n return mask\n\ndef matlab_strel_ball(r1,r2):\n from skimage.morphology import (octagon)\n\n mask = octagon(r1,r2)\n return mask", "_____no_output_____" ], [ "# function to resize image \ndef resize_img(file, size = 200):\n nrows = (np.shape(file)[0]) # image height \n ncols = (np.shape(file)[1]) \n ratio = nrows/ncols\n t_width = size\n t_height = math.ceil(t_width * ratio)\n return cv2.resize(file, (t_width, t_height))\n\n# get filename from path\ndef getfilename(path, ext = False):\n import ntpath\n import os\n if ext:\n return ntpath.basename(path)\n else:\n return os.path.splitext(ntpath.basename(path))[0]\n\ndef scanFolder(path = './', max_file_each_folder = \"all\", verbose = False):\n import os.path # untuk cek file\n \n files_path = []\n in_folder = []\n total_files = str(sum([len(f2)-len(d2) for r2, d2, f2 in os.walk(path)]))\n \n # r=root, d=directories, f = files\n for r, d, f in os.walk(path):\n # cut f\n if max_file_each_folder != \"all\":\n f = f[:int(max_file_each_folder)]\n \n for file in f:\n if (file.find('.png') > -1 or file.find('.jpg') > -1 or file.find('.tif') > -1):\n files_path.append(os.path.join(r, file))\n in_folder.append(os.path.basename(os.path.dirname(files_path[-1]))) \n \n if verbose:\n for f, i in zip(files_path, in_folder):\n print(f)\n print(i)\n\n # ret\n file_scanned = str(len(files_path))\n\n return files_path, in_folder, total_files, file_scanned\n # // END OF scanFolder", "_____no_output_____" ], [ "def performance_evaluation(ground_truth, segmented_img): \n verbose_mode = False #debug\n\n # performance evaluation\n nrows = (np.shape(ground_truth)[0]) # image height \n ncols = (np.shape(ground_truth)[1]) \n\n # try again\n iGT = ground_truth\n iPM = segmented_img\n iTP = cv2.bitwise_and(iGT, iPM)\n iFN = np.subtract(iGT,iTP)\n iFP = np.subtract(iPM,iTP)\n iTN = cv2.bitwise_not(cv2.bitwise_or(iGT, iPM))\n\n # sum \n FN = np.sum(iFN)/255\n FP = np.sum(iFP)/255\n TP = np.sum(iTP)/255\n TN = np.sum(iTN)/255\n\n # hasil lebih detail\n acc = (TP+TN)/(TP+TN+FP+FN) \n spec = TN/(TN+FP)\n prec = TP/(TP+FP)\n recall = TP/(TP+FN)\n j = (((TP+TN)/(TP+TN+FP+FN))-((((TP+FN)*(TP+FP))+((TN+FN)*(TN+FP)))/(TP+TN+FP+FN)**2))/(1-((((TP+FN)*(TP+FP))+((TN+FN)*(TN+FP)))/(TP+TN+FP+FN)**2))\n dc = (2*TP)/(2*(TP+FP+FN))\n\n # performance evaluation\n import pandas as pd\n from IPython.display import display, HTML\n\n df = pd.DataFrame(\n {\n \"Pred. (1)\": [str(TP) + \" (TP)\", str(FP) + \" (FP)\"],\n \"Pred. (0)\": [str(FN) + \" (FN)\", str(TN) + \" (TN)\"]\n },index=[\"Actu. (1)\", \"Actu. 
(0)\"])\n display(HTML(df.to_html()))\n df = pd.DataFrame(\n {\n \"Accuracy (A)\": [acc],\n \"Specificity (S)\": [spec],\n \"Precision (P)\": [prec],\n \"Recall (R)\": [recall],\n \"Kappa Index (J)\": [j],\n \"Dice coefficiet (DC),\": [dc],\n })\n display(HTML(df.to_html()))\n\n # showing image \n if verbose_mode:\n # show\n print(\"(a) FN = Ground Truth\")\n cv2_imshow(resize_img(iFN,500))\n print(\"(b) FP = Segmented Image\")\n cv2_imshow(resize_img(iFP,500))\n print(\"(c) TP = Correct Region\")\n cv2_imshow(resize_img(iTP,500))\n print(\"(d) TN = True Negative\")\n cv2_imshow(resize_img(iTN,500))\n return df", "_____no_output_____" ], [ "# original script of vogado's segmentation method\n# clustering using k-means\ndef wbc_vogado(f, debug_mode = False):\n image_lab = int(0)\n image_rgb = f # send into figure (a)]\n\n # time measurement \n import time\n start_time = time.time()\n\n # pre-processing step, convert rgb into CIELAB (L*a*b)\n image_lab = cv2.cvtColor(image_rgb, cv2.COLOR_BGR2Lab);\n L = image_lab[:,:,0]\n A = image_lab[:,:,1]\n B = image_lab[:,:,2] # the bcomponent\n lab_y = B # send into figure (c)\n\n AD = cv2.add(L,B)\n\n # f (bgr)\n r = f[:,:,2] # red channel (rgb)\n r = imadjust(r)\n g = f[:,:,1]\n g = imadjust(g)\n b = f[:,:,0]\n b = imadjust(b)\n c = np.subtract(255,r)\n c = imadjust(c)\n m = np.subtract(255,g)\n\n cmyk_m = m # send into figure (b)\n m = cv2.blur(m,(10,10)) # updated in 13/04/2016 - 6:15\n cmyk_m_con_med = m # send into figure (d)\n m = imadjust(m)\n y = np.subtract(255,b)\n y = imadjust(y)\n AD = matlab_mat2gray(B)\n AD = cv2.medianBlur(AD,7)\n lab_y_con_med = AD # send into figure (e)\n\n # subtract the M and b\n sub = cv2.subtract(m,AD)\n img_subt = sub # send into figure (f)\n CMY = np.stack((c,m,y), axis=2) \n F = np.stack((r, g, b), axis=2)\n ab = CMY # generate CMY color model\n nrows = (np.shape(f)[0]) # image height \n ncols = (np.shape(f)[1]) \n\n # reshape into one single array\n ab = ab.flatten()\n x = nrows\n y = ncols\n data = sub.flatten() # sub = result of subtraction M and b, put them into one long array\n\n ## step 2 - clustering\n nColors = 3 # Number of clusters (k)\n kmeans = KMeans(n_clusters=nColors, random_state=0)\n kmeans.fit_predict(data.reshape(-1, 1))\n # cluster_idx, cluster_center = kmeans.cluster_centers_\n cluster_idx = kmeans.labels_ # index result of kmeans\n cluster_center = kmeans.cluster_centers_ # position of cluster center\n pixel_labels = np.reshape(cluster_idx, (nrows, ncols));\n pixel_labels = np.uint8(pixel_labels)\n \n # the problem is here, \n tmp = np.sort(cluster_center.flatten())\n idx = np.zeros((len(tmp), 1))\n for i in range(len(tmp)):\n idx[i] = cluster_center.tolist().index(tmp[i])\n\n nuclei_cluster = idx[2] # sort asc, nuclei cluster is always who has higher value\n A = np.zeros((nrows, ncols), dtype=np.uint8)\n\n # print(np.shape(A))\n for row in range(nrows):\n for col in range(ncols):\n # print(\" pixel_labels[row,col] = \", row, col)\n if pixel_labels[row,col] == nuclei_cluster:\n A[row,col] = 255\n else:\n A[row,col] = 0\n\n ## step 3 - post-processing\n img_clustering = A # send into figure (x)\n img_clustering = imadjust(img_clustering)\n\n sed = matlab_strel_disk(7) # disk\n see = matlab_strel_ball(3,3) #circle \n\n A = cv2.dilate(A,sed)\n # erosion\n A = cv2.erode(A, see) \n # remove area < 800px \n A = morphology.area_opening(A, area_threshold=800*3, connectivity=1) # vogado mention he use 800px\n img_morpho = A # send into figure (g)\n\n # debug mode \n if(debug_mode):\n # resize image 
into width 200px \n ratio = ncols/nrows\n t_width = 200\n t_height = math.ceil(t_width * ratio)\n print(\"(a) Original\")\n cv2_imshow(resize_img(image_rgb, t_width))\n print(\"(b) M from CMYK\")\n cv2_imshow(resize_img(cmyk_m, t_width))\n print(\"(c) *b from CIELAB\")\n cv2_imshow(resize_img(lab_y, t_width))\n print(\"(d) M con adj + med(7x7)\")\n cv2_imshow(resize_img(cmyk_m_con_med, t_width))\n print(\"(e) *b con adj + med(7x7)\")\n cv2_imshow(resize_img(lab_y_con_med, t_width))\n print(\"(f) *b - M\")\n cv2_imshow(resize_img(img_subt, t_width))\n print(\"(x) clustering\" )\n cv2_imshow(resize_img(img_clustering, t_width))\n print(\"(g) Morphological Ops.\")\n cv2_imshow(resize_img(img_morpho, t_width))\n print(\"--- %s seconds ---\" % (time.time() - start_time))\n return img_morpho", "_____no_output_____" ], [ "# modified vogado's segmentaion\n# segmentation: fcm\ndef wbc_vogado_modified(f, debug_mode = False):\n image_lab = int(0)\n image_rgb = f # send into figure (a)\n\n # time measurement \n import time\n start_time = time.time()\n\n # pre-processing step, convert rgb into CIELAB (L*a*b)\n image_lab = cv2.cvtColor(image_rgb, cv2.COLOR_BGR2Lab);\n L = image_lab[:,:,0]\n A = image_lab[:,:,1]\n B = image_lab[:,:,2] # the bcomponent\n lab_y = B # send into figure (c)\n\n AD = cv2.add(L,B)\n\n # f (bgr)\n r = f[:,:,2] # red channel (rgb)\n r = imadjust(r)\n g = f[:,:,1]\n g = imadjust(g)\n b = f[:,:,0]\n b = imadjust(b)\n c = np.subtract(255,r)\n c = imadjust(c)\n m = np.subtract(255,g)\n\n cmyk_m = m # send into figure (b)\n\n # add median filter into M component of CMYK\n m = cv2.blur(m,(10,10)) # updated in 13/04/2016 - 6:15\n cmyk_m_con_med = m # send into figure (d)\n m = imadjust(m)\n y = np.subtract(255,b)\n y = imadjust(y)\n AD = matlab_mat2gray(B)\n AD = cv2.medianBlur(AD,7)\n lab_y_con_med = AD # send into figure (e)\n \n # subtract the M and b\n sub = cv2.subtract(m,AD)\n img_subt = sub # send into figure (f)\n CMY = np.stack((c,m,y), axis=2) \n F = np.stack((r, g, b), axis=2)\n\n ab = CMY # generate CMY color model\n nrows = (np.shape(f)[0]) # image height \n ncols = (np.shape(f)[1]) \n\n # reshape into one single array\n ab = ab.flatten()\n x = nrows\n y = ncols\n data = sub.flatten() # sub = result of subtraction M and b, put them into one long array\n\n ## step 2 - clustering\n nColors = 3 # Number of clusters (k)\n\n # fit the fuzzy-c-means\n fcm = FCM(n_clusters=nColors)\n fcm.fit(data.reshape(-1, 1))\n\n # outputs\n cluster_idx = fcm.u.argmax(axis=1)\n cluster_center = fcm.centers\n kmeans = fcm\n\n pixel_labels = np.reshape(cluster_idx, (nrows, ncols));\n pixel_labels = np.uint8(pixel_labels)\n \n # the problem is here, \n tmp = np.sort(cluster_center.flatten())\n idx = np.zeros((len(tmp), 1))\n for i in range(len(tmp)):\n idx[i] = cluster_center.tolist().index(tmp[i])\n nuclei_cluster = idx[2] # sort asc, nuclei cluster is always who has higher value\n A = np.zeros((nrows, ncols), dtype=np.uint8)\n\n # print(np.shape(A))\n for row in range(nrows):\n for col in range(ncols):\n # print(\" pixel_labels[row,col] = \", row, col)\n if pixel_labels[row,col] == nuclei_cluster:\n A[row,col] = 255\n else:\n A[row,col] = 0\n\n ## step 3 - post-processing\n img_clustering = A # send into figure (x)\n img_clustering = imadjust(img_clustering)\n\n # dilation (thing goes weird here)\n sed = matlab_strel_disk(7) # disk\n see = matlab_strel_ball(3,3) #circle \n A = cv2.dilate(A,sed)\n # erosion\n A = cv2.erode(A, see) \n # remove area < 800px \n A = morphology.area_opening(A, 
area_threshold=800*3, connectivity=1) # vogado mention he use 800px\n    img_morpho = A # send into figure (g)\n\n    # debug mode \n    if(debug_mode):\n        # resize image into width 200px \n        ratio = ncols/nrows\n        t_width = 200\n        t_height = math.ceil(t_width * ratio)\n        print(\"(a) Original\")\n        cv2_imshow(resize_img(image_rgb, t_width))\n        print(\"(b) M from CMYK\")\n        cv2_imshow(resize_img(cmyk_m, t_width))\n        print(\"(c) *b from CIELAB\")\n        cv2_imshow(resize_img(lab_y, t_width))\n        print(\"(d) M con adj + med(7x7)\")\n        cv2_imshow(resize_img(cmyk_m_con_med, t_width))\n        print(\"(e) *b con adj + med(7x7)\")\n        cv2_imshow(resize_img(lab_y_con_med, t_width))\n        print(\"(f) *b - M\")\n        cv2_imshow(resize_img(img_subt, t_width))\n        print(\"(x) clustering\")\n        cv2_imshow(resize_img(img_clustering, t_width))\n        print(\"(g) Morphological Ops.\")\n        cv2_imshow(resize_img(img_morpho, t_width))\n\n    print(\"--- %s seconds ---\" % (time.time() - start_time))\n    return img_morpho", "_____no_output_____" ], [ "# SET THIS \n# path = folder containing the image dataset and ground truth\n# gt_path = ground truth folder inside active dir\n# data = data folder inside active dir\npath = \"drive/My Drive/ALL_IDB2/\"\ngt_path = path + \"gt/\"\ntarget_path = path + \"data/\"\n\n# here is the main loop \n# scanFolder(target_path, <limit image tested [\"all\"/int]>, <verbose>)\nfiles_path, in_folder, total_files, file_scanned = scanFolder(target_path, 1, 1)\nfor f in files_path:\n    original_image = cv2.imread(f)\n\n    # load ground truth \n    gt_file_path = gt_path + getfilename(f) + \".png\"\n    gt_image = cv2.imread(gt_file_path)\n\n    # convert gt into binary \n    gray = cv2.cvtColor(gt_image, cv2.COLOR_BGR2GRAY)\n    (thresh, gt_image) = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)\n    \n    # calculate the evaluation metrics (incl. DC)\n    print(\"kmeans:\")\n    kmeans = wbc_vogado(original_image, False)\n    performance_evaluation(gt_image, kmeans)\n\n    print(\"fcm:\")\n    fcm = wbc_vogado_modified(original_image, False) \n    performance_evaluation(gt_image, fcm)\n\n    # show gt_image \n    print(\"original image:\")\n    cv2_imshow(resize_img(original_image,250))\n    print(\"ground truth image:\")\n    cv2_imshow(resize_img(gt_image,250))\n    print(\"KM final result:\")\n    cv2_imshow(resize_img(kmeans,250))\n    print(\"FCM final result:\")\n    cv2_imshow(resize_img(fcm,250))", "drive/My Drive/ALL_IDB2/data/Im130_1.jpg\ndata\nkmeans:\n--- 3.6996915340423584 seconds ---\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a488342b546156d951da7e7c1748200512563e3
8,000
ipynb
Jupyter Notebook
examples/intro_to_time_series_classification.ipynb
ablaom/MLJTime.jl
8e304292812860fced71fbedd87263cc01f1407f
[ "MIT" ]
14
2021-01-06T17:49:08.000Z
2022-02-11T21:49:56.000Z
examples/intro_to_time_series_classification.ipynb
ablaom/MLJTime.jl
8e304292812860fced71fbedd87263cc01f1407f
[ "MIT" ]
15
2020-05-12T03:55:58.000Z
2020-11-30T00:16:37.000Z
examples/intro_to_time_series_classification.ipynb
ablaom/MLJTime.jl
8e304292812860fced71fbedd87263cc01f1407f
[ "MIT" ]
4
2020-06-15T18:44:20.000Z
2020-11-29T01:39:39.000Z
22.59887
185
0.50075
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a48aec084b2475c0fc8560b3911e34b719919f3
204,683
ipynb
Jupyter Notebook
notebook/2.0-final results.ipynb
tamarakatic/radiology_project
5e46774ce0b73b2899eda0336163b5fa534abce6
[ "MIT" ]
null
null
null
notebook/2.0-final results.ipynb
tamarakatic/radiology_project
5e46774ce0b73b2899eda0336163b5fa534abce6
[ "MIT" ]
null
null
null
notebook/2.0-final results.ipynb
tamarakatic/radiology_project
5e46774ce0b73b2899eda0336163b5fa534abce6
[ "MIT" ]
null
null
null
59.294032
2,914
0.372894
[ [ [ "import pandas as pd\n\npd.set_option('max_colwidth', 400)", "_____no_output_____" ], [ "df_glove_parag = pd.read_csv(\"../results/final_results_08_03/test_results_glove_preprocessed_reports_08_03.csv\")\ndf_glove_no_back_parag = pd.read_csv(\"../results/final_results_08_03/test_results_glove_no_back_preprocessed_reports_08_03.csv\")\ndf_ft_parag = pd.read_csv(\"../results/final_results_08_03/test_results_fastText_preprocessed_reports_08_03.csv\")\ndf_ft_no_back_parag = pd.read_csv(\"../results/final_results_08_03/test_results_fastText_no_back_preprocessed_reports_08_03.csv\")", "_____no_output_____" ], [ "df_glove_seq = pd.read_csv(\"../results/final_results_08_03/test_results_glove_seq_filltered_reports_08_03.csv\")\ndf_glove_no_back_seq = pd.read_csv(\"../results/final_results_08_03/test_results_glove_no_back_seq_filltered_reports_08_03.csv\")\ndf_ft_seq = pd.read_csv(\"../results/final_results_08_03/test_results_fastText_seq_filltered_reports_08_03.csv\")\ndf_ft_no_back_seq = pd.read_csv(\"../results/final_results_08_03/test_results_fastText_no_back_seq_filltered_reports_08_03.csv\")", "_____no_output_____" ], [ "test_files = ['9.txt', '21.txt', '25.txt', '58.txt', '63.txt', '76.txt', '104.txt', '105.txt', '127.txt', '152.txt', '156.txt', '160.txt', '165.txt', '188.txt', '199.txt', '201.txt', '210.txt', '251.txt', '273.txt', '295.txt', '322.txt', '355.txt', '380.txt', '404.txt', '417.txt', '422.txt', '426.txt', '463.txt', '493.txt', '498.txt', '504.txt', '511.txt', '516.txt', '580.txt', '598.txt', '616.txt', '621.txt', '653.txt', '656.txt', '659.txt', '675.txt', '680.txt', '689.txt', '703.txt', '709.txt', '777.txt', '782.txt', '813.txt', '817.txt', '821.txt', '837.txt', '843.txt', '860.txt', '880.txt', '906.txt', '915.txt', '922.txt', '927.txt', '929.txt', '938.txt', '949.txt', '984.txt', '1001.txt', '1002.txt', '1016.txt', '1035.txt', '1044.txt', '1076.txt', '1088.txt', '1094.txt', '1100.txt', '1121.txt', '1133.txt', '1171.txt', '1184.txt', '1247.txt', '1248.txt', '1261.txt', '1284.txt', '1288.txt', '1290.txt', '1291.txt', '1302.txt', '1324.txt', '1348.txt', '1353.txt', '1376.txt', '1383.txt', '1386.txt', '1405.txt', '1407.txt', '1411.txt', '1430.txt', '1437.txt', '1441.txt', '1449.txt', '1453.txt', '1459.txt', '1475.txt', '1478.txt', '1479.txt', '1496.txt', '1501.txt', '1513.txt', '1527.txt', '1543.txt', '1554.txt', '1570.txt', '1576.txt', '1577.txt', '1600.txt', '1648.txt', '1664.txt', '1676.txt', '1702.txt', '1709.txt', '1722.txt', '1725.txt', '1727.txt', '1743.txt', '1767.txt', '1808.txt', '1818.txt', '1822.txt', '1845.txt', '1849.txt', '1859.txt', '1880.txt', '1886.txt', '1888.txt', '1890.txt', '1898.txt', '1904.txt', '1936.txt', '1945.txt', '2034.txt', '2035.txt', '2037.txt', '2044.txt', '2073.txt', '2084.txt', '2105.txt', '2133.txt', '2138.txt', '2142.txt', '2180.txt', '2187.txt', '2197.txt', '2209.txt', '2213.txt', '2226.txt', '2235.txt', '2270.txt', '2296.txt', '2341.txt', '2350.txt', '2351.txt', '2352.txt', '2379.txt', '2400.txt', '2409.txt', '2411.txt', '2421.txt', '2439.txt', '2455.txt', '2479.txt', '2506.txt', '2521.txt', '2543.txt', '2565.txt', '2582.txt', '2588.txt', '2589.txt', '2602.txt', '2618.txt', '2647.txt', '2653.txt', '2695.txt', '2702.txt', '2705.txt', '2711.txt', '2737.txt', '2763.txt', '2793.txt', '2801.txt', '2802.txt', '2857.txt', '2860.txt', '2882.txt', '2883.txt', '2931.txt', '2934.txt', '2943.txt', '2944.txt', '2954.txt', '2957.txt', '2999.txt', '3000.txt', '3001.txt', '3017.txt', '3035.txt', '3062.txt', '3068.txt', '3072.txt', '3074.txt', 
'3096.txt', '3107.txt', '3110.txt', '3111.txt', '3116.txt', '3124.txt', '3137.txt', '3140.txt', '3172.txt', '3193.txt', '3213.txt', '3219.txt', '3242.txt', '3273.txt', '3331.txt', '3348.txt', '3349.txt', '3353.txt', '3372.txt', '3379.txt', '3423.txt', '3428.txt', '3451.txt', '3457.txt', '3482.txt', '3492.txt', '3493.txt', '3514.txt', '3516.txt', '3533.txt', '3550.txt', '3589.txt', '3616.txt', '3702.txt', '3714.txt', '3741.txt', '3748.txt', '3797.txt', '3798.txt', '3822.txt', '3949.txt', '3991.txt']", "_____no_output_____" ], [ "df_glove_parag[\"file\"] = test_files\ndf_glove_no_back_parag[\"file\"] = test_files\ndf_ft_parag[\"file\"] = test_files\ndf_ft_no_back_parag[\"file\"] = test_files", "_____no_output_____" ], [ "df_glove_seq[\"file\"] = test_files\ndf_glove_no_back_seq[\"file\"] = test_files\ndf_ft_seq[\"file\"] = test_files\ndf_ft_no_back_seq[\"file\"] = test_files", "_____no_output_____" ], [ "df_glove_parag.head(20)", "_____no_output_____" ], [ "df_glove_no_back_parag.head(20)", "_____no_output_____" ], [ "df_ft_parag.head(20)", "_____no_output_____" ], [ "df_ft_no_back_parag.head()", "_____no_output_____" ], [ "df_glove_seq.head(20)", "_____no_output_____" ], [ "df_glove_no_back_seq.head(20)", "_____no_output_____" ], [ "df_ft_seq.head(20)", "_____no_output_____" ], [ "df_ft_no_back_seq[-20:]", "_____no_output_____" ], [ "print(sum(df_ft_no_back_seq.cumulative_bleu_4) / len(df_ft_no_back_seq) * 100)\nprint(sum(df_ft_seq.cumulative_bleu_4) / len(df_ft_no_back_seq) * 100)\nprint(sum(df_glove_no_back_seq.cumulative_bleu_4) / len(df_ft_no_back_seq) * 100)\nprint(sum(df_glove_seq.cumulative_bleu_4) / len(df_ft_no_back_seq) * 100)", "29.513765182186226\n26.28870445344128\n26.901862348178117\n28.35356275303641\n" ], [ "print(sum(df_ft_no_back_parag.cumulative_bleu_4) / len(df_ft_no_back_parag) * 100)\nprint(sum(df_ft_parag.cumulative_bleu_4) / len(df_ft_no_back_parag) * 100)\nprint(sum(df_glove_no_back_parag.cumulative_bleu_4) / len(df_ft_no_back_parag) * 100)\nprint(sum(df_glove_parag.cumulative_bleu_4) / len(df_ft_no_back_parag) * 100)", "41.89477732793522\n38.89753036437247\n38.28275303643722\n37.86421052631576\n" ], [ "print(sum(df_ft_no_back_seq.rougeL) / len(df_ft_no_back_seq) * 100)\nprint(sum(df_ft_seq.rougeL) / len(df_ft_no_back_seq) * 100)\nprint(sum(df_glove_no_back_seq.rougeL) / len(df_ft_no_back_seq) * 100)\nprint(sum(df_glove_seq.rougeL) / len(df_ft_no_back_seq) * 100)", "65.14611336032391\n63.84659919028345\n63.494372469635664\n63.88364372469635\n" ], [ "print(sum(df_ft_no_back_parag.rougeL) / len(df_ft_no_back_parag) * 100)\nprint(sum(df_ft_parag.rougeL) / len(df_ft_no_back_parag) * 100)\nprint(sum(df_glove_no_back_parag.rougeL) / len(df_ft_no_back_parag) * 100)\nprint(sum(df_glove_parag.rougeL) / len(df_ft_no_back_parag) * 100)", "73.59153846153849\n73.37842105263164\n72.67842105263153\n72.98101214574896\n" ], [ "new_df = df_ft_no_back_parag[[\"ground truth\", \"prediction\"]]\nnew_df.columns = [[\"ground truth\", \"parag_prediction\"]]", "_____no_output_____" ], [ "# pd.concat([df_human, df_fastText_rule_based_v2], axis=1)\nnew_df_1 = df_ft_no_back_seq[[\"ground truth\", \"prediction\"]]\nnew_df_1.columns = [[\"ground truth1\", \"sent_prediction\"]]", "_____no_output_____" ], [ "df_result = pd.concat([new_df, new_df_1], axis=1).drop([\"ground truth1\"], axis=1)", "/home/tamara/.pyenv/versions/3.8.3/envs/env/lib/python3.8/site-packages/pandas/core/generic.py:3936: PerformanceWarning: dropping on a non-lexsorted multi-index without a level parameter may impact 
performance.\n obj = obj._drop_axis(labels, axis, level=level, errors=errors)\n" ], [ "df_result[20:30]", "_____no_output_____" ], [ "df_glove_parag_old = pd.read_csv(\"../results/final_results/test_results_update_glove_paragraph.csv\")\ndf_glove_no_back_parag_old = pd.read_csv(\"../results/final_results/test_results_update_glove_paragraph.csv\")\ndf_ft_parag_old = pd.read_csv(\"../results/final_results/test_results_update_fastText_paragraph.csv\")\ndf_ft_no_back_parag_old = pd.read_csv(\"../results/final_results/test_results_update_fastText_no_back_paragraph.csv\")", "_____no_output_____" ], [ "df_glove_seq_old = pd.read_csv(\"../results/final_results/test_results_update_glove_sequence.csv\")\ndf_glove_no_back_seq_old = pd.read_csv(\"../results/final_results/test_results_update_glove_no_back_sequence.csv\")\ndf_ft_seq_old = pd.read_csv(\"../results/final_results/test_results_update_fastText_sequence.csv\")\ndf_ft_no_back_seq_old = pd.read_csv(\"../results/final_results/test_results_update_fastText_no_back_sequence.csv\")", "_____no_output_____" ], [ "print(sum(df_ft_no_back_parag_old.cumulative_bleu_4) / len(df_ft_no_back_parag_old) * 100)\nprint(sum(df_ft_parag_old.cumulative_bleu_4) / len(df_ft_no_back_parag_old) * 100)\nprint(sum(df_glove_no_back_parag_old.cumulative_bleu_4) / len(df_ft_no_back_parag_old) * 100)\nprint(sum(df_glove_parag_old.cumulative_bleu_4) / len(df_ft_no_back_parag_old) * 100)", "42.54708502024289\n41.48809716599192\n41.63356275303642\n41.63356275303642\n" ], [ "print(sum(df_ft_no_back_seq_old.cumulative_bleu_4) / len(df_ft_no_back_seq_old) * 100)\nprint(sum(df_ft_seq_old.cumulative_bleu_4) / len(df_ft_no_back_seq_old) * 100)\nprint(sum(df_glove_no_back_seq_old.cumulative_bleu_4) / len(df_ft_no_back_seq_old) * 100)\nprint(sum(df_glove_seq_old.cumulative_bleu_4) / len(df_ft_no_back_seq_old) * 100)", "28.97761133603236\n25.3242105263158\n25.41206477732793\n23.07008097165993\n" ], [ "print(sum(df_ft_no_back_parag_old.rougeL) / len(df_ft_no_back_parag_old) * 100)\nprint(sum(df_ft_parag_old.rougeL) / len(df_ft_no_back_parag_old) * 100)\nprint(sum(df_glove_no_back_parag_old.rougeL) / len(df_ft_no_back_parag_old) * 100)\nprint(sum(df_glove_parag_old.rougeL) / len(df_ft_no_back_parag_old) * 100)", "77.53720647773284\n77.07510121457494\n77.82380566801622\n77.82380566801622\n" ], [ "print(sum(df_ft_no_back_seq_old.rougeL) / len(df_ft_no_back_seq_old) * 100)\nprint(sum(df_ft_seq_old.rougeL) / len(df_ft_no_back_seq_old) * 100)\nprint(sum(df_glove_no_back_seq_old.rougeL) / len(df_ft_no_back_seq_old) * 100)\nprint(sum(df_glove_seq_old.rougeL) / len(df_ft_no_back_seq_old) * 100)", "64.99643724696357\n62.42761133603233\n63.47465587044534\n61.9739271255061\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a48b0f65387d4438534bf2603e424559f32e185
3,037
ipynb
Jupyter Notebook
assignments/assignment02/ProjectEuler2.ipynb
LimeeZ/phys292-2015-work
d31e1e0f5dc7fa37dcfd77f59f76431d1478ea06
[ "MIT" ]
null
null
null
assignments/assignment02/ProjectEuler2.ipynb
LimeeZ/phys292-2015-work
d31e1e0f5dc7fa37dcfd77f59f76431d1478ea06
[ "MIT" ]
null
null
null
assignments/assignment02/ProjectEuler2.ipynb
LimeeZ/phys292-2015-work
d31e1e0f5dc7fa37dcfd77f59f76431d1478ea06
[ "MIT" ]
null
null
null
22.664179
211
0.525189
[ [ [ "# Project Euler: Problem 2", "_____no_output_____" ], [ "Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 0 and 1, the first 12 terms will be:\n\n0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...\n\nBy considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.", "_____no_output_____" ] ], [ [ "#defining the variable fibonacci_seq\n\nfibonacci_seq = [0,1]\ni = 1\n\n#While loop!!! \n\nwhile fibonacci_seq[i] < 4000000:\n fibonacci_seq.append(fibonacci_seq[i]+fibonacci_seq[i-1])\n i += 1\n\nprint(fibonacci_seq)\n#This prints out a number larger than 4000000... I have to get rid of that \n", "[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887]\n" ], [ "#Getting the even numbers!!! \n\neven = []\nfor x in fibonacci_seq:\n if x % 2 == 0:\n even.append(x)\n\nprint (even)\n\n#Sum of the even numbers!\n\nSumFibo = sum(even)\nprint(SumFibo)", "[0, 2, 8, 34, 144, 610, 2584, 10946, 46368, 196418, 832040, 3524578]\n4613732\n" ], [ "# This cell will be used for grading, leave it at the end of the notebook.", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
4a48b15652604435454f17be65c2b6c65b706452
22,386
ipynb
Jupyter Notebook
docs/tutorials/modeling.ipynb
Jaleleddine/gammapy
de9195df40fa5bbf8840cda4e7cd5e8cc5eaadbb
[ "BSD-3-Clause" ]
null
null
null
docs/tutorials/modeling.ipynb
Jaleleddine/gammapy
de9195df40fa5bbf8840cda4e7cd5e8cc5eaadbb
[ "BSD-3-Clause" ]
null
null
null
docs/tutorials/modeling.ipynb
Jaleleddine/gammapy
de9195df40fa5bbf8840cda4e7cd5e8cc5eaadbb
[ "BSD-3-Clause" ]
null
null
null
32.632653
508
0.585366
[ [ [ "# Modeling and fitting\n\n\n## Prerequisites\n\n- Knowledge of spectral analysis to produce 1D On-Off datasets, [see the following tutorial](spectrum_analysis.ipynb)\n- Reading of pre-computed datasets [see the MWL tutorial](analysis_mwl.ipynb)\n- General knowledge on statistics and optimization methods\n\n## Proposed approach\n\nThis is a hands-on tutorial to `~gammapy.modeling`, showing how the model, dataset and fit classes work together. As an example we are going to work with HESS data of the Crab Nebula and show in particular how to :\n- perform a spectral analysis\n- use different fitting backends\n- acces covariance matrix informations and parameter errors\n- compute likelihood profile\n- compute confidence contours\n\nSee also: [Models gallery tutorial](models.ipynb) and `docs/modeling/index.rst`.\n\n\n## The setup", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom astropy import units as u\nimport matplotlib.pyplot as plt\nimport scipy.stats as st\nfrom gammapy.modeling import Fit\nfrom gammapy.datasets import Datasets, SpectrumDatasetOnOff\nfrom gammapy.modeling.models import LogParabolaSpectralModel, SkyModel\nfrom gammapy.visualization.utils import plot_contour_line\nfrom itertools import combinations", "_____no_output_____" ] ], [ [ "## Model and dataset\n\nFirst we define the source model, here we need only a spectral model for which we choose a log-parabola", "_____no_output_____" ] ], [ [ "crab_spectrum = LogParabolaSpectralModel(\n amplitude=1e-11 / u.cm ** 2 / u.s / u.TeV,\n reference=1 * u.TeV,\n alpha=2.3,\n beta=0.2,\n)\n\ncrab_spectrum.alpha.max = 3\ncrab_spectrum.alpha.min = 1\ncrab_model = SkyModel(spectral_model=crab_spectrum, name=\"crab\")", "_____no_output_____" ] ], [ [ "The data and background are read from pre-computed ON/OFF datasets of HESS observations, for simplicity we stack them together.\nThen we set the model and fit range to the resulting dataset.", "_____no_output_____" ] ], [ [ "datasets = []\nfor obs_id in [23523, 23526]:\n dataset = SpectrumDatasetOnOff.read(\n f\"$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs{obs_id}.fits\"\n )\n datasets.append(dataset)\n\ndataset_hess = Datasets(datasets).stack_reduce(name=\"HESS\")\n\n# Set model and fit range\ndataset_hess.models = crab_model\ne_min = 0.66 * u.TeV\ne_max = 30 * u.TeV\ndataset_hess.mask_fit = dataset_hess.counts.geom.energy_mask(e_min, e_max)", "_____no_output_____" ] ], [ [ "## Fitting options\n\n\n\nFirst let's create a `Fit` instance:", "_____no_output_____" ] ], [ [ "fit = Fit([dataset_hess], store_trace=True)", "_____no_output_____" ] ], [ [ "By default the fit is performed using MINUIT, you can select alternative optimizers and set their option using the `optimize_opts` argument of the `Fit.run()` method. In addition we have specified to store the trace of parameter values of the fit.\n\nNote that, for now, covaraince matrix and errors are computed only for the fitting with MINUIT. 
However, depending on the problem, other optimizers can perform better, so sometimes it can be useful to run a pre-fit with alternative optimization methods.\n\nFor the \"scipy\" backend the available options are described in detail here: \nhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html", "_____no_output_____" ] ], [ [ "%%time\nscipy_opts = {\"method\": \"L-BFGS-B\", \"options\": {\"ftol\": 1e-4, \"gtol\": 1e-05}}\nresult_scipy = fit.run(backend=\"scipy\", optimize_opts=scipy_opts)", "_____no_output_____" ] ], [ [ "For the \"sherpa\" backend you can choose the optimization algorithm from method = {\"simplex\", \"levmar\", \"moncar\", \"gridsearch\"}. \nThose methods are described and compared in detail on http://cxc.cfa.harvard.edu/sherpa/methods/index.html. \nThe available options of the optimization methods are described on the following page https://cxc.cfa.harvard.edu/sherpa/methods/opt_methods.html", "_____no_output_____" ] ], [ [ "%%time\nsherpa_opts = {\"method\": \"simplex\", \"ftol\": 1e-3, \"maxfev\": int(1e4)}\nresults_simplex = fit.run(backend=\"sherpa\", optimize_opts=sherpa_opts)", "_____no_output_____" ] ], [ [ "For the \"minuit\" backend see https://iminuit.readthedocs.io/en/latest/reference.html for a detailed description of the available options. If there is an entry ‘migrad_opts’, those options will be passed to [iminuit.Minuit.migrad](https://iminuit.readthedocs.io/en/latest/reference.html#iminuit.Minuit.migrad). Additionally you can set the fit tolerance using the [tol](https://iminuit.readthedocs.io/en/latest/reference.html#iminuit.Minuit.tol\n) option. The minimization will stop when the estimated distance to the minimum is less than 0.001*tol (by default tol=0.1). The [strategy](https://iminuit.readthedocs.io/en/latest/reference.html#iminuit.Minuit.strategy) option changes the speed and accuracy of the optimizer: 0 fast, 1 default, 2 slow but accurate. If you want more reliable error estimates, you should run the final fit with strategy 2.\n", "_____no_output_____" ] ], [ [ "%%time\nminuit_opts = {\"tol\": 0.001, \"strategy\": 1}\nresult_minuit = fit.run(backend=\"minuit\", optimize_opts=minuit_opts)", "_____no_output_____" ] ], [ [ "## Fit quality assessment\n\nThere are various ways to check the convergence and quality of a fit. Among them:\n\n- Refer to the automatically-generated results dictionary", "_____no_output_____" ] ], [ [ "print(result_scipy)", "_____no_output_____" ], [ "print(results_simplex)", "_____no_output_____" ], [ "print(result_minuit)", "_____no_output_____" ] ], [ [ "- Check the trace of the fit, e.g. in case the fit did not converge properly", "_____no_output_____" ] ], [ [ "result_minuit.trace", "_____no_output_____" ] ], [ [ "- Check that the fitted values and errors for all parameters are reasonable, and no fitted parameter value is \"too close\" - or even outside - its allowed min-max range", "_____no_output_____" ] ], [ [ "result_minuit.parameters.to_table()", "_____no_output_____" ] ], [ [ "- Plot fit statistic profiles for all fitted parameters, using `~gammapy.modeling.Fit.stat_profile()`. 
For a good fit and error estimate each profile should be parabolic", "_____no_output_____" ] ], [ [ "total_stat = result_minuit.total_stat\n\nfor par in dataset_hess.models.parameters:\n    if par.frozen is False:\n        profile = fit.stat_profile(parameter=par)\n        plt.plot(\n            profile[f\"{par.name}_scan\"], profile[\"stat_scan\"] - total_stat\n        )\n        plt.xlabel(f\"{par.unit}\")\n        plt.ylabel(\"Delta TS\")\n        plt.title(f\"{par.name}: {par.value} +- {par.error}\")\n        plt.show()\n        plt.close()", "_____no_output_____" ] ], [ [ "- Inspect model residuals. Those can always be accessed using `~Dataset.residuals()`, which will return an array if the fitted `Dataset` is a `SpectrumDataset` and a full cube in the case of a `MapDataset`. For more details, we refer here to the dedicated fitting tutorials: [analysis_3d.ipynb](analysis_3d.ipynb) (for `MapDataset` fitting) and [spectrum_analysis.ipynb](spectrum_analysis.ipynb) (for `SpectrumDataset` fitting).", "_____no_output_____" ], [ "## Covariance and parameter errors\n\nAfter the fit the covariance matrix is attached to the model. You can get the error on a specific parameter by accessing the `.error` attribute:", "_____no_output_____" ] ], [ [ "crab_model.spectral_model.alpha.error", "_____no_output_____" ] ], [ [ "As an example, this step is needed to produce a butterfly plot showing the envelope of the model, taking into account the parameter uncertainties.", "_____no_output_____" ] ], [ [ "energy_range = [1, 10] * u.TeV\ncrab_spectrum.plot(energy_range=energy_range, energy_power=2)\nax = crab_spectrum.plot_error(energy_range=energy_range, energy_power=2)", "_____no_output_____" ] ], [ [ "## Confidence contours\n\n\nIn most studies, one wishes to estimate parameter distributions using observed sample data.\nA 1-dimensional confidence interval gives an estimated range of values which is likely to include an unknown parameter.\nA confidence contour is a 2-dimensional generalization of a confidence interval, often represented as an ellipsoid around the best-fit value.\n\nGammapy offers two ways of computing confidence contours, in the dedicated methods `Fit.minos_contour()` and `Fit.stat_surface()`. In the following sections we will describe them.", "_____no_output_____" ], [ "An important point to keep in mind is: *what does an $N\\sigma$ confidence contour really mean?* The answer is that it represents the points of the parameter space for which the model likelihood is $N\\sigma$ above the minimum. But one always has to keep in mind that **1 standard deviation in two dimensions has a smaller coverage probability than 68%**, and similarly for all other levels. 
In particular, in 2 dimensions the probability enclosed by the $N\\sigma$ confidence contour is $P(N)=1-e^{-N^2/2}$, e.g. $P(1) \\approx 39.3\\%$ and $P(2) \\approx 86.5\\%$.", "_____no_output_____" ], [ "### Computing contours using `Fit.minos_contour()` ", "_____no_output_____" ], [ "After the fit, MINUIT offers the possibility to compute the confidence contours.\ngammapy provides an interface to this functionality through the `Fit` object using the `minos_contour` method.\nHere we define a function to automate the contour production for the different parameters and confidence levels (expressed in terms of sigma):", "_____no_output_____" ] ], [ [ "def make_contours(fit, result, npoints, sigmas):\n    cts_sigma = []\n    for sigma in sigmas:\n        contours = dict()\n        for par_1, par_2 in combinations([\"alpha\", \"beta\", \"amplitude\"], r=2):\n            contour = fit.minos_contour(\n                result.parameters[par_1],\n                result.parameters[par_2],\n                numpoints=npoints,\n                sigma=sigma,\n            )\n            contours[f\"contour_{par_1}_{par_2}\"] = {\n                par_1: contour[par_1].tolist(),\n                par_2: contour[par_2].tolist(),\n            }\n        cts_sigma.append(contours)\n    return cts_sigma", "_____no_output_____" ] ], [ [ "Now we can compute a few contours.", "_____no_output_____" ] ], [ [ "%%time\nsigma = [1, 2]\ncts_sigma = make_contours(fit, result_minuit, 10, sigma)", "_____no_output_____" ] ], [ [ "Then we prepare some aliases and annotations in order to make the plotting nicer.", "_____no_output_____" ] ], [ [ "pars = {\n    \"phi\": r\"$\\phi_0 \\,/\\,(10^{-11}\\,{\\rm TeV}^{-1} \\, {\\rm cm}^{-2} {\\rm s}^{-1})$\",\n    \"alpha\": r\"$\\alpha$\",\n    \"beta\": r\"$\\beta$\",\n}\n\npanels = [\n    {\n        \"x\": \"alpha\",\n        \"y\": \"phi\",\n        \"cx\": (lambda ct: ct[\"contour_alpha_amplitude\"][\"alpha\"]),\n        \"cy\": (\n            lambda ct: np.array(1e11)\n            * ct[\"contour_alpha_amplitude\"][\"amplitude\"]\n        ),\n    },\n    {\n        \"x\": \"beta\",\n        \"y\": \"phi\",\n        \"cx\": (lambda ct: ct[\"contour_beta_amplitude\"][\"beta\"]),\n        \"cy\": (\n            lambda ct: np.array(1e11)\n            * ct[\"contour_beta_amplitude\"][\"amplitude\"]\n        ),\n    },\n    {\n        \"x\": \"alpha\",\n        \"y\": \"beta\",\n        \"cx\": (lambda ct: ct[\"contour_alpha_beta\"][\"alpha\"]),\n        \"cy\": (lambda ct: ct[\"contour_alpha_beta\"][\"beta\"]),\n    },\n]", "_____no_output_____" ] ], [ [ "Finally we produce the confidence contour figures.", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(1, 3, figsize=(16, 5))\ncolors = [\"m\", \"b\", \"c\"]\nfor p, ax in zip(panels, axes):\n    xlabel = pars[p[\"x\"]]\n    ylabel = pars[p[\"y\"]]\n    for ks in range(len(cts_sigma)):\n        plot_contour_line(\n            ax,\n            p[\"cx\"](cts_sigma[ks]),\n            p[\"cy\"](cts_sigma[ks]),\n            lw=2.5,\n            color=colors[ks],\n            label=f\"{sigma[ks]}\" + r\"$\\sigma$\",\n        )\n    ax.set_xlabel(xlabel)\n    ax.set_ylabel(ylabel)\nplt.legend()\nplt.tight_layout()", "_____no_output_____" ] ], [ [ "### Computing contours using `Fit.stat_surface()`", "_____no_output_____" ], [ "This alternative method for the computation of confidence contours, although more time consuming than `Fit.minos_contour()`, is expected to be more stable. It consists of a generalization of `Fit.stat_profile()` to a 2-dimensional parameter space. The algorithm is very simple:\n- First, passing two arrays of parameter values, a 2-dimensional discrete parameter space is defined;\n- For each node of the parameter space, the two parameters of interest are frozen. 
This way, a likelihood value ($-2\\mathrm{ln}\\,\\mathcal{L}$, actually) is computed, by either freezing (default) or fitting all nuisance parameters;\n- Finally, a 2-dimensional surface of $-2\\mathrm{ln}(\\mathcal{L})$ values is returned.\nUsing that surface, one can easily compute a surface of $TS = -2\\Delta\\mathrm{ln}(\\mathcal{L})$ and derive confidence contours.\n\nLet's see it step by step.", "_____no_output_____" ], [ "First of all, we can notice that this method is \"backend-agnostic\", meaning that it can be run with MINUIT, sherpa or scipy as fitting tools. Here we will stick with MINUIT, which is the default choice:", "_____no_output_____" ] ], [ [ "optimize_opts = {\"backend\": \"minuit\", \"print_level\": 0}", "_____no_output_____" ] ], [ [ "As an example, we can compute the confidence contour for the `alpha` and `beta` parameters of the `dataset_hess`. Here we define the parameter space:", "_____no_output_____" ] ], [ [ "result = result_minuit\npar_1 = result.parameters[\"alpha\"]\npar_2 = result.parameters[\"beta\"]\n\nx = par_1\ny = par_2\nx_values = np.linspace(1.55, 2.7, 20)\ny_values = np.linspace(-0.05, 0.55, 20)", "_____no_output_____" ] ], [ [ "Then we run the algorithm, by choosing `reoptimize=False` for the sake of time saving. In real life applications, we strongly recommend using `reoptimize=True`, so that all free nuisance parameters will be fitted at each grid node. This is the correct way, statistically speaking, of computing confidence contours, but it is expected to be time consuming.", "_____no_output_____" ] ], [ [ "stat_surface = fit.stat_surface(\n    x, y, x_values, y_values, reoptimize=False, **optimize_opts\n)", "_____no_output_____" ] ], [ [ "In order to easily inspect the results, we can convert the $-2\\mathrm{ln}(\\mathcal{L})$ surface to a surface of statistical significance (in units of Gaussian standard deviations from the surface minimum):", "_____no_output_____" ] ], [ [ "# Compute TS\nTS = stat_surface[\"stat_scan\"] - result.total_stat", "_____no_output_____" ], [ "# Compute the corresponding statistical significance surface\ngaussian_sigmas = np.sqrt(TS.T)", "_____no_output_____" ] ], [ [ "Notice that, as explained before, the $1\\sigma$ contour obtained this way will not contain 68% of the probability, but rather about $1-e^{-1/2} \\approx 39\\%$ of it. ", "_____no_output_____" ] ], [ [ "# Compute the corresponding statistical significance surface\n# p_value = 1 - st.chi2(df=1).cdf(TS)\n# gaussian_sigmas = st.norm.isf(p_value / 2).T", "_____no_output_____" ] ], [ [ "Finally, we can plot the surface values together with contours:", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize=(8, 6))\n\n# We choose to plot 1 and 2 sigma confidence contours\nlevels = [1, 2]\n\ncontours = plt.contour(gaussian_sigmas, levels=levels, colors=\"white\")\nplt.clabel(contours, fmt=\"%.0f$\\,\\sigma$\", inline=3, fontsize=15)\n\nim = plt.imshow(\n    gaussian_sigmas,\n    extent=[0, len(x_values) - 1, 0, len(y_values) - 1],\n    origin=\"lower\",\n)\nfig.colorbar(im)\n\nplt.xticks(range(len(x_values)), np.around(x_values, decimals=2), rotation=45)\nplt.yticks(range(len(y_values)), np.around(y_values, decimals=2));", "_____no_output_____" ] ], [ [ "Note that, if computed with `reoptimize=True`, this plot would be completely consistent with the third panel of the plot produced with `Fit.minos_contour` (try!).", "_____no_output_____" ], [ "Finally, it is always worth remembering that confidence contours are approximations. 
In particular, when the parameter range boundaries are close to the contour lines, it is expected that the statistical meaning of the contours is not well defined. That's why we advise to always choose a parameter space that can contain the contours you're interested in.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
4a48d386fbfe1783cb9417531be7a86dd7688eb5
16,754
ipynb
Jupyter Notebook
machineLearning/SimpleNeuralNet/predict_wages_employee.ipynb
WebClub-NITK/Hacktoberfest-2k19
69fafb354f0da58220a7ba68696b4d7fde0a3d5c
[ "MIT" ]
28
2019-10-01T09:13:50.000Z
2021-04-18T18:15:34.000Z
machineLearning/SimpleNeuralNet/predict_wages_employee.ipynb
arpita221b/Hacktoberfest-2k19-1
6f682ea2226a8ce6f5a913da9ecdafff7a9fa5bd
[ "MIT" ]
236
2019-09-30T16:06:09.000Z
2022-02-26T18:37:03.000Z
machineLearning/SimpleNeuralNet/predict_wages_employee.ipynb
arpita221b/Hacktoberfest-2k19-1
6f682ea2226a8ce6f5a913da9ecdafff7a9fa5bd
[ "MIT" ]
184
2019-09-30T16:08:04.000Z
2022-03-09T05:00:29.000Z
30.797794
105
0.366659
[ [ [ "import pandas as pd\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.callbacks import EarlyStopping\nfrom keras.utils import to_categorical", "Using TensorFlow backend.\n" ], [ "#read in training data\ntrain_df = pd.read_csv('./data/hourly_wages_data.csv')\n\n#view data structure\ntrain_df.head()", "_____no_output_____" ], [ "#create a dataframe with all training data except the target column\ntrain_X = train_df.drop(columns=['wage_per_hour'])\n\n#check that the target variable has been removed\ntrain_X.head()\n", "_____no_output_____" ], [ "#create a dataframe with only the target column\ntrain_y = train_df[['wage_per_hour']]\n\n#view dataframe\ntrain_y.head()", "_____no_output_____" ], [ "#create model\nmodel = Sequential()\n\n#get number of columns in training data\nn_cols = train_X.shape[1]\n\n#add model layers\nmodel.add(Dense(10, activation='relu', input_shape=(n_cols,)))\nmodel.add(Dense(10, activation='relu'))\nmodel.add(Dense(1))\n\n#compile model using mse as a measure of model performance\nmodel.compile(optimizer='adam', loss='mean_squared_error')\n\n#set early stopping monitor so the model stops training when it won't improve anymore\nearly_stopping_monitor = EarlyStopping(patience=3)", "_____no_output_____" ], [ "#train model\nmodel.fit(train_X, train_y, validation_split=0.2, epochs=30, callbacks=[early_stopping_monitor])", "Train on 427 samples, validate on 107 samples\nEpoch 1/30\n427/427 [==============================] - 3s 7ms/step - loss: 200.7186 - val_loss: 243.7071\nEpoch 2/30\n427/427 [==============================] - 0s 89us/step - loss: 130.2738 - val_loss: 172.7188\nEpoch 3/30\n427/427 [==============================] - 0s 97us/step - loss: 86.9109 - val_loss: 123.9527\nEpoch 4/30\n427/427 [==============================] - 0s 97us/step - loss: 58.5707 - val_loss: 91.3879\nEpoch 5/30\n427/427 [==============================] - 0s 98us/step - loss: 41.0577 - val_loss: 68.7715\nEpoch 6/30\n427/427 [==============================] - 0s 97us/step - loss: 30.9307 - val_loss: 53.7192\nEpoch 7/30\n427/427 [==============================] - 0s 98us/step - loss: 25.0882 - val_loss: 44.5493\nEpoch 8/30\n427/427 [==============================] - 0s 97us/step - loss: 22.1689 - val_loss: 38.9246\nEpoch 9/30\n427/427 [==============================] - 0s 98us/step - loss: 20.8058 - val_loss: 35.9207\nEpoch 10/30\n427/427 [==============================] - 0s 99us/step - loss: 20.3621 - val_loss: 34.2123\nEpoch 11/30\n427/427 [==============================] - 0s 101us/step - loss: 20.1909 - val_loss: 33.4018\nEpoch 12/30\n427/427 [==============================] - 0s 94us/step - loss: 20.1781 - val_loss: 32.6290\nEpoch 13/30\n427/427 [==============================] - 0s 98us/step - loss: 20.1422 - val_loss: 32.8353\nEpoch 14/30\n427/427 [==============================] - 0s 91us/step - loss: 20.1181 - val_loss: 32.6664\nEpoch 15/30\n427/427 [==============================] - 0s 98us/step - loss: 20.0899 - val_loss: 33.2220\n" ], [ "#training a new model on the same data to show the effect of increasing model capacity\n\n#create model\nmodel_mc = Sequential()\n\n#add model layers\nmodel_mc.add(Dense(200, activation='relu', input_shape=(n_cols,)))\nmodel_mc.add(Dense(200, activation='relu'))\nmodel_mc.add(Dense(200, activation='relu'))\nmodel_mc.add(Dense(1))\n\n#compile model using mse as a measure of model performance\nmodel_mc.compile(optimizer='adam', loss='mean_squared_error')", "_____no_output_____" ], [ "#train 
model\nmodel_mc.fit(train_X, train_y, validation_split=0.2, epochs=30, callbacks=[early_stopping_monitor])", "Train on 427 samples, validate on 107 samples\nEpoch 1/30\n427/427 [==============================] - 3s 7ms/step - loss: 39.2107 - val_loss: 40.3353\nEpoch 2/30\n427/427 [==============================] - 0s 226us/step - loss: 22.6596 - val_loss: 43.5961\nEpoch 3/30\n427/427 [==============================] - 0s 230us/step - loss: 21.1501 - val_loss: 35.9529\nEpoch 4/30\n427/427 [==============================] - 0s 222us/step - loss: 20.0189 - val_loss: 34.9528\nEpoch 5/30\n427/427 [==============================] - 0s 220us/step - loss: 19.5254 - val_loss: 33.4263\nEpoch 6/30\n427/427 [==============================] - 0s 221us/step - loss: 19.1911 - val_loss: 29.8092\nEpoch 7/30\n427/427 [==============================] - 0s 218us/step - loss: 18.8980 - val_loss: 28.0586\nEpoch 8/30\n427/427 [==============================] - 0s 210us/step - loss: 21.4672 - val_loss: 28.4561\nEpoch 9/30\n427/427 [==============================] - 0s 188us/step - loss: 22.2891 - val_loss: 28.5794\nEpoch 10/30\n427/427 [==============================] - 0s 189us/step - loss: 19.4059 - val_loss: 32.4649\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a48d8aa0ed003252df528d7e69791f000e29563
341,333
ipynb
Jupyter Notebook
00tutorial.ipynb
heyredhat/qbism
192333b725495c6b66582f7a7b0b4c18a2f392a4
[ "Apache-2.0" ]
2
2021-01-27T18:39:12.000Z
2021-02-01T06:57:02.000Z
00tutorial.ipynb
heyredhat/qbism
192333b725495c6b66582f7a7b0b4c18a2f392a4
[ "Apache-2.0" ]
null
null
null
00tutorial.ipynb
heyredhat/qbism
192333b725495c6b66582f7a7b0b4c18a2f392a4
[ "Apache-2.0" ]
null
null
null
60.638302
95,208
0.674834
[ [ [ "#hide\nfrom qbism import *", "_____no_output_____" ] ], [ [ "# Tutorial", "_____no_output_____" ], [ "> \"Chauncey Wright, a nearly forgotten philosopher of real merit, taught me when young that I must not say necessary about the universe, that we don’t know whether anything is necessary or not. So I describe myself as a bettabilitarian. I believe that we can bet on the behavior of the universe in its contact with us.\" (Oliver Wendell Holmes, Jr.)", "_____no_output_____" ], [ "QBism, as I understand it, consists of two interlocking components, one part philosophical and one part mathematical. We'll deal with the mathematical part first.", "_____no_output_____" ], [ "## The Math\n\nA Von Neumann measurement consists in a choice of observable represented by a Hermitian operator $H$. Such an operator will have real eigenvalues and orthogonal eigenvectors. For example, $H$ could be the energy operator. Then the eigenvectors would represent possible energy states, and the eigenvalues would represent possible values of the energy. According to textbook quantum mechanics, which state the system ends up in after a measurement will in general be random, and quantum mechanics allows you to calculate the probabilities. \n\nA Hermitian observable provides what is known as a \"projection valued measure.\" Suppose our system were represented by a density matrix $\\rho$. We could form the projectors $P_{i} = \\mid v_{i} \\rangle \\langle v_{i} \\mid$, where $\\mid v_{i} \\rangle$ is the $i^{th}$ eigenvector. Then the probability for the $i^{th}$ outcome would be given by $Pr(i) = tr(P_{i}\\rho)$, and the state after measurement would be given by $\\frac{P_{i} \\rho P_{i}}{tr(P_{i}\\rho)}$. Moreover, the expectation value of the observable $\\langle H \\rangle$ would be given by $tr(H\\rho)$, and it amounts to a sum over the eigenvalues weighted by the corresponding probabilities.\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport qutip as qt\n\nd = 2\nrho = qt.rand_dm(d)\nH = qt.rand_herm(d)\nL, V = H.eigenstates()\nP = [v*v.dag() for v in V]\np = [(proj*rho).tr() for proj in P]\n\nprint(\"probabilities: %s\" % p)\nprint(\"expectation value: %.3f\" % (H*rho).tr())\nprint(\"expectation value again: %.3f\" % (sum([L[i]*p[i] for i in range(d)])))", "probabilities: [0.7863582631986583, 0.2136417368013418]\nexpectation value: 0.002\nexpectation value again: 0.002\n" ] ], [ [ "<hr>\n\nBut there is a more general notion of measurement: a POVM (a positive operator valued measure). A POVM consists in a set of positive semidefinite operators that sum to the identity, i.e., a set $\\{E_{i}\\}$ such that $\\sum_{i} E_{i} = I$. Positive semidefinite just means that the eigenvalues must be non-negative, so that $\\langle \\psi \\mid E \\mid \\psi \\rangle$ is always positive or zero for any $\\mid \\psi \\rangle$. Indeed, keep in mind that density matrices are defined by Hermitian, positive semi-definite operators with trace $1$.\n\nFor a POVM, each *operator* corresponds to a possible outcome of the experiment, and whereas for a Von Neumann measurement, assuming no degeneracies, there would be $d$ possible outcomes, corresponding to the dimension of the Hilbert space, there can be *any* number of outcomes to a POVM measurement, as long as all the associated operators sum to the identity. The probability of an outcome, however, is similarly given by $Pr(i) = tr(E_{i}\\rho)$. 
\n\nIf we write each $E_{i}$ as a product of so-called Kraus operators $E_{i} = A_{i}^{\\dagger}A_{i}$, then the state after measurement will be: $\\frac{A_{i}\\rho A_{i}^{\\dagger}}{tr(E_{i}\\rho)}$. The Kraus operators, however, aren't uniquely defined by the POVM, and so the state after measurement will depend on its implementation: to implement POVM's, you couple your system to an auxiliary system and make a standard measurement on the latter. We'll show how to do that in a little bit!\n\nIn the case we'll be considering, however, the $\\{E_{i}\\}$ will be rank-1, and so the state after measurement will be $\\frac{\\Pi_{i}\\rho \\Pi_{i}}{tr(\\Pi_{i}\\rho)}$ as before, where $\\Pi_{i}$ are normalized projectors associated to each element of the POVM (details to follow).\n\n(For reference, recall that spin coherent states form an \"overcomplete\" basis, or frame, for spin states of a given $j$ value. This can be viewed as a POVM. In this case, the POVM would have an infinite number of elements, one for each point on the sphere: and the integral over the sphere gives $1$.)", "_____no_output_____" ], [ "<hr>\n\nA very special kind of POVM is a so-called SIC-POVM: a symmetric informationally complete positive operator valued measure. They've been conjectured to exist in all dimensions, and numerical evidence suggests this is indeed the case. For a given Hilbert space of dimension $d$, a SIC is a set of $d^2$ rank-one projection operators $\\Pi_{i} = \\mid \\psi_{i} \\rangle \\langle \\psi_{i} \\mid$ such that:\n\n$$tr(\\Pi_{k}\\Pi_{l}) = \\frac{d\\delta_{k,l} + 1}{d+1} $$\n\nSuch a set of projectors will be linearly independent, and if you rescale them to $\\frac{1}{d}\\Pi_{i}$, they form a POVM: $\\sum_{i} \\frac{1}{d} \\Pi_{i} = I$. \n\nThe key point is that for any quantum state $\\rho$, a SIC specifies a measurement *for which the probabilities of outcomes $p(i)$ specify $\\rho$ itself*. Normally, say, in the case of a qubit, we'd have to measure the separate expectation values $(\\langle X \\rangle, \\langle Y \\rangle, \\langle Z \\rangle)$ to nail down the state: in other words, we'd have to repeat three *different* measurements many times. But for a SIC-POVM, the probabilities on each of the elements of the POVM fully determine the state: we're talking here about a *single* type of measurement.", "_____no_output_____" ], [ "<hr>\n\nThanks to Chris Fuchs & Co., we have a repository of SIC-POVM's in a variety of dimensions. One can download them [here](http://www.physics.umb.edu/Research/QBism/solutions.html). You'll get a zip of text files, one for each dimension: and in each text file will be a single complex vector: the "fiducial" vector. From this vector, the SIC can be derived. 
\n\nIn order to do this, we first define (with Sylvester) the unitary clock and shift matrices for a given dimension $d$:\n\n$$\nX = \\begin{pmatrix}\n0 & 0 & 0 & \\cdots & 0 & 1\\\\\n1 & 0 & 0 & \\cdots & 0 & 0\\\\\n0 & 1 & 0 & \\cdots & 0 & 0\\\\\n0 & 0 & 1 & \\cdots & 0 & 0\\\\\n\\vdots & \\vdots & \\vdots & \\ddots &\\vdots &\\vdots\\\\\n0 & 0 & 0 & \\cdots & 1 & 0\\\\ \n\\end{pmatrix}\n$$\n\n$$\nZ = \\begin{pmatrix}\n1 & 0 & 0 & \\cdots & 0\\\\\n0 & \\omega & 0 & \\cdots & 0\\\\\n0 & 0 & \\omega^2 & \\cdots & 0\\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\\n0 & 0 & 0 & \\cdots & \\omega^{d-1}\n\\end{pmatrix}\n$$\n\nWhere $\\omega = e^{\\frac{2\\pi i}{d}}$.\n\nNote that when $d=2$, this amounts to Pauli $X$ and $Z$.", "_____no_output_____" ] ], [ [ "def shift(d):\n return sum([qt.basis(d, i+1)*qt.basis(d, i).dag()\\\n if i != d-1 else qt.basis(d, 0)*qt.basis(d, i).dag()\\\n for i in range(d) for j in range(d)])/d\n\ndef clock(d):\n w = np.exp(2*np.pi*1j/d)\n return qt.Qobj(np.diag([w**i for i in range(d)]))", "_____no_output_____" ] ], [ [ "We can then define displacement operators:\n\n$$D_{a,b} = (-e^{\\frac{i\\pi}{d}})^{ab}X^{b}Z^{a} $$\n\nFor $a, b$ each from $0$ to $d$.", "_____no_output_____" ] ], [ [ "def displace(d, a, b):\n Z, X = clock(d), shift(d)\n return (-np.exp(1j*np.pi/d))**(a*b)*X**b*Z**a\n\ndef displacement_operators(d):\n return dict([((a, b), displace(d, a, b)) for b in range(d) for a in range(d)])", "_____no_output_____" ] ], [ [ "Finally, if we act on the fiducial vector with each of the displacement operators, we obtain the $d^2$ pure states, whose projectors, weighted by $\\frac{1}{d}$, form the SIC-POVM.", "_____no_output_____" ] ], [ [ "def sic_states(d):\n fiducial = load_fiducial(d)\n return [D*fiducial for index, D in displacement_operators(d).items()]", "_____no_output_____" ] ], [ [ "Cf. `load_fiducial`.\n\nBy the way, this construction works because these SIC-POVM's are covariant under the Weyl-Heisenberg group. This means is that if you apply one of those displacement operators to all the SIC states, you get the same set of SIC states back! They just switch places among themselves. 
(It's also worth considering the action of elements of the \"Clifford group\", since these operators leave the Weyl-Heisenberg group invariant or, in other words, \"normalize\" it.)", "_____no_output_____" ] ], [ [ "sic = sic_states(2)\nD = displacement_operators(2)\nprint(sic)\nprint()\nprint([D[(1,1)]*state for state in sic])", "[Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[0.11931164+0.88002265j]\n [0.36578174+0.27843956j]], Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[ 0.11931164+0.88002265j]\n [-0.36578174-0.27843956j]], Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[0.36578174+0.27843956j]\n [0.11931164+0.88002265j]], Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[-0.27843956+0.36578174j]\n [ 0.88002265-0.11931164j]]]\n\n[Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[-0.27843956+0.36578174j]\n [ 0.88002265-0.11931164j]], Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[0.27843956-0.36578174j]\n [0.88002265-0.11931164j]], Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[-0.88002265+0.11931164j]\n [ 0.27843956-0.36578174j]], Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket\nQobj data =\n[[0.11931164+0.88002265j]\n [0.36578174+0.27843956j]]]\n" ] ], [ [ "As far as anyone knows, the construction seems to work for SIC's in all dimensions. It's worth noting, however, the exceptional case of $d=8$, where there is *also another* SIC-POVM covariant under the tensor product of three copies of the Pauli group ($d=2$). Cf. `hoggar_fiducial`.\n\nWe can test that a given SIC has the property:\n\n$$tr(\\Pi_{k}\\Pi_{l}) = \\frac{d\\delta_{k,l} + 1}{d+1} $$", "_____no_output_____" ] ], [ [ "def test_sic_states(states):\n d = int(np.sqrt(len(states)))\n for i, s in enumerate(states):\n for j, t in enumerate(states):\n should_be = 1 if i == j else 1/(d+1)\n print(\"(%d, %d): %.4f | should be: %.4f\" % (i, j, np.abs(s.overlap(t)**2), should_be))\n\nstates = sic_states(2)\ntest_sic_states(states)", "(0, 0): 1.0000 | should be: 1.0000\n(0, 1): 0.3333 | should be: 0.3333\n(0, 2): 0.3333 | should be: 0.3333\n(0, 3): 0.3333 | should be: 0.3333\n(1, 0): 0.3333 | should be: 0.3333\n(1, 1): 1.0000 | should be: 1.0000\n(1, 2): 0.3333 | should be: 0.3333\n(1, 3): 0.3333 | should be: 0.3333\n(2, 0): 0.3333 | should be: 0.3333\n(2, 1): 0.3333 | should be: 0.3333\n(2, 2): 1.0000 | should be: 1.0000\n(2, 3): 0.3333 | should be: 0.3333\n(3, 0): 0.3333 | should be: 0.3333\n(3, 1): 0.3333 | should be: 0.3333\n(3, 2): 0.3333 | should be: 0.3333\n(3, 3): 1.0000 | should be: 1.0000\n" ] ], [ [ "In the case of a two dimensional Hilbert space, the SIC-POVM states will form a regular tetrahedron in the Bloch sphere:", "_____no_output_____" ] ], [ [ "pts = np.array([[qt.expect(qt.sigmax(), state),\\\n qt.expect(qt.sigmay(), state),\\\n qt.expect(qt.sigmaz(), state)] for state in states])\n\nsphere = qt.Bloch()\nsphere.point_size = [300]\nsphere.add_points(pts.T)\nsphere.add_vectors(pts)\nsphere.make_sphere()", "_____no_output_____" ] ], [ [ "In general, in higher dimensions, the study of SIC's is a very interesting geometry problem involving the study of \"maximal sets of complex equiangular lines,\" which has implications in various domains of mathematics.", "_____no_output_____" ] ], [ [ "def sic_povm(d):\n return [(1/d)*state*state.dag() for state in sic_states(d)]\n\nd = 2\nref_povm = 
sic_povm(d)\nprint(\"elements sum to identity? %s\" % np.allclose(sum(ref_povm), qt.identity(d)))", "elements sum to identity? True\n" ] ], [ [ "Given a density matrix $\\rho$, we can expand it in terms of the SIC-POVM elements via $tr(E_{i}\\rho)$:", "_____no_output_____" ] ], [ [ "def dm_probs(dm, ref_povm):\n return np.array([(e*dm).tr() for e in ref_povm]).real\n\nrho = qt.rand_dm(d)\np = dm_probs(rho, ref_povm)\nprint(\"probabilities: %s\" % p)\nprint(\"sum to 1? %s\" % np.isclose(sum(p), 1))", "probabilities: [0.1895833 0.1895833 0.3104167 0.3104167]\nsum to 1? True\n" ] ], [ [ "From these probabilities, we can uniquely reconstruct the density matrix via:\n\n$$ \\rho = \\sum_{i} ((d+1)p(i) - \\frac{1}{d})\\Pi_{i} $$\n\nWhere $\\Pi_{i}$ are the projectors onto the SIC states: $E_{i} = \\frac{1}{d}\\Pi_{i}$.\n\nOr given the fact that $\\sum_{i} \\frac{1}{d} \\Pi_{i} = I$:\n\n$$\\rho = (d+1) \\sum_{i} p(i)\\Pi_{i} - I $$\n", "_____no_output_____" ] ], [ [ "def probs_dm_sic(p, ref_povm):\n d = int(np.sqrt(len(p)))\n return sum([((d+1)*p[i] - 1/d)*(e/e.tr()) for i, e in enumerate(ref_povm)])\n\ndef probs_dm_sic2(p, ref_povm):\n d = int(np.sqrt(len(p)))\n return (d+1)*sum([p[i]*e/e.tr() for i, e in enumerate(ref_povm)]) - qt.identity(d)\n\nrho2 = probs_dm_sic(p, ref_povm)\nrho3 = probs_dm_sic2(p, ref_povm)\nprint(\"recovered? %s\" % (np.allclose(rho, rho2, rtol=1e-02, atol=1e-04) and np.allclose(rho, rho3, rtol=1e-02, atol=1e-04)))", "recovered? True\n" ] ], [ [ "<hr>\n\nNow suppose we have the following situation. We first make a SIC-POVM measurement, and then we make a standard Von Neumann (PVM) measurement on a given system. Following the vivid imagery of Fuchs, we'll refer to the SIC-POVM as being \"up in the sky\" and the Von Neumann measurement as being \"down on the ground\".\n\nSo given our state $\\rho$, above we've calculated the probabilities $p(i)$ for each outcome of the POVM. Now we'd like to assign probabilities for the outcomes of the Von Neumann measurement. What we need are the conditional probabilities $r(j|i)$, the probability of Von Neumann outcome $j$ given that the SIC-POVM returned $i$. Then:\n\n$s(j) = \\sum_{i}^{d^2} p(i)r(j|i)$\n\nThis is just standard probability theory: the law of total probability. The probability for an outcome $j$ of the Von Neumann measurement is the sum over all the conditional probabilities for $j$, given some outcome $i$ of the SIC-POVM, multiplied by the probability that $i$ occured.\n\nThe standard way of thinking about this would be that after the SIC-POVM measurement:\n\n$\\rho^{\\prime} = \\sum_{i} p(i)\\Pi_{i}$\n\nIn other words, after the first measurement, $\\rho$ becomes a mixture of outcome states weighted by the probabilities of them occuring. In this simple case, where we aren't considering a subsystem of larger system, and we're sticking with SIC-POVM's whose elements, we recall, are rank-1, we can just use the projectors $\\Pi_{i}$ for the SIC-POVM outcome states. 
Then the probabilities for the Von Neumann measurement are:\n\n$s(j) = tr(\\tilde{\\Pi}_{j}\\rho^{\\prime})$\n\nWhere $\\tilde{\\Pi}_{j}$ is the projector for the $j^{th}$ Von Neumann outcome.", "_____no_output_____" ] ], [ [ "von_neumann = qt.rand_herm(d)\nvn_projectors = [v*v.dag() for v in von_neumann.eigenstates()[1]]\n\nvn_rho = sum([prob*ref_povm[i]/ref_povm[i].tr() for i, prob in enumerate(p)])\nvn_s = np.array([(proj*vn_rho).tr() for proj in vn_projectors]).real\n\nprint(\"vn probabilities after sic: %s\" % vn_s)", "vn probabilities after sic: [0.47071558 0.52928442]\n" ] ], [ [ "Alternatively, however, we could form conditional probabilities directly:\n\n$r(j|i) = tr(\\tilde{\\Pi}_{j}\\Pi_{i})$\n\nWhere $\\Pi_{i}$ is the projector for the $i^{th}$ POVM outcome (in the sky), and $\\tilde{\\Pi}_{j}$ is the projector for the $j^{th}$ Von Neumann outcome (on the ground).\n\nThen we can use the formula:\n\n$s(j) = \\sum_{i}^{d^2} p(i)r(j|i)$\n", "_____no_output_____" ] ], [ [ "def vn_conditional_probs(von_neumann, ref_povm):\n d = von_neumann.shape[0]\n vn_projectors = [v*v.dag() for v in von_neumann.eigenstates()[1]]\n return np.array([[(vn_projectors[j]*(e/e.tr())).tr() for i, e in enumerate(ref_povm)] for j in range(d)]).real\n\ndef vn_posterior(dm, von_neumann, ref_povm):\n d = dm.shape[0]\n p = dm_probs(rho, ref_povm)\n r = vn_conditional_probs(von_neumann, ref_povm)\n return np.array([sum([p[i]*r[j][i] for i in range(d**2)]) for j in range(d)])\n\nprint(\"vn probabilities after sic: %s\" % vn_posterior(rho, von_neumann, ref_povm))", "vn probabilities after sic: [0.47071558 0.52928442]\n" ] ], [ [ "Indeed, $r(j|i)$ is a valid conditional probability matrix: its columns all sum to 1.", "_____no_output_____" ] ], [ [ "np.sum(vn_conditional_probs(von_neumann, ref_povm), axis=0)", "_____no_output_____" ] ], [ [ "Incidentally, there's no need to confine ourselves to the case of Von Neumann measurements. Suppose the \"measurement on the ground\" is given by another POVM. In fact, we can get one by just rotating our SIC-POVM by some random unitary. We'll obtain another SIC-POVM $\\{F_{j}\\}$.\n\nIn this case, we'd form $\\rho^{\\prime} = \\sum_{i} p(i)\\Pi_{i}$ just as before, and then take $s(j) = tr(F_{j}\\rho^{\\prime})$.", "_____no_output_____" ] ], [ [ "U = qt.rand_unitary(d)\nground_povm = [U*e*U.dag() for e in ref_povm]\n\npovm_rho = sum([prob*ref_povm[i]/ref_povm[i].tr() for i, prob in enumerate(p)])\npovm_s = np.array([(e*povm_rho).tr() for e in ground_povm]).real\n\nprint(\"povm probabilities after sic: %s\" % povm_s)", "povm probabilities after sic: [0.2166969 0.25169366 0.26812392 0.26348553]\n" ] ], [ [ "And alternatively, we could work with the conditional probabilities:\n\n$r(j|i) = tr(F_{j}\\Pi_{i})$\n\nAnd then apply:\n\n$s(j) = \\sum_{i}^{d^2} p(i)r(j|i)$\n\nWhere now $j$ will range from $0$ to $d^2$.", "_____no_output_____" ] ], [ [ "def povm_conditional_probs(povm, ref_povm):\n d = int(np.sqrt(len(ref_povm)))\n return np.array([[(a*(b/b.tr())).tr() for i, b in enumerate(ref_povm)] for j, a in enumerate(povm)]).real\n\ndef povm_posterior(dm, povm, ref_povm):\n d = dm.shape[0]\n p = dm_probs(dm, ref_povm)\n r = povm_conditional_probs(povm, ref_povm)\n return np.array([sum([p[i]*r[j][i] for i in range(d**2)]) for j in range(d**2)])\n\nprint(\"povm probabilities after sic: %s\" % povm_posterior(rho, ground_povm, ref_povm))", "povm probabilities after sic: [0.2166969 0.25169366 0.26812392 0.26348553]\n" ] ], [ [ "<hr>\n\nOkay, now we get to the punch line. 
Let's consider the case of the Von Neumann measurement. Suppose we *didn't* make the SIC-POVM measurement first. What would the probabilities be? Well, we all know:\n\n$q(j) = tr(\\tilde{\\Pi}_{i}\\rho)$", "_____no_output_____" ] ], [ [ "vn_p = np.array([(proj*rho).tr() for proj in vn_projectors]).real\nprint(\"vn probabilities (no sic in the sky): %s\" % vn_p)", "vn probabilities (no sic in the sky): [0.41214673 0.58785327]\n" ] ], [ [ "Now it turns out that we can get these same probabilities in a different way:\n\n$q(j) = (d+1)[\\sum_{i}^{d^2} p(i)r(j|i)] - 1$", "_____no_output_____" ] ], [ [ "def vn_born(dm, von_neumann, ref_povm):\n d = dm.shape[0]\n p = dm_probs(dm, ref_povm)\n r = vn_conditional_probs(von_neumann, ref_povm)\n return np.array([(d+1)*sum([p[i]*r[j][i] for i in range(d**2)]) - 1 for j in range(d)]).real\n\nprint(\"vn probabilities (no sic in the sky): %s\" % vn_born(rho, von_neumann, ref_povm))", "vn probabilities (no sic in the sky): [0.41214673 0.58785327]\n" ] ], [ [ "In other words, we can express the usual quantum probabilities in the case that we go directly to the Von Neumann measurement in a way that looks *ridiculously* close to our formula from before, involving probabilities for the SIC-POVM outcomes and conditional probabilities for Von Neumann outcomes given SIC-POVM outcomes! We sum over *hypothetical* outcomes of the SIC-POVM, multiplying the probability of each outcome, given our state $\\rho$, by the conditional probability for the Von Neumann measurement giving the $j^{th}$ outcome, given that the SIC-POVM outcome was $i$. Except the formula is somewhat deformed by the the $(d+1)$ and the $-1$. \n\nClearly, this is equivalent to the usual Born Rule: but it's expressed *entirely* in terms of probabilities and conditional probabilities. It makes sense, in the end, that you can do this, given that the probabilities for the SIC-POVM measurement completely nail down the state. The upshot is that we can just work with the probabilities instead! Indeed, we could just pick some SIC-POVM to be our \"reference apparatus\", and describe any quantum state we're ever interested in terms of probabilities with reference to it, and any measurement in terms of conditional probabilities. \n\nOperationally, what *is* difference between:\n\n$s(j) = \\sum_{i}^{d^2} p(i)r(j|i)$\n\nand\n\n$q(j) = (d+1)[\\sum_{i}^{d^2} p(i)r(j|i)] - 1$\n\nThe difference is precisely *whether the SIC-POVM measurement has actually been performed*. If it has, then we lose quantum coherence. If it hasn't, we maintain it. In other words, the difference between classical and quantum is summed up in the minor difference between these two formulas.\n\nIn slogan form, due to Asher Peres, \"unperformed measurements have no results.\" We'll get to the philosophy of this later, but the point is that classically speaking, we should be able to use the law of total probability *whether or not we actually do the measurement in the sky*: but quantum mechanically, if we don't actually do the measurement, we can't. 
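To make the gap concrete, here is a quick side-by-side sketch, reusing `rho`, `von_neumann`, and `ref_povm` from the cells above: the first line applies the classical law of total probability (the sic actually measured), the second the deformed rule (the sic left unperformed). Generically the two differ, coinciding, for example, for the maximally mixed state.", "_____no_output_____" ] ], [ [ "# side-by-side (sketch): should reproduce vn_posterior and vn_born from above\np = dm_probs(rho, ref_povm)\nr = vn_conditional_probs(von_neumann, ref_povm)\nprint(\"sic actually measured: %s\" % (r @ p))\nprint(\"sic left unperformed:  %s\" % (r @ ((d+1)*p - 1/d)))", "_____no_output_____" ] ], [ [ "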
But we have something just as good: the Born Rule.", "_____no_output_____" ], [ "<hr>\n\nIf we want to consider a more general measurement \"on the ground,\" in particular, another SIC-POVM measurement, then our formula becomes:\n\n$q(j) = (d+1)[\\sum_{i}^{d^2} p(i)r(j|i)] - \\frac{1}{d}[\\sum_{i}^{d^2} r(j|i) ]$\n\nWhere now $i$ ranges to $d^2$.", "_____no_output_____" ] ], [ [ "print(\"povm probabilities (no sic in the sky): %s\" % dm_probs(rho, ground_povm))", "povm probabilities (no sic in the sky): [0.15009069 0.25508097 0.30437175 0.2904566 ]\n" ], [ "def povm_born(dm, povm, ref_povm):\n d = dm.shape[0]\n p = dm_probs(dm, ref_povm)\n r = povm_conditional_probs(povm, ref_povm)\n return np.array([(d+1)*sum([p[i]*r[j][i] for i in range(d**2)]) - (1/d)*sum([r[j][i] for i in range(d**2)]) for j in range(d**2)]).real\n\nprint(\"povm probabilities (no sic in the sky): %s\" % povm_born(rho, ground_povm, ref_povm))", "povm probabilities (no sic in the sky): [0.15009069 0.25508097 0.30437175 0.2904566 ]\n" ] ], [ [ "We can write these rules in much more compact matrix form.\n\nDefine $\\Phi = (d+1)I_{d^2} - \\frac{1}{d}J_{d^2}$\n\nWhere $I_{d^2}$ is the $d^2 \\times d^2$ identity, and $J_{d^2}$ is the $d^2 \\times d^2$ matrix all full of $1$'s.\n\nIf $R$ is the matrix of conditional probabilities, and $p$ is the vector of probabilities for the reference POVM in the sky, then the vector of values for $q(i)$ is:\n\n$\\vec{q} = R \\Phi p$\n\n", "_____no_output_____" ] ], [ [ "def vn_born_matrix(dm, von_neumann, ref_povm):\n d = rho.shape[0]\n p = dm_probs(dm, ref_povm)\n r = vn_conditional_probs(von_neumann, ref_povm)\n phi = (d+1)*np.eye(d**2) - (1/d)*np.ones((d**2,d**2))\n return r @ phi @ p\n\nprint(\"vn probabilities (no sic in the sky): %s\" % vn_born_matrix(rho, von_neumann, ref_povm))\n\ndef povm_born_matrix(dm, povm, ref_povm):\n d = dm.shape[0]\n p = dm_probs(dm, ref_povm)\n r = povm_conditional_probs(povm, ref_povm)\n phi = (d+1)*np.eye(d**2) - (1/d)*np.ones((d**2,d**2))\n return r @ phi @ p\nprint(\"povm probabilities (no sic in the sky): %s\" % povm_born_matrix(rho, ground_povm, ref_povm))", "vn probabilities (no sic in the sky): [0.41214673 0.58785327]\npovm probabilities (no sic in the sky): [0.15009069 0.25508097 0.30437175 0.2904566 ]\n" ] ], [ [ "And for that matter, we can calculate the \"classical\" probabilities from before in the same vectorized way: we just leave out $\\Phi$!", "_____no_output_____" ] ], [ [ "print(\"vn probabilities after sic: %s\" % (vn_conditional_probs(von_neumann, ref_povm) @ dm_probs(rho, ref_povm)))\nprint(\"povm probabilities after sic: %s\" % (povm_conditional_probs(ground_povm, ref_povm) @ dm_probs(rho, ref_povm)))", "vn probabilities after sic: [0.47071558 0.52928442]\npovm probabilities after sic: [0.2166969 0.25169366 0.26812392 0.26348553]\n" ] ], [ [ "In fact, this this is how qbist operators are implemented in this library behind the scenes. It allows one to easily handle the general case of IC-POVM's (informationally complete POVM's) which aren't SIC's: in that case, the matrix $\\Phi$ will be different. Cf. `povm_phi`.", "_____no_output_____" ], [ "<hr>\n\nLet's consider time evolution in this picture. 
We evolve our $\\rho$ by some unitary:\n\n$\\rho_{t} = U \\rho U^{\\dagger}$\n\nNaturally, we can calculate the new probabilities with reference to our SIC-POVM:", "_____no_output_____" ] ], [ [ "U = qt.rand_unitary(d)\nrhot = U*rho*U.dag()\npt = dm_probs(rhot, ref_povm)\n\nprint(\"time evolved probabilities: %s\" % pt)", "time evolved probabilities: [0.1895833 0.1895833 0.3104167 0.3104167]\n" ] ], [ [ "But we could also express this in terms of conditional probabilities:\n\n$u(j|i) = \\frac{1}{d}tr(\\Pi_{j}U\\Pi_{i}U^{\\dagger})$\n\nAs:\n\n$p_{t}(j) = \\sum_{i}^{d^2} ((d+1)p(i) - \\frac{1}{d})u(j|i)$\n", "_____no_output_____" ] ], [ [ "def temporal_conditional_probs(U, ref_povm):\n d = U.shape[0]\n return np.array([[(1/d)*((a/a.tr())*U*(b/b.tr())*U.dag()).tr() for i, b in enumerate(ref_povm)] for j, a in enumerate(ref_povm)]).real\n\nu = temporal_conditional_probs(U, ref_povm)\npt2 = np.array([sum([((d+1)*p[i] - 1/d)*u[j][i] for i in range(d**2)]) for j in range(d**2)]).real\nprint(\"time evolved probabilities: %s\" % pt2)", "time evolved probabilities: [0.1895833 0.1895833 0.3104167 0.3104167]\n" ] ], [ [ "We can compare this to the standard rule for stochastic evolution:\n\n$p_{t}(j) = \\sum_{i} p(i)u(j|i)$\n\nWe can see how the expression is deformed in exactly the same way. Indeed $u(j|i)$ is a doubly stochastic matrix: its rows and colums all sum to 1. And we can describe the time evolution of the quantum system in terms of it.", "_____no_output_____" ] ], [ [ "print(np.sum(u, axis=0))\nprint(np.sum(u, axis=1))", "[1. 1. 1. 1.]\n[1. 1. 1. 1.]\n" ] ], [ [ "For more on the subleties of time evolution, consider the notes on `conditional_probs`.", "_____no_output_____" ], [ "<hr>\n\nYou can express the inner product between states in terms of SIC-POVM probability vectors via:\n\n$tr(\\rho \\sigma) = d(d+1)[\\vec{p} \\cdot \\vec{s}] - 1$", "_____no_output_____" ] ], [ [ "d = 3\nref_povm = sic_povm(d)\n\nrho = qt.rand_dm(d)\nsigma = qt.rand_dm(d)\n\np = dm_probs(rho, ref_povm)\ns = dm_probs(sigma, ref_povm)\n\ndef quantum_inner_product_sic(p, s):\n d = int(np.sqrt(len(p)))\n return d*(d+1)*np.dot(p, s) - 1\n\nprint(\"inner product of rho and sigma: %.3f\" % (rho*sigma).tr().real)\nprint(\"inner product of rho and sigma: %.3f\" % quantum_inner_product_sic(p, s))", "inner product of rho and sigma: 0.333\ninner product of rho and sigma: 0.333\n" ] ], [ [ "This brings up an important point.\n\nYou might wonder: Suppose we have a SIC-POVM with $d^2$ elements which provides $d^2$ probabilities which completely nail down the quantum state, given as a $d \\times d$ density matrix. But what if we just start off with any old random vector of $d^2$ probabilities? Will we always get a valid density matrix? In other words, we've seen how we can start with quantum states, and then proceed to do quantum mechanics entirely in terms of probabilities and conditional probabilities. But now we're considering going in reverse. Does *any* assignment of probabilities to SIC-POVM outcomes specify a valid quantum state?\n\nWell: any probability assignment will give us a $\\rho$ which is Hermitian and has trace 1, which is great--BUT: this $\\rho$ may not be positive-semidefinite (which is a requirement for density matrices). Like: if you assigned any old probabilites to the SIC-POVM outcomes, and then constructed a correponding $\\rho$, it might end up having negative eigenvalues. 
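We can see this directly with a quick sketch: draw a random point on the probability simplex (here via a Dirichlet draw, an arbitrary choice) and reconstruct $\\rho$ with `probs_dm_sic` from above; for a typical draw, some of the eigenvalues come out negative.", "_____no_output_____" ] ], [ [ "d = 3\nref_povm = sic_povm(d)\n\nrandom_p = np.random.dirichlet(np.ones(d**2)) # a random point on the probability simplex\nrandom_rho = probs_dm_sic(random_p, ref_povm)\nprint(\"eigenvalues: %s\" % np.round(random_rho.eigenenergies(), 3))\nprint(\"positive semidefinite? %s\" % np.all(random_rho.eigenenergies() >= -1e-12))", "_____no_output_____" ] ], [ [ "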
Since the eigenvalues of $\\rho$ are supposed to be probabilities (positive, summing to 1, etc), this is a problem.\n\nIn fact, you can't even have probability vectors that are too sharply peaked at any one value!\n", "_____no_output_____" ] ], [ [ "d = 3\npovm = sic_povm(d)\n\nvec = np.zeros(d**2)\nvec[np.random.randint(d**2)] = 1\n\nprint(\"probs: %s\" % vec)\nprint(probs_dm(vec, povm))", "probs: [0. 0. 0. 0. 0. 0. 1. 0. 0.]\nQuantum object: dims = [[3], [3]], shape = (3, 3), type = oper, isherm = True\nQobj data =\n[[ 0.09996448+0.j 0.58704937+0.00894882j 0.85113669-1.45631434j]\n [ 0.58704937-0.00894882j -0.68689371+0.j 0.44220778-0.78382399j]\n [ 0.85113669+1.45631434j 0.44220778+0.78382399j 1.58692923+0.j ]]\n" ] ], [ [ "Note the negative entries. Furthermore, even if we start off in a SIC-POVM state, that doesn't mean we'll get that state with certainty after the measurement--indeed, unlike with projective measurements, repeated measurements don't always give the same results. ", "_____no_output_____" ] ], [ [ "d = 3\npovm = sic_povm(d)\nprint(dm_probs(povm[0]/povm[0].tr(), povm))", "[0.33333333 0.08332833 0.08332833 0.08332833 0.08332833 0.08334835\n 0.08332833 0.08334835 0.08332833]\n" ] ], [ [ "Above we see the probabilities for SIC-POVM outcomes given that we start off in the first SIC-POVM state. We see that indeed, the first SIC-POVM state has the highest probability, but all the other elements have non-zero probability (and for SIC's this is the same probability: not true for general IC-POVM's).\n\nIndeed, it's a theorem that no such probability vector can have an element which exceeds $\\frac{1}{d}$, and that the number of $0$ entries is bounded above by $\\frac{d(d-1)}{2}$.\n\n", "_____no_output_____" ], [ "So we need another constraint. In other words, the quantum state space is a *proper subset* of the probability simplex over $d^2$ outcomes. There's some very interesting work exploring the geometric aspects of this constraint. \n\nFor example, insofar as pure states are those Hermitian matrices satisfying $tr(\\rho^2) = tr(\\rho^3) = 1$, we can evidently finagle this into two conditions:\n\n$\\sum_{i}^{d^2} p(i)^2 = \\frac{2}{d(d+1)}$\n\nand\n\n$\\sum_{i,j,k} c_{i, j, k}p(i)p(j)p(k) = \\frac{d+7}{(d+1)^3}$\n\nWhere $c_{i, j, k} = \\Re{[tr(\\Pi_{i}\\Pi_{j}\\Pi_{k})]}$, which is a real-valued, completely symmetric three index tensor. The quantum state space is the <a href=\"https://en.wikipedia.org/wiki/Convex_hull\">convex hull</a> of probability distributions satisfying these two equations. \n\nOn this same note, considering our expression for the inner product, since we know that the inner product between two quantum states $\\rho$ and $\\sigma$ is bounded between $0$ and $1$, we must have:\n\n$\\frac{1}{d(d+1)} \\leq \\vec{p} \\cdot \\vec{s} \\leq \\frac{2}{d(d+1)}$\n\nThe upper bound corresponds to our first condition. Call two vectors $\\vec{p}$ and $\\vec{s}$ \"consistent\" if their inner product obeys both inequalities. If we have a subset of the probability simplex for which every pair of vectors satisfies the inequalities, call it a \"germ.\" If adding one more vector to a germ makes the set inconsistent, call the germ \"maximal.\" And finally, call a maximal germ a \"qplex.\" The space of quantum states in the SIC representation form a qplex, but not all qplexes correspond to quantum state spaces. The geometry of the qplexes are explored in <a href=\"https://arxiv.org/abs/1612.03234\">Introducing the Qplex: A Novel Arena for Quantum Theory</a>. 
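Before quoting that paper's conclusion, here is a quick numerical sanity check (a sketch) of the first pure-state condition and of the inner-product bounds, using random pure states:", "_____no_output_____" ] ], [ [ "d = 3\nref_povm = sic_povm(d)\n\np = dm_probs(qt.ket2dm(qt.rand_ket(d)), ref_povm)\ns = dm_probs(qt.ket2dm(qt.rand_ket(d)), ref_povm)\nprint(\"sum of p(i)^2: %.4f | should be: %.4f\" % (sum(p**2), 2/(d*(d+1))))\nprint(\"bounds on p.s: %.4f <= %.4f <= %.4f\" % (1/(d*(d+1)), np.dot(p, s), 2/(d*(d+1))))", "_____no_output_____" ] ], [ [ "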
The conclusion?\n\n\"\\[Turning\\] to the problem of identifying the “missing assumption” which will serve to pick out quantum state space uniquely from the set of all qplexes... Of course, as is usual in such cases, there is more than one possibility. We identify one such assumption: the requirement that the symmetry group contain a subgroup isomorphic to the projective unitary group. This is a useful result because it means that we have a complete characterization of quantum state space in probabilistic terms. It also has an important corollary: That SIC existence in dimension d is equivalent to the existence of a certain kind of subgroup of the real orthogonal group in dimension $d^2 − 1$.\"", "_____no_output_____" ], [ "<hr>\n\nHere's one final thing, for flavor. Having specified a SIC-POVM with $n$ elements and then an additional measurement (Von Neumann or POVM), we can construct the matrix $r(j|i)$.", "_____no_output_____" ] ], [ [ "d = 2\nref_povm = sic_povm(d)\nvon_neumann = qt.rand_herm(d)\n\nn = len(ref_povm)\nr = vn_conditional_probs(von_neumann, ref_povm)\nr", "_____no_output_____" ] ], [ [ "We can then consider its rows, and extract a set of vectors $s_{j}$, each of which sums to 1:\n\n$r(j|i) = n\\gamma_{j} s_{j}(i)$", "_____no_output_____" ] ], [ [ "s = np.array([row/sum(row) for row in r])\ngammas = [sum(row)/n for row in r]", "_____no_output_____" ], [ "np.array([n*gammas[i]*row for i, row in enumerate(s)])", "_____no_output_____" ] ], [ [ "We'll call these vectors $s_{j}$ \"measurement vectors.\"\n\nSuppose we're completely indifferent to the outcomes of the POVM in the sky. We could represent this by: $p(i) = \\frac{1}{n}$. In other words, equal probability for each outcome.\n\nThe probabilities for outcomes to the later Von Neumann measurement would be:\n\n$q(j) = \\frac{1}{n}\\sum_{i}r(j|i)$", "_____no_output_____" ] ], [ [ "p = [1/n for i in range(n)]\nvn_probs = np.array([sum([p[i]*r[j][i] for i in range(n)]) for j in range(d)])\nvn_probs", "_____no_output_____" ] ], [ [ "We could describe this by assigning to $\\rho$ the maximally mixed state.", "_____no_output_____" ] ], [ [ "max_mixed = qt.identity(d)/d\nvn_born(max_mixed, von_neumann, ref_povm)", "_____no_output_____" ] ], [ [ "But we could also rewrite $q(j)$ as:\n\n$q(j) = \\frac{1}{n} \\sum_{i} n\\gamma_{j} s_{j}(i) = \\gamma_{j} \\sum_{i} s_{j}(i)$\n\nAnd since the $s_{j}(i)$ sum to 1:\n\n$q(j) = \\gamma_{j}$", "_____no_output_____" ] ], [ [ "np.array([gammas[j]*sum([s[j][i] for i in range(n)]) for j in range(d)])", "_____no_output_____" ], [ "gammas", "_____no_output_____" ] ], [ [ "Thus you can interpret the $\\gamma_{j}$'s as: the probabilities of obtaining the $j^{th}$ outcome on the ground when you're completely indifferent to the potential outcomes in the sky.\n\nNow let's rewrite:\n\n$r(j|i) = n\\gamma_{j} s_{j}(i)$\n\nas\n\n$s_{j}(i) = \\frac{\\frac{1}{n}r(j|i)}{\\gamma_{j}}$\n\nWe know that $\\gamma_{j}$ is the probability of obtaining $j$ on the ground, given complete ignorance about the potential outcomes of the sky experiment. We also know that $\\frac{1}{n}$ is the probability assigned to each outcome of the sky experiment from complete indifference. \n\nSo write $Pr_{CI}(i)= \\frac{1}{n}$ and $Pr_{CI}(j) = \\gamma_{i}$, where $CI$ stands for complete ignorance/indifference. 
And we could apply the same notation: $Pr_{CI}(j|i) = r(j|i)$:\n\n$s_{j}(i) = \\frac{Pr_{CI}(i)Pr_{CI}(j|i)}{Pr_{CI}(j)}$\n\nBut this is just the Baysian formula for inverting conditional probabilities:\n\n$Pr_{CI}(i|j) = \\frac{Pr_{CI}(i)Pr_{CI}(j|i)}{Pr_{CI}(j)}$\n\nIn a similar vein:\n\n<img src=\"img/fuchs.png\">\n", "_____no_output_____" ], [ "<hr>\n\n## Interlude: Implementing POVM's\n\n", "_____no_output_____" ], [ "It's worth mentioning how POVM's are actually implemented in practice. Here's the simplest way of thinking about it. Suppose we have a system with Hilbert space dimension $d$, and we have a POVM with $n$ elements. (In the case of our SIC-POVM's, we'd have $d^2$ elements.) We then adjoin an auxilliary system with Hilbert space dimension $n$: as many dimensions as POVM elements. So now we're working with $\\mathcal{H}_{d} \\otimes \\mathcal{H}_{n}$.\n\nLet's define projectors onto the basis states of the auxilliary system: $\\Xi_{i} = I_{d} \\otimes \\mid i \\rangle \\langle i \\mid$. If we denote the elements of the POVM by $\\{ E_{i} \\}$, then we can construct an isometry:\n\n$V = \\sum_{i}^{n} \\sqrt{E_{i}} \\otimes \\mid i \\rangle$\n\nSuch that any element of the POVM can be written:\n\n$E_{i} = V^{\\dagger}\\Xi_{i}V $\n", "_____no_output_____" ] ], [ [ "d = 3\nmy_povm = sic_povm(d)\nn = len(my_povm)\n\naux_projectors = [qt.tensor(qt.identity(d), qt.basis(n, i)*qt.basis(n, i).dag()) for i in range(n)]\nV = sum([qt.tensor(my_povm[i].sqrtm(), qt.basis(n, i)) for i in range(n)])\n\npovm_elements = [V.dag()*aux_projectors[i]*V for i in range(n)]\nprint(\"recovered povm elements? %s\" % np.all([np.allclose(my_povm[i], povm_elements[i]) for i in range(n)]))", "recovered povm elements? True\n" ] ], [ [ "So this isometry $V$ takes us from $\\mathcal{H}_{d}$ to $\\mathcal{H}_{d} \\otimes \\mathcal{H}_{n}$.\nWe can extend this to a unitary $U$ (that takes $\\mathcal{H}_{d} \\otimes \\mathcal{H}_{n}$ to $\\mathcal{H}_{d} \\otimes \\mathcal{H}_{n}$) using the QR decomposition. In essence, we use the Gram-Schmidt procedure to fill out the rectangular matrix to a square matrix with extra orthogonal columns. (And then we have to rearrange the columns so that the columns of $V$ appear every $n^{th}$ column, in order to take into account the tensor product structure.)", "_____no_output_____" ] ], [ [ "Q, R = np.linalg.qr(V, mode=\"complete\")\n\nfor i in range(d):\n Q.T[[i,n*i]] = Q.T[[n*i,i]]\n Q[:,n*i] = V[:,i].T\n\nU = qt.Qobj(Q)\nU.dims = [[d, n],[d, n]]", "_____no_output_____" ] ], [ [ "We can check our work. It should be the case that:\n\n$V = U(I_{d} \\otimes \\mid 0 \\rangle)$", "_____no_output_____" ] ], [ [ "print(\"recovered V?: %s\" % np.allclose(V, U*qt.tensor(qt.identity(d), qt.basis(n, 0))))", "recovered V?: True\n" ] ], [ [ "Now for the finale. We know how to calculate the probabilities for each of the POVM outcomes. It's just:\n\n$Pr(i) = tr(E_{i}\\rho)$\n\nTo actually implement this, we start off with our auxilliary system in the $\\mid 0 \\rangle$ state, so that the overall density matrix is: $\\rho \\otimes \\mid 0 \\rangle \\langle 0 \\mid$. We then evolve the system and the auxilliary with our unitary $U$: \n\n$$U [\\rho \\otimes \\mid 0 \\rangle \\langle 0 \\mid] U^{\\dagger} $$\n\nFinally, we perform a standard Von Neumann measurement on the auxilliary system (whose outcomes correspond to the basis states we've been using). 
Recalling that we defined the projectors onto the auxilliary basis states as $\\Xi_{i} = I_{d} \\otimes \\mid i \\rangle \\langle i \\mid$, we can then write probabilities for each outcome:\n\n$Pr(i) = tr(\\Xi_{i} U [\\rho \\otimes \\mid 0 \\rangle \\langle 0 \\mid] U^{\\dagger} )$\n\nThese are the same probabilities as above.", "_____no_output_____" ] ], [ [ "rho = qt.rand_dm(d)\npovm_probs = np.array([(my_povm[i]*rho).tr() for i in range(n)]).real\n\nsystem_aux_probs = np.array([(aux_projectors[i]*\\\n U*qt.tensor(rho, qt.basis(n,0)*qt.basis(n,0).dag())*U.dag()).tr()\\\n for i in range(n)]).real\n\nprint(\"povm probs:\\n%s\" % povm_probs)\nprint(\"system and aux probs:\\n%s\" % system_aux_probs)", "povm probs:\n[0.14347539 0.12795568 0.12275462 0.12076828 0.0548661 0.06606244\n 0.11953627 0.08713102 0.15745018]\nsystem and aux probs:\n[0.14347539 0.12795568 0.12275462 0.12076828 0.0548661 0.06606244\n 0.11953627 0.08713102 0.15745018]\n" ] ], [ [ "Moreover, we can see that the states after measurement correspond to the SIC-POVM projectors:", "_____no_output_____" ] ], [ [ "states = [(aux_projectors[i]*(U*qt.tensor(rho, qt.basis(n,0)*qt.basis(n,0).dag())*U.dag())).ptrace(0) for i in range(n)]\nprint(states[0].unit())\nprint(d*my_povm[0])", "Quantum object: dims = [[3], [3]], shape = (3, 3), type = oper, isherm = True\nQobj data =\n[[0.64671347+0.j 0.21279188+0.36409112j 0.11056486+0.19597917j]\n [0.21279188-0.36409112j 0.27499462+0.j 0.14671348+0.00223761j]\n [0.11056486-0.19597917j 0.14671348-0.00223761j 0.0782919 +0.j ]]\nQuantum object: dims = [[3], [3]], shape = (3, 3), type = oper, isherm = True\nQobj data =\n[[0.64671348+0.j 0.21279188+0.36409113j 0.11056486+0.19597917j]\n [0.21279188-0.36409113j 0.27499463+0.j 0.14671348+0.00223761j]\n [0.11056486-0.19597917j 0.14671348-0.00223761j 0.0782919 +0.j ]]\n" ] ], [ [ "Indeed, whether you buy the philosophy that we're about to go into, SIC-POVM's have deep practical value in terms of quantum tomography and quantum information theory generally.\n\nCf. `implement_povm`.", "_____no_output_____" ], [ "<hr>\n\n## The Philosophy\n\nSo in some sense the difference between classical and quantum is summed up in the difference between these two formulas:\n\n$s(j) = \\sum_{i}^{d^2} p(i)r(j|i)$\n\nand\n\n$q(j) = (d+1)[\\sum_{i}^{d^2} p(i)r(j|i)] - 1$\n\nIn the first case, I make a SIC-POVM measurement in the sky, and then make a Von Neumann measurement on the ground. I can calculate the probabilities for the outcomes of the latter measurement using the law of total probability. Given the probabilities for the sky outcomes, and the conditional probabilities that relate ground outcomes to sky outcomes, I can calculate the probabilities for ground outcomes. Classically speaking, and this is the crucial point, I could use the first formula *whether or not I actually did the sky measurement*. \n\nIn other words, insofar as classically we've identified the relevant \"degrees of freedom,\" and the assignment of sky probabilities uniquely characterizes the state, then it's a matter of mathematical convenience if we express $s(j)$ as a sum over those degrees of freedom $\\sum_{i}^{d^2} p(i)r(j|i)$: by the nature of the formula, by the law of total probability, all the $i$'s drop out, and we're left with the value for $j$. 
We could actually perform the sky measurement or not: either way, we'd use the same formula to calculate the ground probabilities.\n\nThis is precisely what changes with quantum mechanics: it makes a difference *whether you actually do the sky measurement or not*. If you do, then you use the classical formula. If you don't, then you use the quantum formula. \n\nOne way of interpreting the moral of this is that, to quote Asher Peres again, \"Unperformed measurements have no results.\" In contrast, classically, you *can* always regard unperformed measurements as having results: indeed, classical objectivity consists in, as it were, everything wearing its outcomes on its sleeve. In other words, outcomes aren't a special category: one can just speak of the properties of things. And this is just another way of saying you can use the law of total probability whether or not you actually do an intermediate measurement. But this is exactly what you can't rely on in quantum mechanics. \n\nBut remarkably, all you need to do to update your probability calculus is to use the quantum formula, which is ultimately the Born Rule in disguise. In other words, in a world where unperformed measurements have no results, when we consider different kinds of sequences of measurements, we need a (minor) addition to probability theory so that our probability assignments are coherent/consistent/no one can make a buck off of us.\n\nMoreover, Blake Stacey makes the nice point, considering the realtionship between SIC-POVM's and Von Neumann measurements:\n\n\"Two orthogonal quantum states are perfectly distinguishable with respect to some experiment, yet in terms of the reference \\[SIC-POVM\\] measurement, they are inevitably overlapping probability distributions. The idea that any two valid probability distributions for the reference measurement must overlap, and that the minimal overlap in fact corresponds to distinguishability with respect to some other test, expresses the fact that quantum probability is not about hidden variables\" (Stacey 2020).\n\n<hr>", "_____no_output_____" ], [ "de Finetti famously advocated a subjectivist, personalist view of classical probability theory, and he and his theorems have proved to be an inspiration for QBists like Christopher Fuchs and others. In this view, probabilities don't \"exist\" out in the world: they are mathematical representations of personal beliefs which you are free to update in the face of new evidence. There isn't ever \"one objective probability distribution\" for things: rather, there's a constant personal process of convergence towards better beliefs. If you don't want to make bad bets, there are some basic consistency criteria that your probabilities have to satisfy. And that's what probability theory as such amounts to. The rest is just \"priors.\" \n\n\"Statisticians for years had been speaking of how statistical sampling can reveal the 'unknown probability distribution'. But from de Finetti’s point of view, this makes as little sense as the unknown quantum state made for us. What de Finetti’s representation theorem established was that all this talk of an unknown probability was just that, talk. Instead, one could show that there was a way of thinking of the resultant of statistical sampling purely in terms of a transition from prior subjective probabilities (for the sampler himself) to posterior subjective probabilities (for the sampler himself). 
That is, every bit of statistical sampling from beginning to end wasn’t about revealing a true state of affairs (the “unknown probability”), but about the statistician’s own states of information about a set of “exchangeable” trials, full stop. The quantum de Finetti theorem does the same sort of thing, but for quantum states\" (Fuchs 2018).\n\nIndeed, QBists advocate a similar epistemic interpretation of the quantum state. The quantum state does not represent a quantum system. It represents *your beliefs about that quantum system*. In other words, interpretations that assign ontological roles to quantum states miss the mark. Quantum states are just packages of probabilities, indeed, probabilities personal to you. (In this sense, one can see a close relation to relational interpretations of quantum mechanics, where the quantum state is always defined not objectively, but to one system relative to another system.) Similarly, all the superstructure of quantum mechanics, operators, time evolution, etc-- are all just a matter of making subjective probabilities consistent with each other, given the *objective fact* that you should use the quantum formula when you haven't done an intermediate measurement, and the classical formula if you have. (And one should also mention that the formulas above imply that the *dimension* of the Hilbert space is, in fact, objective.)\n\nOn the other hand, QBists also hold that the very outcomes of measurements themselves are subjective--not in the sense of being vacuously open to intepretation, but in the sense that they are *experiences*; and it is precisely these subjective experiences that are being gambled upon. In other words, quantum mechanics is not a theory of the objective physical world as such, but is instead a first person theory by which one may predict the future consequences of one's own actions in experience. \n\nThis is how they deal with the dilemma of Wigner's friend. Fuchs: \"...for the QBist, the real world, the one both agents are embedded in—with its objects and events—is taken for granted. What is not taken for granted is each agent's access to the parts of it he has not touched. Wigner holds two thoughts in his head: a) that his friend interacted with a quantum system, eliciting some consequences of the interaction for himself, and b) after the specified time, for any of Wigner's own future interactions with his friend or the system or both, he ought to gamble upon their consequences according to $U(\\rho \\otimes \\mid \\psi \\rangle \\langle \\psi \\mid) U^{\\dagger}$. One statement refers to the friend's potential experiences, and one refers to Wigner's own. So long as it is explicit that $U(\\rho \\otimes \\mid \\psi \\rangle \\langle \\psi \\mid) U^{\\dagger}$ refers to the latter--i.e., how Wigner should gamble upon the things that might happen to him--making no statement whatsoever about the former, there is no conflict. The world is filled with all the same things it was before quantum theory came along, like each of our experiences, that rock and that tree, and all the other things under the sun; it is just that quantum theory provides a calculus for gambling on each agent's experiences--it doesn't give anything other than that. It certainly doesn't give one agent the ability to conceptually pierce the other agent's personal experience. 
It is true that with enough effort Wigner \\[could apply the reverse unitary, disentangling the friend and the spin\\], causing him to predict that his friend will have amnesia to any future questions on his old measurement results. But we always knew Wigner could do that--a mallet to the head would have been good enough\" (Fuchs, Stacey 2019).\n\nMost assuredly, this is not a solipsistic theory: indeed, the actual results of measurement are precisely not within one's control. The way they imagine it is that whenever you set up an experiment, you divide the world into subject and object: the subject has the autonomy to set up the experiment, and the object has the autonomy to respond to the experiment. But the act of measurement itself is a kind of creation, a mutual experience which transcends the very distinction between subject and object itself, a linkage between oneself and the other. \"QBism says that when an agent reaches out and touches a quantum system—when he performs a quantum measurement—this process gives rise to birth in a nearly literal sense\" (Fuchs, Stacey 2019).\n\nThe only conflict here is with a notion that the only valid physical theories are those that attempt to directly represent the universe \"in its totality as a pre-existing static system; an unchanging, monistic something that just *is*.\" Moreover, a theory like QBism clears a space for \"real particularity and 'interiority' in the world.\" For Wigner, considering his friend and the system, with his back turned, \"that phenomenon has an inside, a vitality that he takes no part in until he again interacts with one or both relevant pieces of it.\"\n\nOften in the interpretation of quantum mechanics, one tries to achieve objectivity by focusing on the big bulky apparatuses we use and the \"objective\" record of outcomes left behind by these machines. The QBists take a different track: Bohr himself considers the analogy of a blind man seeing with a stick. He's not actively, rationally thinking about the stick and how it's skittering off this or that: rather, for him, it becomes an extension of his body: he *sees with the stick*. And thus one can understand Fuchs's three tenets of QBism:\n\n1. Quantum Theory Is Normative, Not Descriptive\n2. My Probabilities Cannot Tell Nature What To Do\n3. A Measuring Device Is Literally an Extension of the Agent", "_____no_output_____" ] ], [ [ "<hr>\n\n<img width=600 src=\"img/qbism_assumptions1.png\">\n\n<img width=600 src=\"img/qbism_assumptions2.png\">", "_____no_output_____" ] ], [ [ "<hr>\n\nIndeed, one might wonder about entanglement in this picture. In line with the discussion of Wigner's friend, we can interpret entanglement and the use of the tensor product itself as relating to the objective fact that we require a way of representing correlations while being completely agnostic about what is correlated insofar as we haven't yet reached out and \"touched\" the thing.\n\nMoreover, in this sense, one can look at QBism as a completely \"local\" theory. An experimenter has one half of an entangled pair of spins, and makes a measurement, and has an experience. In the textbook way of thinking, this causes the state of the other spin to immediately collapse. QBism takes a different approach. They say: quantum theory allows the experimenter to predict that if they go over and measure the other spin in the same direction, they will have another experience, of the answers of the two particles being correlated. 
But just because quantum theory licenses the experimenter to assign a probability 1 for the latter outcome after they do the first measurement doesn't mean that the latter particle *really is now $\\uparrow$, say, as a property*. If the experimenter never actually goes to check out the other particle, it's yet another unperformed measurement: and it has no outcome yet. To paraphrase William James, if it isn't experienced, it isn't real. And in order to \"cash out\" on entanglement, one actually has to traverse the distance between the two particles and compare the results.\n\nWith regard to quantum teleportation, in this view, it's not about getting \"things\" from one place to another, but about making one's information cease referring to this part of the universe and start referring instead to another part of the universe, without referring to anything else in between. \"The only nontrivial thing transferred in the process of teleportation is *reference*\" (Fuchs, Stacey 2019).", "_____no_output_____" ], [ "<hr>\n\nOne of the things that makes QBism so interesting is its attempt to give nature as much latitude as possible. Usually in science, we're mentally trying to constraint nature, applying concepts, laws, systems, to it, etc. QBism instead proposes that we live in a unfinished world, whose creation is ongoing and ceaseless, and that this profound openendedness is the real meaning behind \"quantum indeterminism.\" In itself, the universe is not governed by immutable laws and initial conditions fixed from the beginning: instead, new situations are coming into being all the time. Of course, regularities arise by evolution, the laws of large numbers, symmetries and so forth. But they take seriously John Wheeler's idea of the \"participatory universe,\" that we and everything else are constantly engaged bringing the universe into being, together. \n\nWheeler writes:\n\n\"How did the universe come into being? Is that some strange, far-off process beyond hope of analysis? Or is the mechanism that comes into play one which all the time shows itself? Of all the signs that testify to 'quantum phenomenon' as being the elementary act and building block of existence, none is more striking than its utter absence of internal structure and its untouchability. For a process of creation that can and does operate anywhere, that is more basic than particles or fields or spacetime geometry themselves, a process that reveals and yet hides itself, what could one have dreamed up out of pure imagination more magic and more fitting than this?\"\n\n\n\"'Law without law': It is difficult to see what else than that can be the “plan” for physics. It is preposterous to think of the laws of physics as installed by a Swiss watchmaker to endure from everlasting to everlasting when we know that the universe began with a big bang. The laws must have come into being. Therefore they could not have been always a hundred percent accurate. That means that they are derivative, not primary. Also derivative, also not primary is the statistical law of distribution of the molecules of a dilute gas between two intersecting portions of a total volume. This law is always violated and yet always upheld. The individual molecules laugh at it; yet as they laugh they find themselves obeying it. ... Are the laws of physics of a similar statistical character? And if so, statistics of what? Of billions and billions of acts of observer-participancy which individually defy all law? . . . 
\\[Might\\] the entirety of existence, rather than \\[be\\] built on particles or fields or multidimensional geometry, \\[be\\] built on billions upon billions of elementary quantum phenomena, those elementary acts of observer-participancy?\"\n\n\n<img src=\"img/wheeler.png\">\n", "_____no_output_____" ], [ "<hr>\n\nIn such world, to quote William James, \"Theories thus become instruments, not answers to enigmas, in which we can rest. We don’t lie back upon them, we move forward, and, on occasion, make nature over again by their aid.\" Moreover, in relegating quantum states to the observers who use them for predictions, one clears some ontological space for the quantum systems themselves to be \"made of\" who knows what qualitative, experiential stuff.\n\n\"\\[QBism\\] means that reality differs from one agent to another. This is not as strange as it may sound. What is real for an agent rests entirely on what that agent experiences, and different agents have different experiences. An agent-dependent reality is constrained by the fact that different agents can communicate their experience to each other, limited only by the extent that personal experience can be expressed in ordinary language. Bob’s verbal representation of his own experience can enter Alice’s, and vice-versa. In this way a common body of reality can be constructed, limited only by the inability of language to represent the full flavor — the “qualia” — of personal experience\" (Fuchs, Mermin, Schack 2013).\n\nIndeed, the QBists reach back in time and draw on the work of the old American pragmatists: James, John Dewey, Charles Sanders Peirce, and others. It's interesting to read their works particularly as many of them date from the pre-quantum era, so that even in the very face of classical physics, they were advocating a radically indeterministic, experience-first view of the world. \n\nFor example, James writes:", "_____no_output_____" ], [ "\"Chance] is a purely negative and relative term, giving us no information about that of which it is predicated, except that it happens to be disconnected with something else—not controlled, secured, or necessitated by other things in advance of its own actual presence... What I say is that it tells us nothing about what a thing may be in itself to call it “chance.” ... All you mean by calling it “chance” is that this is not guaranteed, that it may also fall out otherwise. For the system of other things has no positive hold on the chance-thing. Its origin is in a certain fashion negative: it escapes, and says, Hands off! coming, when it comes, as a free gift, or not at all.\"\n\n\"This negativeness, however, and this opacity of the chance-thing when thus considered ab extra, or from the point of view of previous things or distant things, do not preclude its having any amount of positiveness and luminosity from within, and at its own place and moment. All that its chance-character asserts about it is that there is something in it really of its own, something that is not the unconditional property of the whole. If the whole wants this property, the whole must wait till it can get it, if it be a matter of chance. 
That the universe may actually be a sort of joint-stock society of this sort, in which the sharers have both limited liabilities and limited powers, is of course a simple and conceivable notion.\"\n\n<hr>\n\n\"Why may not the world be a sort of republican banquet of this sort, where all the qualities of being respect one another’s personal sacredness, yet sit at the common table of space and time?\nTo me this view seems deeply probable. Things cohere, but the act of cohesion itself implies but few conditions, and leaves the rest of their qualifications indeterminate. As the first three notes of a tune comport many endings, all melodious, but the tune is not named till a particular ending has actually come,—so the parts actually known of the universe may comport many ideally possible complements. But as the facts are not the complements, so the knowledge of the one is not the knowledge of the other in anything but the few necessary elements of which all must partake in order to be together at all. Why, if one act of knowledge could from one point take in the total perspective, with all mere possibilities abolished, should there ever have been anything more than that act? Why duplicate it by the tedious unrolling, inch by inch, of the foredone reality? No answer seems possible. On the other hand, if we stipulate only a partial community of partially independent powers, we see perfectly why no one part controls the whole view, but each detail must come and be actually given, before, in any special sense, it can be said to be determined at all. This is the moral view, the view that gives to other powers the same freedom it would have itself.\"\n\n<hr>\n\n\"Does our act then create the world’s salvation so far as it makes room for itself, so far as it leaps into the gap? Does it create, not the whole world’s salvation of course, but just so much of this as itself covers of the world’s extent? Here I take the bull by the horns, and in spite of the whole crew of rationalists and monists, of whatever brand they be, I ask why not? Our acts, our turning-places, where we seem to ourselves to make ourselves and grow, are the parts of the world to which we are closest, the parts of which our knowledge is the most intimate and complete. Why should we not take them at their facevalue? Why may they not be the actual turning-places and growing-places which they seem to be, of the world—why not the workshop of being, where we catch fact in the making, so that nowhere may the world grow in any other kind of way than this?\"\n\n\"Irrational! we are told. How can new being come in local spots and patches which add themselves or stay away at random, independently of the rest? There must be a reason for our acts, and where in the last resort can any reason be looked for save in the material pressure or the logical compulsion of the total nature of the world? There can be but one real agent of growth, or seeming growth, anywhere, and that agent is the integral world itself. It may grow all-over, if growth there be, but that single parts should grow per se is irrational.\"\n\n\"But if one talks of rationality—and of reasons for things, and insists that they can’t just come in spots, what kind of a reason can there ultimately be why anything should come at all?\"\n\n<hr>\n\n\"What does determinism profess? It professes that those parts of the universe already laid down absolutely appoint and decree what the other parts shall be. 
The future has no ambiguous possibilities hidden in its womb; the part we call the present is compatible with only one totality. Any other future complement than the one fixed from eternity is impossible. The whole is in each and every part, and welds it with the rest into an absolute unity, an iron block, in which there can be no equivocation or shadow of turning.\"\n\n\"Indeterminism, on the contrary, says that the parts have a certain amount of loose play on one another, so that the laying down of one of them does not necessarily determine what the others shall be. It admits that possibilities may be in excess of actualities, and that things not yet revealed to our knowledge may really in themselves be ambiguous. Of two alternative futures which we conceive, both may now be really possible; and the one become impossible only at the very moment when the other excludes it by becoming real itself. Indeterminism thus denies the world to be one unbending unit of fact. It says there is a certain ultimate pluralism in it.\"\n\n<hr>\n\n\"The import of the difference between pragmatism and rationalism is now in sight throughout its whole extent. The essential contrast is that for rationalism reality is ready-made and complete from all eternity, while for pragmatism it is still in the making, and awaits part of its complexion from the future. On the one side the universe is absolutely secure, on the other it is still pursuing its adventures...\"\n\n\"The humanist view of 'reality,' as something resisting, yet malleable, which controls our thinking as an energy that must be taken 'account' of incessantly is evidently a difficult one to introduce to novices...\nThe alternative between pragmatism and rationalism, in the shape in which we now have it before us, is no longer a question in the theory of knowledge, it concerns the structure of the universe itself.\"\n\n\"On the pragmatist side we have only one edition of the universe, unfinished, growing in all sorts of places, especially in the places where thinking beings are at work. On the rationalist side we have a universe in many editions, one real one, the infinite folio, or édition de luxe, eternally complete; and then the various finite editions, full of false readings, distorted and mutilated each in its own way.\"
Wanting a universe that suits it, he believes in any representation of the universe that does suit it.\"\n\n\"Why does Clifford fearlessly proclaim his belief in the conscious-automaton theory, although the ‘proofs’ before him are the same which make Mr. Lewes reject it? Why does he believe in primordial units of ‘mind-stuff’ on evidence which would seem quite worthless to Professor Bain? Simply because, like every human being of the slightest mental originality, he is peculiarly sensitive to evidence that bears in some one direction. It is utterly hopeless to try to exorcise such sensitiveness by calling it the disturbing subjective factor, and branding it as the root of all evil. ‘Subjective’ be it called! and ‘disturbing’ to those whom it foils! But if it helps those who, as Cicero says, “vim naturae magis sentiunt” \\[feel the force of nature more\\], it is good and not evil. Pretend what we may, the whole man within us is at work when we form our philosophical opinions. Intellect, will, taste, and passion co-operate just as they do in practical affairs...\\[I\\]n the forum \\[one\\] can make no claim, on the bare ground of his temperament, to superior discernment or authority. There arises thus a certain insincerity in our philosophic discussions: the potentest of all our premises is never mentioned. I am sure it would contribute to clearness if in these lectures we should break this rule and mention it, and I accordingly feel free to do so.\"\n\nIndeed, for James, the value of a philosophy lies not so much in its proofs, but in the total vision that it expresses. As I say, perhaps the universe itself has something for everyone, whatever their temperament.", "_____no_output_____" ], [ "<hr>\n\nAs a final word, it seems to me that QBism has taught us something genuinely new about quantum theory and its relationship to probability theory. On the other hand, it also pretends to be a theory of \"experience\": and yet, I'm not sure that I've learned anything new about experience. If QBism is to really prove itself, it will have to make novel predictions not just on the quantum side, but also on the side of our everyday perceptions. 
\n\n\"The burning question for the QBist is how to model in Hilbert-space terms the common sorts of measurements we perform just by opening our eyes, cupping our ears, and extending our fingers\" (Fuchs, Stacey 2019).", "_____no_output_____" ], [ "## Bibliography\n\n", "_____no_output_____" ], [ "<a href=\"https://arxiv.org/abs/1612.07308\">QBism: Quantum Theory as a Hero’s Handbook</a>\n\n<a href=\"https://arxiv.org/abs/1612.03234\">Introducing the Qplex: A Novel Arena for Quantum Theory</a>\n\n<a href=\"https://arxiv.org/abs/1311.5253\">An Introduction to QBism with an Application to the Locality of Quantum Mechanics</a>\n\n<a href=\"https://arxiv.org/abs/1003.5209\">QBism, the Perimeter of Quantum Bayesianism</a>\n\n<a href=\"https://arxiv.org/abs/1301.3274\">Quantum-Bayesian Coherence: The No-Nonsense Version</a>\n\n<a href=\"https://arxiv.org/abs/1401.7254\">Some Negative Remarks on Operational Approaches to Quantum Theory</a>\n\n<a href=\"https://arxiv.org/abs/1405.2390\">My Struggles with the Block Universe</a>\n\n<a href=\"https://arxiv.org/abs/1412.4209\">Quantum Measurement and the Paulian Idea</a>\n\n<a href=\"https://arxiv.org/abs/quant-ph/0105039\">Notes on a Paulian Idea</a>\n\n<a href=\"https://arxiv.org/abs/1601.04360\">On Participatory Realism</a>\n\n<a href=\"https://arxiv.org/abs/0906.1968\">Delirium Quantum</a>\n\n<a href=\"https://arxiv.org/abs/1703.07901\">The SIC Question: History and State of Play</a>\n\n<a href=\"https://arxiv.org/abs/1705.03483\">Notwithstanding Bohr, the Reasons for QBism</a>\n\n<a href=\"https://arxiv.org/abs/2012.14397\">The Born Rule as Dutch-Book Coherence (and only a little more)</a>\n\n<a href=\"https://arxiv.org/abs/quant-ph/0205039\">Quantum Mechanics as Quantum Information (and only a little more)</a>\n\n<a href=\"https://arxiv.org/abs/1907.02432\">Quantum Theory as Symmetry Broken by Vitality</a>\n\nhttps://en.wikipedia.org/wiki/POVM\n\nhttps://en.wikipedia.org/wiki/SIC-POVM\n\n<a href=\"refs/wheeler_law_without_law.pdf\">Law without Law</a>\n\n<a href=\"http://www.gutenberg.org/ebooks/11984\">A Pluralistic Universe</a>\n\n<a href=\"http://www.gutenberg.org/ebooks/32547\">Essays in Radical Empiricism</a>", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a48e5bc8c7c5e474ae10b2f367d72ba36d95c20
17,024
ipynb
Jupyter Notebook
10_0_develop_new_OD_demand_estimator_Sioux_uni_class/archive/04_OD_matrix_estimation_GLS_Sioux.ipynb
jingzbu/InverseVITraffic
c0d33d91bdd3c014147d58866c1a2b99fb8a9608
[ "MIT" ]
null
null
null
10_0_develop_new_OD_demand_estimator_Sioux_uni_class/archive/04_OD_matrix_estimation_GLS_Sioux.ipynb
jingzbu/InverseVITraffic
c0d33d91bdd3c014147d58866c1a2b99fb8a9608
[ "MIT" ]
null
null
null
10_0_develop_new_OD_demand_estimator_Sioux_uni_class/archive/04_OD_matrix_estimation_GLS_Sioux.ipynb
jingzbu/InverseVITraffic
c0d33d91bdd3c014147d58866c1a2b99fb8a9608
[ "MIT" ]
null
null
null
33.511811
1,302
0.48367
[ [ [ "%run ../Python_files/load_dicts.py\n%run ../Python_files/util.py", "_____no_output_____" ], [ "from util import *\nimport numpy as np\nfrom numpy.linalg import inv, matrix_rank\nimport json", "_____no_output_____" ], [ "# # load logit_route_choice_probability_matrix\n# P = zload('../temp_files/logit_route_choice_probability_matrix_Sioux.pkz')\n# P = np.matrix(P)\n\n# print('rank of P is: ')\n# print(matrix_rank(P))\n\n# print('shape of P is: ')\n# print(np.shape(P))", "_____no_output_____" ], [ "# # load path-link incidence matrix\n# A = zload('../temp_files/path-link_incidence_matrix_Sioux-Falls.pkz')\n\n# print('rank of A is: ')\n# print(matrix_rank(A))\n\n# print('shape of A is: ')\n# print(np.shape(A))", "_____no_output_____" ], [ "# load link counts data\n\nflow_list = []\nwith open('SiouxFallsFlow.txt', 'r') as f:\n read_data = f.readlines()\n flag = 0\n for row in read_data:\n flag += 1\n if flag > 1:\n flow_list.append(float(row.split('\\t')[2]))\n\nx_0 = np.array(flow_list)", "_____no_output_____" ], [ "x_0", "_____no_output_____" ] ], [ [ "### Assignment Equation\n\nWe have the following equation: \n$$AP'\\boldsymbol{\\lambda} = \\textbf{x},$$\nwhose least-squares solution can be written as\n$$\\boldsymbol{\\lambda} = (AP')^+\\textbf{x}, \\quad (1)$$\nwhere $(AP')^{+}$ is the pseudo-inverse of $AP'$.\n\nHowever, the $\\boldsymbol{\\lambda}$ given by (1) might contain negative entries, which is not desired. Thus, instead, we solve a constrained least-squares problem:\n$$\\mathop {\\min }\\limits_{\\boldsymbol{\\lambda} \\geq \\textbf{0}} {\\left\\| {AP'\\boldsymbol{\\lambda} - \\textbf{x}} \\right\\|_2}. \\quad (2)$$\n\nNote that (2) typically contains a non-PSD matrix Q, thus preventing the solver calculating the correct $\\boldsymbol{\\lambda}$.\n\nIn the end, we return to the flow conservation expression in CDC16 paper; that is\n$$\\mathcal{F} = \\left\\{ {\\textbf{x}:\\exists {\\textbf{x}^{\\textbf{w}}} \\in \\mathbb{R}_ +\n ^{\\left| \\mathcal{A} \\right|} ~\\text{s.t.}~\\textbf{x} =\n \\sum\\limits_{\\textbf{w} \\in \\mathcal{W}} {{\\textbf{x}^{\\textbf{w}}}}\n ,~\\textbf{N}{\\textbf{x}^{\\textbf{w}}} = {\\textbf{d}^{\\textbf{w}}},~\\forall\n \\textbf{w} \\in \\mathcal{W}} \\right\\}.$$", "_____no_output_____" ] ], [ [ "# load node-link incidence matrix\nN = zload('node_link_incidence_Sioux.pkz')", "_____no_output_____" ], [ "N", "_____no_output_____" ], [ "# load link counts data\nwith open('demands_Sioux.json', 'r') as json_file:\n demands_Sioux = json.load(json_file)", "_____no_output_____" ], [ "demands_Sioux['(1,2)']", "_____no_output_____" ], [ "# assert(1==2)\n\nn = 24 # number of nodes\nm = 76 # number of links\n\nmodel = Model(\"OD_matrix_est_Sioux\")\n\n# lam = {}\n# for i in range(n+1)[1:]:\n# for j in range(n+1)[1:]:\n# if i != j:\n# key = str(i) + '->' + str(j)\n# lam[key] = model.addVar(name='lam_' + key)\n \nx = {}\nfor k in range(m):\n for i in range(n+1)[1:]:\n for j in range(n+1)[1:]:\n if i != j:\n key = str(k) + '->' + str(i) + '->' + str(j)\n x[key] = model.addVar(name='x_' + key)\n\nmodel.update() ", "_____no_output_____" ], [ "# Set objective\nobj = 0\n\n# for i in range(n+1)[1:]:\n# for j in range(n+1)[1:]:\n# if i != j:\n# key = str(i) + '->' + str(j)\n# obj += lam[key] * lam[key]\n \nmodel.setObjective(obj)", "_____no_output_____" ], [ "# # Add constraint: lam >= 0\n# for i in range(n+1)[1:]:\n# for j in range(n+1)[1:]:\n# if i != j:\n# key = str(i) + '->' + str(j)\n# key_ = '(' + str(i) + ',' + str(j) + ')'\n# # model.addConstr(lam[key] >= 
0)\n# model.addConstr(lam[key] == demands_Sioux[key_])\n \nfor k in range(m):\n s = 0\n for i in range(n+1)[1:]:\n for j in range(n+1)[1:]:\n if i != j:\n key = str(k) + '->' + str(i) + '->' + str(j)\n s += x[key]\n model.addConstr(x[key] >= 0)\n model.addConstr(s - x_0[k] <= 1e2)\n model.addConstr(x_0[k] - s <= 1e2)\n \nfor l in range(n):\n for i in range(n+1)[1:]:\n for j in range(n+1)[1:]:\n if i != j:\n key_ = str(i) + '->' + str(j)\n key__ = '(' + str(i) + ',' + str(j) + ')'\n s = 0\n for k in range(m):\n key = str(k) + '->' + str(i) + '->' + str(j)\n s += N[l, k] * x[key] \n if (l+1 == i):\n model.addConstr(s + demands_Sioux[key__] == 0)\n elif (l+1 == j):\n model.addConstr(s - demands_Sioux[key__]== 0)\n else:\n model.addConstr(s == 0)\n \n# if (i == 1 and j == 2):\n# print(s)\n\nmodel.update()", "_____no_output_____" ], [ "# model.setParam('OutputFlag', False)\nmodel.optimize()", "Optimize a model with 55352 rows, 41952 columns and 209760 nonzeros\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [0e+00, 0e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [1e+02, 2e+04]\n\nConcurrent LP optimizer: dual simplex and barrier\nShowing barrier log only...\n\nPresolve removed 42580 rows and 0 columns\nPresolve time: 0.07s\nPresolved: 12772 rows, 42028 columns, 122620 nonzeros\n\nOrdering time: 0.00s\n\nBarrier statistics:\n AA' NZ : 9.991e+04\n Factor NZ : 7.941e+05 (roughly 30 MBytes of memory)\n Factor Ops : 5.594e+07 (less than 1 second per iteration)\n Threads : 3\n\n Objective Residual\nIter Primal Dual Primal Dual Compl Time\n 0 0.00000000e+00 -1.52000000e+03 1.29e+06 0.00e+00 2.34e+02 0s\n 1 0.00000000e+00 -7.03736893e+04 1.34e+05 8.33e-17 2.57e+01 0s\n 2 0.00000000e+00 -2.83030061e+04 1.38e+04 9.71e-17 3.12e+00 0s\n 3 0.00000000e+00 -1.18256050e+04 3.60e+03 1.11e-16 9.65e-01 0s\n 4 0.00000000e+00 -5.88353604e+03 1.54e+03 1.11e-16 4.57e-01 0s\n 5 0.00000000e+00 -2.74452425e+03 7.70e+02 1.18e-16 2.27e-01 0s\n 6 0.00000000e+00 -1.21517064e+03 1.67e+02 1.11e-16 6.18e-02 0s\n 7 0.00000000e+00 -4.20718073e+02 6.62e+00 1.11e-16 1.08e-02 0s\n 8 0.00000000e+00 -3.76071089e+00 4.57e-07 1.11e-16 8.93e-05 0s\n 9 0.00000000e+00 -3.76071087e-03 2.11e-06 1.36e-18 8.93e-08 0s\n 10 0.00000000e+00 -3.78152412e-09 1.64e-07 1.78e-19 8.98e-14 0s\n\nBarrier solved model in 10 iterations and 0.44 seconds\nOptimal objective 0.00000000e+00\n\nCrossover log...\n\n 0 DPushes remaining with DInf 0.0000000e+00 0s\n\n 29267 PPushes remaining with PInf 0.0000000e+00 0s\n 0 PPushes remaining with PInf 0.0000000e+00 1s\n\n Push phase complete: Pinf 0.0000000e+00, Dinf 0.0000000e+00 1s\n\nIteration Objective Primal Inf. Dual Inf. Time\n 32284 0.0000000e+00 0.000000e+00 0.000000e+00 1s\n 32284 0.0000000e+00 0.000000e+00 0.000000e+00 1s\n\nSolved with barrier\nSolved in 32284 iterations and 0.75 seconds\nOptimal objective 0.000000000e+00\n" ], [ "lam_list = []\nfor v in model.getVars():\n print('%s %g' % (v.varName, v.x))\n lam_list.append(v.x)\n# print('Obj: %g' % obj.getValue())", "_____no_output_____" ], [ "sum(lam_list[0:551])", "_____no_output_____" ], [ "# write estimation result to file\nn = 24 # number of nodes\nwith open('OD_demand_matrix_Sioux.txt', 'w') as the_file:\n idx = 0\n for i in range(n + 1)[1:]:\n for j in range(n + 1)[1:]:\n if i != j: \n the_file.write(\"%d,%d,%f\\n\" %(i, j, lam_list[idx]))\n idx += 1", "_____no_output_____" ] ] ]
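A side note on the least-squares discussion in the notebook above: the sign problem in equation (1) and the non-negative formulation (2) can be reproduced with a small self-contained sketch. This is not part of the original notebook — the matrices below are random stand-ins for AP' and the observed counts x, and scipy is assumed to be available — it only illustrates why a non-negativity-aware solver is needed.

```python
# Minimal sketch of equations (1) and (2) above, with random placeholder data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
AP = rng.random((76, 40))        # stand-in for A @ P.T (links x OD pairs)
x = rng.random(76) * 1e3         # stand-in for the observed link counts x_0

# (1) unconstrained least squares via the pseudo-inverse; entries may go negative
lam_unconstrained = np.linalg.pinv(AP) @ x

# (2) non-negative least squares: min ||AP @ lam - x||_2  subject to  lam >= 0
lam_nonneg, residual = nnls(AP, x)

print(lam_unconstrained.min())   # often negative, which is not a valid demand
print(lam_nonneg.min())          # >= 0 by construction
```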
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a48ed69c78e95bf56471897c6cd8477cb9c9191
4,851
ipynb
Jupyter Notebook
PyRamen/archive/Untitled2.ipynb
yfkok5/python-homework
fc8b06102d2559c29dd3e10b37f66a896728af1b
[ "FTL" ]
null
null
null
PyRamen/archive/Untitled2.ipynb
yfkok5/python-homework
fc8b06102d2559c29dd3e10b37f66a896728af1b
[ "FTL" ]
null
null
null
PyRamen/archive/Untitled2.ipynb
yfkok5/python-homework
fc8b06102d2559c29dd3e10b37f66a896728af1b
[ "FTL" ]
null
null
null
38.808
1,345
0.581117
[ [ [ "import csv\nfrom pathlib import Path\n\n# @TODO: Set file paths for menu_data.csv and sales_data.csv\nmenu_filepath = Path(r'C:\\Users\\yfkok\\OneDrive\\Desktop\\Monash Bootcamp\\python-homework\\PyRamen\\Resources\\menu_data.csv')\nsales_filepath = Path(r'C:\\Users\\yfkok\\OneDrive\\Desktop\\Monash Bootcamp\\python-homework\\PyRamen\\Resources\\sales_data.csv')\n\nfilename = open(menu_filepath, 'r')\n \n# creating dictreader object\nmenu = csv.DictReader(filename)\n \n# creating empty lists\nitem = []\ncategory = []\ndescription = []\nprice = []\ncost = []\nall_elements = ['item','category','description','price','cost']\n# iterating over each row and append\n# values to empty list\nfor col in menu:\n item.append(col['item'])\n category.append(col['category'])\n description.append(col['description'])\n price.append(float(col['price']))\n cost.append(int(col['cost']))\n\n value = zip(category,description,price,cost)\n \n menu_dict = dict(zip(item, value)) \n", "_____no_output_____" ], [ "a = [('spicy miso ramen', 1), ('spicy miso ramen', 1), ('tori paitan ramen', 3), ('tori paitan ramen', 3), ('truffle butter ramen', 1), ('truffle butter ramen', 1)]\nprint(a)\nprint(len(a))", "[('spicy miso ramen', 1), ('spicy miso ramen', 1), ('tori paitan ramen', 3), ('tori paitan ramen', 3), ('truffle butter ramen', 1), ('truffle butter ramen', 1)]\n6\n" ], [ "quantity_spicymiso = []\n\nfor n in range(len(item)):\n \n if a[n][0] == item[n]:\n quantity_spicymiso.append(a[n][1])\n\n \nprint(sum(quantity_spicymiso))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
4a48fa9cd144e9d3bd63dfd8d08008c6d98c687f
15,055
ipynb
Jupyter Notebook
Cars_in_Hours.ipynb
netlabufjf/Modo-Scripts
a3a0acd50a18e755713bf4e1b8fc6a280f134ecf
[ "MIT" ]
null
null
null
Cars_in_Hours.ipynb
netlabufjf/Modo-Scripts
a3a0acd50a18e755713bf4e1b8fc6a280f134ecf
[ "MIT" ]
null
null
null
Cars_in_Hours.ipynb
netlabufjf/Modo-Scripts
a3a0acd50a18e755713bf4e1b8fc6a280f134ecf
[ "MIT" ]
null
null
null
31.561845
142
0.502557
[ [ [ "%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport datetime\nimport pytz\nimport urllib as ur\nimport json", "_____no_output_____" ] ], [ [ "## Funções para auxiliar a manipulação do tempo", "_____no_output_____" ] ], [ [ "def convert_datetime_timezone(dt, tz1, tz2):\n \"\"\"\n Converte uma hora no fuso UTC ou São Paulo para um provável fuso de Vancouver.\n \n Parameters\n ------------\n dt : unix timestamp\n Timestamp a ser convertido para outro fuso horário.\n \n tz1, tz2 : Timezone String\n Time zone atual e a que a hora irá ser convertida.\n \n Returns\n ----------\n dt : unix timestamp\n Timestamp já convertida para o fuso de Vancouver.\n \n \"\"\" \n \n tz1 = pytz.timezone(tz1)\n tz2 = pytz.timezone(tz2)\n\n dt = datetime.datetime.fromtimestamp(dt)\n dt = datetime.datetime.strptime(str(dt),\"%Y-%m-%d %H:%M:%S\")\n dt = tz1.localize(dt)\n dt = dt.astimezone(tz2)\n \n try:\n # Fuso horário comum de Vancouver\n dt = datetime.datetime.strptime(str(dt),\"%Y-%m-%d %H:%M:%S-08:00\")\n except:\n # Fuso horário característico de horário de verão em Vancouver\n dt = datetime.datetime.strptime(str(dt),\"%Y-%m-%d %H:%M:%S-07:00\")\n \n dt = int(dt.timestamp())\n\n return dt", "_____no_output_____" ], [ "def Hour_Diff(h1,h2):\n \"\"\"\n Faz a diferença entre duas horas dadas e retorna em minutos\n \n Parameters\n ----------\n h1, h2 : unix timestamp\n Hora inicio e fim para ser feito o cálculo da diferença\n \n Returns\n ---------\n diff : float\n Diferença entre as duas horas dadas em minutos\n \n \"\"\"\n \n h1Aux = datetime.datetime.fromtimestamp(h1)\n h2Aux = datetime.datetime.fromtimestamp(h2)\n diff = abs((h1Aux - h2Aux)).total_seconds()/60\n \n return diff", "_____no_output_____" ] ], [ [ "## Requisitando Estacionamentos, carros e coordenadas", "_____no_output_____" ] ], [ [ "response = ur.request.urlopen('https://bookit.modo.coop/api/v2/car_list').read().decode('UTF-8')\njson_cars = json.loads(response)\n\nresponse = ur.request.urlopen('https://bookit.modo.coop/api/v2/location_list').read().decode('UTF-8')\njson_location = json.loads(response)", "_____no_output_____" ], [ "car_coord = pd.DataFrame(list(json_cars['Response']['Cars'].keys()), columns=['car_id'], dtype='int')\ncar_coord['location'] = [0] * len(car_coord)\ncar_coord['lat'] = [0] * len(car_coord)\ncar_coord['lon'] = [0] * len(car_coord)\n\nfor i in range(len(car_coord)):\n car_coord['location'].iloc[i] = int(json_cars['Response']['Cars'][str(car_coord['car_id'].iloc[i])]['Location'][0]['LocationID'])\n car_coord['lat'].iloc[i] = float(json_location['Response']['Locations'][str(car_coord['location'].iloc[i])]['Latitude'])\n car_coord['lon'].iloc[i] = float(json_location['Response']['Locations'][str(car_coord['location'].iloc[i])]['Longitude'])", "_____no_output_____" ] ], [ [ "## Fazendo a média da quantidade de viagens para cada hora", "_____no_output_____" ] ], [ [ "# CSV criado a partir dos dados coletados do arquivo ModoApi_Data_Filter\ndfTravels = pd.read_csv('travels_v2.csv')\ndfTravels = dfTravels.sort_values(by='car_id')", "_____no_output_____" ], [ "def cont_travels(df):\n \"\"\"\n Função para calcular a quantidade de viagens por hora de cada carro\n \n Parameters\n ----------\n df : pandas.DataFrame\n DataFrame com dados das viagens registradas\n \n Returns\n ---------\n df_count : pandas.DataFrame\n DataFrame com os dados organizados com horas como colunas e linhas sendo id dos veículos.\n Os seus valores representam a quantidade de viagens efetuadas em dada hora por tal 
veículo.\n \n \"\"\"\n df_cont = []\n id_prox = df['car_id'].iloc[1]\n id_atual = df['car_id'].iloc[0]\n tempo = []\n\n for i in range(len(df)):\n try:\n id_prox = df['car_id'].iloc[i+1]\n id_atual = df['car_id'].iloc[i]\n except:\n pass\n\n # Se irá mudar de id então é registrado o somatório dos intervalos de tempo estacionado\n # i == len(df)-1 para não pular o ultimo\n if (id_prox != id_atual or i == len(df)-1):\n \n hour = datetime.datetime.fromtimestamp(df['start'].iloc[i]).hour\n \n auxHour = [0] * 24\n auxHour[hour] += 1\n tempo.append(auxHour)\n \n #Somando todas a quantidade de ocorrências de cada horário\n tempo = pd.DataFrame(tempo)\n \n tempo = list(pd.Series.sum(tempo))\n \n tempo = [id_atual] + tempo\n \n df_cont.append(tempo)\n tempo = []\n else:\n # Verificando a hora de inicio da viagem e somando em uma lista que representa as 24h\n # Evita mais de uma viagen por hora\n hour = datetime.datetime.fromtimestamp(df['start'].iloc[i]).hour\n hour_anterior = datetime.datetime.fromtimestamp(df['start'].iloc[i-1]).hour\n \n if (hour == hour_anterior):\n continue\n \n auxHour = [0] * 24\n auxHour[hour] += 1\n #Armazenando a quantidade de viagens em dada hora\n tempo.append(auxHour)\n \n #Labels das colunas\n labels = list(range(-1,24))\n [format(x,'02d') for x in labels]\n labels[0] = 'car_id'\n \n df_cont = pd.DataFrame(df_cont, columns=labels)\n df_cont = df_cont.sort_values(by=['car_id'])\n \n return df_cont", "_____no_output_____" ], [ "hours = cont_travels(dfTravels)", "_____no_output_____" ], [ "# Preparando o dataframe para colocar a location, latitude ,longitude e numero de carros\nhours['lat'] = [0]*len(hours)\nhours['lon'] = [0]*len(hours)\nhours['location'] = [0]*len(hours)\nhours['n_cars'] = [1]*len(hours)", "_____no_output_____" ], [ "# Colocando as coordenadas de cada carro\nfor i in range(len(hours)):\n try:\n coord = car_coord[car_coord['car_id'] == hours['car_id'].iloc[i]]\n hours['lat'].iloc[i] = coord['lat'].iloc[0]\n hours['lon'].iloc[i] = coord['lon'].iloc[0]\n hours['location'].iloc[i] = coord['location'].iloc[0]\n except Exception as e:\n # Carros que sairam da frota\n print(e)\n print('id:'+str(hours['car_id'].iloc[i]))\n\nhours = hours.sort_values(by='location')", "_____no_output_____" ], [ "# Somando todos os valores para cada estação\n# A cada loop verifica se ainda existe repetições de id\nwhile(True in list(hours.duplicated(subset=['location']))):\n i = 0\n while i < len(hours):\n try:\n if (hours['location'].iloc[i] == hours['location'].iloc[i+1]):\n print('Antes:')\n print(len(hours))\n \n # Percorre todas as 24 somando a quantidade de viagens de cada carro da estação\n for j in range(24):\n hours[j].iloc[i] = hours[j].iloc[i] + hours[j].iloc[i+1]\n \n # Adicionando mais um ao numero de carros da estação\n hours['n_cars'].iloc[i] = hours['n_cars'].iloc[i] + 1 \n \n # Retirando a linha já analisada\n hours = hours.drop(hours.index[i+1])\n hours.index = range(len(hours))\n \n print('Depois:')\n print(len(hours))\n except Exception as e:\n print(e)\n break\n i+=1", "_____no_output_____" ], [ "# Ordenando por estacionamento\nhours = hours.sort_values(by='location')\n\n# Dividindo pela quantidade de dias e numero de carros\nfor i in range(len(hours)):\n for j in range(24):\n hours[j].iloc[i] = hours[j].iloc[i] / (31*hours['n_cars'].iloc[i])\n", "_____no_output_____" ], [ "aux_csv = hours\naux_csv.dropna(how='any', axis=0, inplace=True)", "_____no_output_____" ], [ "for i in range(24):\n aux_csv[['lat', 'lon', 
i]].to_csv('CarrosPorHora/Hour'+str(i)+'_v2.csv')", "_____no_output_____" ] ], [ [ "## Plotagem em mapas de calor de cada hora", "_____no_output_____" ] ], [ [ "import geoplotlib as gpl\nfrom geoplotlib.utils import read_csv, BoundingBox, DataAccessObject", "_____no_output_____" ], [ "# Imprimindo todas as 24 horas\nfor i in range(0,24):\n \n hora = str(i)\n \n # Lendo csv com dados de tempo estacionado médio, latitude, longitude de cada estacionamento\n location = pd.read_csv('CarrosPorHora/Hour'+hora+'_v2.csv', usecols=[1,2,3])\n data = location\n # Multiplicando os valores por um escalar para se tornarem mais visíveis\n location[hora] = location[hora] * 100\n location_aux = []\n \n \n # Utilizando um auxiliar para gerar repetições de incidencias para a plotagem no mapa de calor\n for i in range(len(location)):\n for j in range(int(location[hora].iloc[i])):\n location_aux.append([location['lat'].iloc[i], location['lon'].iloc[i]])\n \n location_aux = pd.DataFrame(location_aux, columns=['lat', 'lon'])\n \n # Vancouver \n \n gpl.kde(location_aux, bw=3, cut_below=1e-4, cmap='jet', alpha=150 )\n# data = DataAccessObject(pd.DataFrame({'lat': [],'lon': []}))\n# gpl.hist(data, scalemin=0, scalemax=100, cmap='jet', colorscale='lin', alpha=190)\n \n # Coordenadas para o mapa focar em Vancouver\n lat = pd.DataFrame([49.246292, 49.262428, 49.24966])\n lon = pd.DataFrame([-123.11554, -123.116226, -123.04464])\n \n gpl.set_bbox(BoundingBox.from_points(lon[0], lat[0]))\n gpl.request_zoom(12)\n gpl.set_window_size(1280,700)\n gpl.savefig('CarrosPorHora/CarrosPorHoraPNGs/vanc_'+hora+'_v2')\n \n # Victoria\n \n gpl.kde(location_aux, bw=3, cut_below=1e-4, cmap='jet', alpha=150 )\n# data = DataAccessObject(pd.DataFrame({'lat': [],'lon': []}))\n# gpl.hist(data, scalemin=0, scalemax=100, cmap='jet', colorscale='lin', alpha=190)\n \n # Coordenadas para o mapa focar em Victoria\n lat = pd.DataFrame([48.42666, 48.44344, 48.44560])\n lon = pd.DataFrame([-123.36027,-123.35853,-123.33673])\n \n gpl.set_bbox(BoundingBox.from_points(lon[0], lat[0]))\n gpl.request_zoom(13)\n gpl.set_window_size(1280,700)\n \n gpl.savefig('CarrosPorHora/CarrosPorHoraPNGs/vic_'+hora+'_v2')", "_____no_output_____" ] ] ]
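A hedged sketch related to the timezone helper above (not from the repository): the same wall-clock shift that convert_datetime_timezone performs can be written without parsing the UTC offset by hand — dropping tzinfo with replace() covers both the -08:00 and -07:00 (DST) cases in one path. The function name and default zones below are illustrative assumptions.

```python
# Sketch: shift a unix timestamp's wall-clock reading from one zone to another.
import datetime
import pytz

def shift_epoch(ts, tz_from='UTC', tz_to='America/Vancouver'):
    """Return ts shifted so its wall-clock reading moves from tz_from to tz_to."""
    naive = datetime.datetime.fromtimestamp(ts)          # machine-local wall clock
    localized = pytz.timezone(tz_from).localize(naive)   # interpret it as tz_from
    wall = localized.astimezone(pytz.timezone(tz_to)).replace(tzinfo=None)
    return int(wall.timestamp())                         # re-read as machine-local time

print(shift_epoch(1600000000))
```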
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a48fe0a536974a6c925581c528448d786a2932f
329,965
ipynb
Jupyter Notebook
Chapter 05.ipynb
giinie/data-science-handbook
4d3eec3ef714c9e1794f2348fdbfed27d3280544
[ "MIT" ]
21
2017-12-28T03:36:20.000Z
2021-02-07T01:52:35.000Z
Chapter 05.ipynb
giinie/data-science-handbook
4d3eec3ef714c9e1794f2348fdbfed27d3280544
[ "MIT" ]
null
null
null
Chapter 05.ipynb
giinie/data-science-handbook
4d3eec3ef714c9e1794f2348fdbfed27d3280544
[ "MIT" ]
7
2018-05-01T00:18:37.000Z
2021-01-08T06:41:20.000Z
445.296896
70,500
0.946303
[ [ [ "# 5장", "_____no_output_____" ] ], [ [ "import matplotlib\nmatplotlib.rc('font', family=\"NanumBarunGothicOTF\") \n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# 5.2 아이리스 데이터셋", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom matplotlib import pyplot as plt\nimport sklearn.datasets\n\ndef get_iris_df():\n ds = sklearn.datasets.load_iris()\n df = pd.DataFrame(ds['data'], columns=ds['feature_names'])\n code_species_map = dict(zip(\n range(3), ds['target_names']))\n df['species'] = [code_species_map[c] for c in ds['target']]\n return df\n\ndf = get_iris_df()\ndf_iris = df", "_____no_output_____" ] ], [ [ "# 5.3 원형 차트", "_____no_output_____" ] ], [ [ "sums_by_species = df.groupby('species').sum()\nvar = 'sepal width (cm)'\nsums_by_species[var].plot(kind='pie', fontsize=20)\nplt.ylabel(var, horizontalalignment='left')\nplt.title('꽃받침 너비로 분류한 붓꽃', fontsize=25)\n# plt.savefig('iris_pie_for_one_variable.png')\n# plt.close()", "_____no_output_____" ], [ "sums_by_species = df.groupby('species').sum()\nsums_by_species.plot(kind='pie', subplots=True,\nlayout=(2,2), legend=False)\nplt.title('종에 따른 전체 측정값Total Measurements, by Species')\n# plt.savefig('iris_pie_for_each_variable.png')\n# plt.close()\n", "_____no_output_____" ] ], [ [ "# 5.4 막대그래프", "_____no_output_____" ] ], [ [ "sums_by_species = df.groupby('species').sum()\nvar = 'sepal width (cm)'\nsums_by_species[var].plot(kind='bar', fontsize=15, rot=30)\n\nplt.title('꽃받침 너비(cm)로 분류한 붓꽃', fontsize=20)\n# plt.savefig('iris_bar_for_one_variable.png')\n# plt.close()\nsums_by_species = df.groupby('species').sum()\nsums_by_species.plot(\n kind='bar', subplots=True, fontsize=12)\nplt.suptitle('종에 따른 전체 측정값')\n# plt.savefig('iris_bar_for_each_variable.png')\n# plt.close()", "_____no_output_____" ] ], [ [ "# 5.5 히스토그램", "_____no_output_____" ] ], [ [ "df.plot(kind='hist', subplots=True, layout=(2,2))\nplt.suptitle('붓꽃 히스토그램', fontsize=20)\n# plt.show()", "_____no_output_____" ], [ "for spec in df['species'].unique():\n forspec = df[df['species']==spec]\n forspec['petal length (cm)'].plot(kind='hist', alpha=0.4, label=spec)\n\nplt.legend(loc='upper right')\nplt.suptitle('종에 따른 꽃잎 길이')\n# plt.savefig('iris_hist_by_spec.png')", "_____no_output_____" ] ], [ [ "# 5.6 평균, 표준편차, 중간값, 백분위", "_____no_output_____" ] ], [ [ "col = df['petal length (cm)']\naverage = col.mean()\nstd = col.std()\nmedian = col.quantile(0.5)\npercentile25 = col.quantile(0.25)\npercentile75 = col.quantile(0.75)\nprint(average, std, median, percentile25, percentile75)", "3.75866666667 1.76442041995 4.35 1.6 5.1\n" ] ], [ [ "### 아웃라이어 걸러내기", "_____no_output_____" ] ], [ [ "col = df['petal length (cm)']\nperc25 = col.quantile(0.25)\nperc75 = col.quantile(0.75)\nclean_avg = col[(col>perc25)&(col<perc75)].mean()\nprint(clean_avg)", "4.0984375\n" ] ], [ [ "# 5.7 상자그림", "_____no_output_____" ] ], [ [ "col = 'sepal length (cm)'\ndf['ind'] = pd.Series(df.index).apply(lambda i: i% 50)\ndf.pivot('ind','species')[col].plot(kind='box')\n# plt.show()", "_____no_output_____" ] ], [ [ "# 5.8 산포도", "_____no_output_____" ] ], [ [ "df.plot(kind=\"scatter\",\n x=\"sepal length (cm)\", y=\"sepal width (cm)\")\nplt.title(\"Length vs Width\")\n# plt.show()", "_____no_output_____" ], [ "colors = [\"r\", \"g\", \"b\"]\nmarkers= [\".\", \"*\", \"^\"]\nfig, ax = plt.subplots(1, 1)\nfor i, spec in enumerate(df['species'].unique() ):\n ddf = df[df['species']==spec]\n ddf.plot(kind=\"scatter\",\n x=\"sepal width (cm)\", y=\"sepal length (cm)\",\n alpha=0.5, s=10*(i+1), ax=ax,\n 
color=colors[i], marker=markers[i], label=spec)\n \nplt.legend()\nplt.show()", "_____no_output_____" ], [ "import pandas as pd\nimport sklearn.datasets as ds\nimport matplotlib.pyplot as plt\n# 팬다스 데이터프레임 생성\nbs = ds.load_boston()\ndf = pd.DataFrame(bs.data, columns=bs.feature_names)\ndf['MEDV'] = bs.target\n# 일반적인 산포도\ndf.plot(x='CRIM',y='MEDV',kind='scatter')\nplt.title('일반축에 나타낸 범죄 발생률')\n# plt.show()", "_____no_output_____" ] ], [ [ "## 로그를 적용", "_____no_output_____" ] ], [ [ "df.plot(x='CRIM',y='MEDV',kind='scatter',logx=True)\nplt.title('Crime rate on logarithmic axis')\nplt.show()", "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/matplotlib/mathtext.py:854: MathTextWarning: Font 'default' does not have a glyph for '-' [U+2212]\n MathTextWarning)\n/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/matplotlib/mathtext.py:855: MathTextWarning: Substituting with a dummy symbol.\n warn(\"Substituting with a dummy symbol.\", MathTextWarning)\n" ] ], [ [ "# 5.10 산포 행렬", "_____no_output_____" ] ], [ [ "from pandas.tools.plotting import scatter_matrix\nscatter_matrix(df_iris)\nplt.show()", "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/ipykernel_launcher.py:2: FutureWarning: 'pandas.tools.plotting.scatter_matrix' is deprecated, import 'pandas.plotting.scatter_matrix' instead.\n \n" ] ], [ [ "# 5.11 히트맵", "_____no_output_____" ] ], [ [ "df_iris.plot(kind=\"hexbin\", x=\"sepal width (cm)\", y=\"sepal length (cm)\")\nplt.show()", "_____no_output_____" ] ], [ [ "# 5.12 상관관계", "_____no_output_____" ] ], [ [ "df[\"sepal width (cm)\"].corr(df[\"sepal length (cm)\"]) # Pearson corr", "_____no_output_____" ], [ "df[\"sepal width (cm)\"].corr(df[\"sepal length (cm)\"], method=\"pearson\")", "_____no_output_____" ], [ "df[\"sepal width (cm)\"].corr(df[\"sepal length (cm)\"], method=\"spearman\")", "_____no_output_____" ], [ "df[\"sepal width (cm)\"].corr(df[\"sepal length (cm)\"], method=\"spearman\")", "_____no_output_____" ] ], [ [ "# 5.12 시계열 데이터", "_____no_output_____" ] ], [ [ "# $ pip install statsmodels\nimport statsmodels.api as sm\ndta = sm.datasets.co2.load_pandas().data\ndta.plot()\nplt.title(\"이산화탄소 농도\")\nplt.ylabel(\"PPM\")\nplt.show()", "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/importlib/_bootstrap.py:321: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n return f(*args, **kwds)\n" ] ], [ [ "## 구글 주가 불러오는 코드는 야후 API가 작동하지 않아서 생략합니다.", "_____no_output_____" ] ] ]
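An illustrative sketch related to the percentile section above (not in the original chapter): the same interquartile trimming wrapped in a reusable helper, applied to made-up sample values.

```python
# Sketch: mean of the values strictly between the 25th and 75th percentiles.
import pandas as pd

def iqr_trimmed_mean(col):
    """Trimmed mean that drops everything outside the interquartile range."""
    lo, hi = col.quantile(0.25), col.quantile(0.75)
    return col[(col > lo) & (col < hi)].mean()

s = pd.Series([1.4, 1.6, 4.5, 4.7, 5.0, 5.1, 6.9])
print(iqr_trimmed_mean(s))
```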
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a490e7f159d30ab5461c99224edb4affe9f4456
9,559
ipynb
Jupyter Notebook
Custom Configuration Notebook.ipynb
amirmasoud/NDIC
716f7db98c1eb82bc86de8082fb47909582281bd
[ "MIT" ]
18
2021-12-08T04:47:07.000Z
2022-03-20T02:46:39.000Z
Custom Configuration Notebook.ipynb
amirmasoud/NDIC
716f7db98c1eb82bc86de8082fb47909582281bd
[ "MIT" ]
1
2021-11-06T20:54:51.000Z
2021-11-06T20:54:51.000Z
Custom Configuration Notebook.ipynb
amirmasoud/NDIC
716f7db98c1eb82bc86de8082fb47909582281bd
[ "MIT" ]
4
2021-11-08T19:06:38.000Z
2022-01-27T17:02:43.000Z
30.736334
449
0.607176
[ [ [ "In this notebook you can define your own configuration and run the model based on your custom configuration.", "_____no_output_____" ], [ "## Dataset", "_____no_output_____" ], [ "`dataset_name` is the name of the dataset which will be used in the model. In case of using KITTI, `dataset_path` shows the path to `data_paths` directory that contains every image and its pair path, and for Cityscape it is the path to the directory that contains `leftImg8bit` and `rightImg8bit` folders. The `resize` value selects the width, and the height dimensions that each image will be resized to.", "_____no_output_____" ] ], [ [ "dataset_name: 'KITTI'\ndataset_path = '.'\nresize = [128, 256]", "_____no_output_____" ] ], [ [ "## Model", "_____no_output_____" ], [ "`baseline_model` selects the compression model. The accepted models for this parameter are bmshj18 for [Variational image compression with a scale hyperprior](https://arxiv.org/abs/1802.01436) and bls17 for [End-to-end Optimized Image Compression](https://arxiv.org/abs/1611.01704). If `use_side_info` is set as `True`, then the baseline model is modified using our proposed method for using side information for compressing.\nIf `load_weight` is `True`, then in model initialization, the weight saved in `weight_path` is loaded to the model. You can also specify the experiment name in `experiment_name`.", "_____no_output_____" ] ], [ [ "baseline_model = 'bls17' # can be bmshj18 for Variational image compression with a scale hyperprior by Ballé, et al.\n # or bls17 for End-to-end Optimized Image Compression by Ballé, et al.\nuse_side_info = True # if True then the modified version of baseline model for distributed compression is used.\nnum_filters = 192 # number of filters used in the baseline model network\ncuda = True\nload_weight = False\nweight_path = './pretrained_weights/ours+balle17_MS-SSIM_lambda3e-05.pt' # weight path for loading the weight\n# note that we provide some pretrained weights, accessible from the anonymous link provided in README.md", "_____no_output_____" ] ], [ [ "## Training", "_____no_output_____" ], [ "For training set `train` to be `True`. `lambda` shows the lambda value in the rate-distortion equation and `alpha` and `beta` correspond to the handles on the reconstruction of the correlated image and amount of common information extracted from the decoder-only side information, respectively. `distortion_loss` selects the distortion evaluating method. Its accepted values are MS-SSIM for the ms-ssim method or MSE for mean squared error.\n`verbose_period: 50` indicates that every 50 epochs print the results of the validation dataset.", "_____no_output_____" ] ], [ [ "train = True\nepochs = 50000\ntrain_batch_size = 1\nlr = 0.0001\nlmbda = 0.00003 # the lambda value in rate-distortion equation\nalpha = 1\nbeta = 1\ndistortion_loss = 'MS-SSIM' # can be MS-SSIM or MSE. selects the method by which the distortion is calculated during training\nverbose_period = 50 # non-positive value indicates no verbose", "_____no_output_____" ] ], [ [ "## Weights and Results parameters", "_____no_output_____" ], [ "If you wish to save the model weights after training set `save_weights` `True`. `save_output_path` shows the directory path where the model weights are saved.\nFor the weights, in `save_output_path` a `weight` folder will be created, and the weights will be saved there with the name according to `experiment_name`. 
", "_____no_output_____" ] ], [ [ "save_weights = True\nsave_output_path = './outputs' # path where results and weights will be saved\nexperiment_name = 'bls17_with_side_info_MS-SSIM_lambda:3e-05'", "_____no_output_____" ] ], [ [ "## Test", "_____no_output_____" ], [ "If you wish to test the model and save the results set `test` to `True`. If `save_image` is set to `True` then a `results` folder will be created, and the reconstructed images will be saved in `save_output_path/results` during testing, with the results named according to `experiment_name`.", "_____no_output_____" ] ], [ [ "test = True\nsave_image = True", "_____no_output_____" ] ], [ [ "## Inference", "_____no_output_____" ], [ "In order to (only) carry out inference, please open `configs/config.yaml` and change the relevant lines as follows:", "_____no_output_____" ] ], [ [ "resize = [128, 256] # we used this crop size for our inference\ndataset_path = '.'\ntrain = False\nload_weight = True\ntest = True\nsave_output_path = './inference' \nsave_image = True ", "_____no_output_____" ] ], [ [ "Download the desired weights and put them in `pretrained_weights` folder and put the dataset folder in the root . \n\nBased on the weight you chose, specify the weight name, and the experiment name in `configs/config.yaml`:", "_____no_output_____" ] ], [ [ "weight_path: './pretrained_weights/...' # load a specified pre-trained weight\nexperiment_name: '...' # a handle for the saved results of the inference", "_____no_output_____" ] ], [ [ "Also, change `baseline_model` and `use_side_info` parameters in `configs/config.yaml` accordingly.\nFor example, for the `balle2017+ours` weights, these parameters should be: ", "_____no_output_____" ] ], [ [ "baseline_model: 'bls17'\nuse_side_info: True", "_____no_output_____" ] ], [ [ "After running the code using the commands in below section, the results will be saved in `inference` folder.", "_____no_output_____" ], [ "## Saving Custom Configuration", "_____no_output_____" ], [ "By running this piece of code you can save your configuration as a yaml file file in the configs folder. You can set your configuration file name by changing `config_name` variable.", "_____no_output_____" ] ], [ [ "import yaml\n\nconfig = {\n \"dataset_name\": dataset_name,\n \"dataset_path\": dataset_path,\n \"resize\": resize,\n \"baseline_model\": baseline_model,\n \"use_side_info\": use_side_info,\n \"num_filters\": num_filters,\n \"cuda\": cuda,\n \"load_weight\": load_weight,\n \"weight_path\": weight_path,\n \"experiment_name\": experiment_name,\n \"train\": train,\n \"epochs\": epochs,\n \"train_batch_size\": train_batch_size,\n \"lr\": lr,\n \"lambda\": lmbda,\n \"distortion_loss\": distortion_loss,\n \"verbose_period\": verbose_period,\n \"save_weights\": save_weights,\n \"save_output_path\": save_output_path,\n \"test\": test,\n \"save_image\": save_image\n}\n\nconfig_name = \"CUSTOM_CONFIG_FILE_NAME.yaml\"\n\nwith open('configs/' + config_name) + config_name, 'w') as outfile:\n yaml.dump(config, outfile, default_flow_style=None, sort_keys=False)", "_____no_output_____" ] ], [ [ "## Running the Model", "_____no_output_____" ] ], [ [ "!python main.py --config=configs/$config_name", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a4917c3f793477cf43a3244e958f272e9defb5c
7,397
ipynb
Jupyter Notebook
Data_Prep_Notebooks/DataPrep-US-Company-ML.ipynb
sqweets/Final-Project
482fd0e744fb64440e7e65bc6fb178ea6c2abf4a
[ "MIT" ]
null
null
null
Data_Prep_Notebooks/DataPrep-US-Company-ML.ipynb
sqweets/Final-Project
482fd0e744fb64440e7e65bc6fb178ea6c2abf4a
[ "MIT" ]
null
null
null
Data_Prep_Notebooks/DataPrep-US-Company-ML.ipynb
sqweets/Final-Project
482fd0e744fb64440e7e65bc6fb178ea6c2abf4a
[ "MIT" ]
null
null
null
33.470588
122
0.534406
[ [ [ "# Data analysis and wrangling\nimport pandas as pd\nimport numpy as np\nimport random as rnd\n\n# Visualization\nimport matplotlib.pyplot as plt\n\n# Machine learning imports\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import binarize, LabelEncoder, MinMaxScaler\n", "_____no_output_____" ], [ "# Create the data frame\ntrain_df = pd.read_csv('../Resources/survey.csv')\n\n#Whats the data row count?\nprint(train_df.shape)\n \n#Whats the distribution of the data?\nprint(train_df.describe())\n \n#What types of data\nprint(train_df.info())\n\n#train_df.head(20)", "(1259, 27)\n Age\ncount 1.259000e+03\nmean 7.942815e+07\nstd 2.818299e+09\nmin -1.726000e+03\n25% 2.700000e+01\n50% 3.100000e+01\n75% 3.600000e+01\nmax 1.000000e+11\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1259 entries, 0 to 1258\nData columns (total 27 columns):\nTimestamp 1259 non-null object\nAge 1259 non-null int64\nGender 1259 non-null object\nCountry 1259 non-null object\nstate 744 non-null object\nself_employed 1241 non-null object\nfamily_history 1259 non-null object\ntreatment 1259 non-null object\nwork_interfere 995 non-null object\nno_employees 1259 non-null object\nremote_work 1259 non-null object\ntech_company 1259 non-null object\nbenefits 1259 non-null object\ncare_options 1259 non-null object\nwellness_program 1259 non-null object\nseek_help 1259 non-null object\nanonymity 1259 non-null object\nleave 1259 non-null object\nmental_health_consequence 1259 non-null object\nphys_health_consequence 1259 non-null object\ncoworkers 1259 non-null object\nsupervisor 1259 non-null object\nmental_health_interview 1259 non-null object\nphys_health_interview 1259 non-null object\nmental_vs_physical 1259 non-null object\nobs_consequence 1259 non-null object\ncomments 164 non-null object\ndtypes: int64(1), object(26)\nmemory usage: 265.6+ KB\nNone\n" ], [ "# # Split out US and non-US\ntrain_df = train_df.loc[train_df['Country'] == \"United States\"].copy()\n", "_____no_output_____" ], [ "# Remove everything that a company doesn't have control over, or doesn't really apply\ntrain_df = train_df.drop(['Timestamp', 'Age', 'state', 'comments', 'Country', 'state', 'Gender', 'self_employed',\n 'family_history', 'work_interfere', 'no_employees', 'remote_work', 'tech_company', \n 'phys_health_consequence', 'mental_health_interview', 'phys_health_interview',\n 'comments'], axis=1)", "_____no_output_____" ], [ "# Clean NaNs\n\n# Assign default values for each data type\ndefaultString = 'NaN'\n\n# Create lists by data tpe\nstringFeatures = ['treatment', 'anonymity', 'leave', 'mental_health_consequence', 'coworkers', 'supervisor',\n 'mental_vs_physical', 'obs_consequence', 'benefits', 'care_options', 'wellness_program',\n 'seek_help']\n\n# Clean the NaN's\nfor feature in train_df:\n train_df[feature] = train_df[feature].fillna(defaultString)\n\n#train_df.head(5)\n", "_____no_output_____" ], [ "# Encoding data\n\n# Change string responses to numerical values\nlabelDict = {}\nfor feature in train_df:\n le = preprocessing.LabelEncoder()\n le.fit(train_df[feature])\n le_name_mapping = dict(zip(le.classes_, le.transform(le.classes_)))\n train_df[feature] = le.transform(train_df[feature])\n # Get labels\n labelKey = 'label_' + feature\n labelValue = [*le_name_mapping]\n 
labelDict[labelKey] = labelValue\n \nfor key, value in labelDict.items(): \n print(key, value)\n", "label_treatment ['No', 'Yes']\nlabel_benefits [\"Don't know\", 'No', 'Yes']\nlabel_care_options ['No', 'Not sure', 'Yes']\nlabel_wellness_program [\"Don't know\", 'No', 'Yes']\nlabel_seek_help [\"Don't know\", 'No', 'Yes']\nlabel_anonymity [\"Don't know\", 'No', 'Yes']\nlabel_leave [\"Don't know\", 'Somewhat difficult', 'Somewhat easy', 'Very difficult', 'Very easy']\nlabel_mental_health_consequence ['Maybe', 'No', 'Yes']\nlabel_coworkers ['No', 'Some of them', 'Yes']\nlabel_supervisor ['No', 'Some of them', 'Yes']\nlabel_mental_vs_physical [\"Don't know\", 'No', 'Yes']\nlabel_obs_consequence ['No', 'Yes']\n" ], [ "# Output the cleaned us dataframe and not cleaned not us dataframe to a new files\ntrain_df.to_csv('../Resources/us-company-ml.csv')\n", "_____no_output_____" ] ] ]
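A small illustrative sketch of what the label dictionary above is for (not from the notebook): recovering the original string answers from the encoded integers. The `encoded` values here are made-up examples; the label lists match the printed output above.

```python
# Sketch: decode integer codes back to survey answers via the saved label lists.
label_dict = {
    'label_treatment': ['No', 'Yes'],
    'label_leave': ["Don't know", 'Somewhat difficult', 'Somewhat easy',
                    'Very difficult', 'Very easy'],
}
encoded = {'treatment': 1, 'leave': 2}

decoded = {feat: label_dict['label_' + feat][code]
           for feat, code in encoded.items()}
print(decoded)  # {'treatment': 'Yes', 'leave': 'Somewhat easy'}
```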
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a491a121e716274939fb744381861d7acae21a2
78,112
ipynb
Jupyter Notebook
Case 1. Heart Disease Classification.ipynb
Joonakle/cognitive-systems-for-healthtechnology-applications
da0b4a979557c877c893714edc62a1e9fafb221b
[ "MIT" ]
null
null
null
Case 1. Heart Disease Classification.ipynb
Joonakle/cognitive-systems-for-healthtechnology-applications
da0b4a979557c877c893714edc62a1e9fafb221b
[ "MIT" ]
null
null
null
Case 1. Heart Disease Classification.ipynb
Joonakle/cognitive-systems-for-healthtechnology-applications
da0b4a979557c877c893714edc62a1e9fafb221b
[ "MIT" ]
null
null
null
116.585075
23,050
0.824816
[ [ [ "# Case 1. Heart Disease Classification \nJoona Klemetti \n4.2.2018 \nCognitive Systems for Health Technology Applications \nHelsinki Metropolia University of Applied Science", "_____no_output_____" ], [ "# 1. Objectives \nThe aim of this case is learn to manipulate and read data from externals sources using panda’s functions and use keras dense neural networks to make an expert system to support in diagnostic decision making. \n<br>\nAfter the neural network and the expert system is made it's intended to examine how number of nodes, layers and epochs affects to systems reliability. Also it's tested how batch size \nand train-test distribution affects the results.\n", "_____no_output_____" ], [ "# 2. Required libraries \nAt first it is necessary to import all libraries. In this assignment is used numpy to scientific computing and creating multidimensional arrays, matplotlib to ploting figures, pandas to data analysis and handling, scikit-learn to preprocessing data and spliting it train and test groups and keras to build the neural network.", "_____no_output_____" ] ], [ [ "# import libraries\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\nimport pandas as pd\n\nimport sklearn as sk\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\n\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation, Dropout\nfrom keras import models\nfrom keras import layers", "Using TensorFlow backend.\n" ], [ "# Check the versions\nprint('numpy:', np.__version__)\nprint('pandas:', pd.__version__)\nprint('sklearn:', sk.__version__)\nprint('keras:', keras.__version__)", "numpy: 1.12.1\npandas: 0.22.0\nsklearn: 0.19.1\nkeras: 2.1.2\n" ] ], [ [ "# 3. Data description and preprocessing \nData consists four different datasets with numerical information of heart disease diagnosis. All datasets is modified in same formation. Because of that it was easy to merge them to one data frame. For more information of used datasets it is recommended to visit original information file made by David Aha https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names \n<br>\nAt first datasets is imported to dataframes and merged to one dataframe. After that data is described and analysed with figures and tables. Data preprocessing is continued for replacing missing values with column mode values. Mode values is elected because mean values could harm some atributes i.e. 'thal' column mean value is 5.088 but its mode value is 3. Mode value means most common value. Because 'thal' value should be 3, 6 or 7 it is recommended to use mode values instead of mean values. \n<br>\nNext step is to define the labels. Data frames 'num' value represent persons health condition. If 'num' value is 0 person is healthy otherwise person got heart disease. Label is the output value and in this case it should be 1 or 0, true or false. After defining the labels it is needed to drop 'num' atribute from training set. Next the data frame is converted to the numerical array and it is scaled between 0 and 1. That is important because otherwise small numerical values may remain insignificant. Last task before defining the neural network is divide the data for training and testing sets. 
It is decided to use 30% of data to testing set and it is executed by using train_test_split() function from scikit-learn library.", "_____no_output_____" ] ], [ [ "# location of datasets\nfilename = 'http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data'\nfilename1 = 'https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.hungarian.data'\nfilename2 = 'https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.switzerland.data'\nfilename3 = 'https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.va.data'\n\n# column names for data\ncolnames = ['age','sex','cp','trestbps',\n 'chol','fbs','restecg','thalach',\n 'exang','oldpeak','slope','ca','thal','num']\n\n# read datas to data frames\ndf1 = pd.read_csv(filename,\n names = colnames,\n na_values = '?')\n\ndf2 = pd.read_csv(filename1,\n names = colnames,\n na_values = '?')\n\ndf3 = pd.read_csv(filename2,\n names = colnames,\n na_values = '?')\n\ndf4 = pd.read_csv(filename3,\n names = colnames,\n na_values = '?')\n\n# merge all 4 data frames to one data frame \nframes = [df1,df2,df3,df4] \n\ndf = pd.concat(frames)\n\ndf.index = range(920)", "_____no_output_____" ], [ "# calculate descriptive statistics\ndf.describe()", "_____no_output_____" ], [ "# Create histogram of age distribution\ndf['age'].hist(bins = np.arange(20, 90, 5))\nplt.xlabel('Age (years)')\nplt.ylabel('Count')\nplt.show()", "_____no_output_____" ], [ "# Replace missing values with column mode values\ndf = df.where(~np.isnan(df), df.mode(), axis = 'columns')", "_____no_output_____" ], [ "# Calculate the labels:\n# Output value: = 0 Normal, 0 > Heart disaese\nlabel = (df['num'] > 0).values", "_____no_output_____" ], [ "#Select the columns for training\ncolumns = ['age','sex','cp','trestbps',\n 'chol','fbs','restecg','thalach',\n 'exang','oldpeak','slope','ca','thal']\n#Convert data into numerical array\ndata = df[columns].values", "_____no_output_____" ], [ "# Scale the data using min_max_scaler\nmin_max_scaler = preprocessing.MinMaxScaler()\ndata = min_max_scaler.fit_transform(data)", "_____no_output_____" ], [ "# dividing the data for training and testing\n\ntrain_data, test_data, train_label, test_label = train_test_split(\ndata, label, test_size = 0.35)", "_____no_output_____" ] ], [ [ "# 4. Modeling and compilation \nIn this case it is choosed to use Keras Sequential model to build dense neural network. At first architecture of network is defined and layers is added via .add() method. Compilation is have to done before training. Compilation configures the learning process. According the formula Nh=Ns(α∗(Ni+No)) number of nodes should be between 6 and 32. It is decided to use 10 nodes for both hidden layers because that seems to work best.", "_____no_output_____" ] ], [ [ "# Define the architecture of the network\n\nnetwork = []\nnetwork = models.Sequential()\nnetwork.add(layers.Dense(10, activation= 'relu', input_shape=(13,)))\nnetwork.add(layers.Dense(10, activation= 'relu', ))\nnetwork.add(layers.Dense(1, activation= 'sigmoid'))", "_____no_output_____" ], [ "# Compile the network\n\nnetwork.compile(optimizer = 'rmsprop',\n loss = 'binary_crossentropy',\n metrics = ['accuracy'])", "_____no_output_____" ] ], [ [ "# 5. Training and Validation \nNext step is train the network. 
Training is executed with the .fit() method.", "_____no_output_____" ] ], [ [ "#Train the network\n# N = number of epochs\nN = 120\n\nh = network.fit(train_data, train_label,\n verbose = 0,\n epochs = N, \n batch_size=128,\n validation_data=(test_data, test_label)\n )", "_____no_output_____" ] ], [ [ "# 6. Evaluation \nEvaluation is done with the .evaluate() method, which computes the test set's loss and accuracy. ", "_____no_output_____" ] ], [ [ "# Evaluate the network\nscore = network.evaluate(test_data, test_label, batch_size = 128)\nscore", "322/322 [==============================] - 0s 25us/step\n" ] ], [ [ "# 7. Results and Discussion \nWhile testing the neural network it was noticed that, regardless of the chosen number of nodes and the train-test split, the accuracy and the loss stay roughly the same. Batch size does not seem to affect the results either. Accuracy generally stays in the range 0.77 to 0.85 and the loss is between 0.40 and 0.50. If the number of nodes is increased too much, a large variation in test accuracy and loss is observed. The best train-test split seems to be about 70% for training and 30% for testing, and small variations in the split are not harmful. The optimal number of nodes seems to be between 8 and 15. Adding more layers does not seem to affect the scores either. Randomness appears to have a large effect on the accuracy and the loss. \n", "_____no_output_____" ] ], [ [ "# Plot the results\n\nepochs = range(1, N + 1)\nacc = h.history['acc']\nval_acc = h.history['val_acc']\nloss = h.history['loss']\nval_loss = h.history['val_loss']\n\n# Accuracy plot\nplt.figure(figsize = (20, 5))\nplt.plot(epochs, acc, 'bo', label='Training')\nplt.plot(epochs, val_acc, 'b', label = 'Validation')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.ylim([0, 1])\nplt.grid()\nplt.legend()\nplt.show()\n\n# Loss plot\nplt.figure(figsize = (20, 5))\nplt.plot(epochs, loss, 'bo', label='Training')\nplt.plot(epochs, val_loss, 'b', label = 'Validation')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.ylim([0.2, 0.8])\nplt.grid()\nplt.legend()\nplt.show()\n", "_____no_output_____" ] ], [ [ "# 8. Conclusions \nCase 1 was a very good introduction to neural networks, and I think the difficulty level was just right at this point. The objectives were achieved and the neural network works. \n<br>\nThere was some variation in the results: even when the same attributes were used, randomness affected the outcome. This must be due to the train_test_split() function, which splits the data into random training and test sets; therefore the results varied between runs. \n<br>\nAbout 80% accuracy is fine, but for a diagnostic medical system it is not enough. Even if the accuracy were always 85%, it would still be too poor. More patient data is needed and the system would have to be developed further before being used in real situations, even though it is only meant to support diagnostic decision making. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a492af92be43688c2afdb87aa05ad8c8c9e78bf
166,412
ipynb
Jupyter Notebook
centralms/notebooks/notes_choletsky.ipynb
changhoonhahn/centralMS
39b4509ea99e47ab5cf1f9be8775d53eee6de80f
[ "MIT" ]
1
2018-05-30T23:43:52.000Z
2018-05-30T23:43:52.000Z
centralms/notebooks/notes_choletsky.ipynb
changhoonhahn/centralMS
39b4509ea99e47ab5cf1f9be8775d53eee6de80f
[ "MIT" ]
2
2017-04-24T22:58:40.000Z
2017-04-24T22:59:28.000Z
centralms/notebooks/notes_choletsky.ipynb
changhoonhahn/centralMS
39b4509ea99e47ab5cf1f9be8775d53eee6de80f
[ "MIT" ]
null
null
null
457.175824
36,354
0.931147
[ [ [ "# figuring out how to generate correlated random variables using Choletsky decomposition", "_____no_output_____" ] ], [ [ "import numpy as np \nimport scipy as sp\n\nimport matplotlib.pyplot as plt\n%matplotlib inline ", "_____no_output_____" ], [ "XY = np.random.randn(2, 1000)", "_____no_output_____" ], [ "C = np.array([[1, 0.99],[0.99, 1]])\nL = np.linalg.cholesky(C)", "_____no_output_____" ], [ "XY_corr = np.dot(L, XY)", "_____no_output_____" ], [ "fig = plt.figure()\nsub = fig.add_subplot(1,2,1)\nsub.scatter(XY[0,:], XY[1,:])\nsub = fig.add_subplot(1,2,2)\nsub.scatter(XY_corr[0,:], XY_corr[1,:])", "_____no_output_____" ], [ "XY = np.zeros((2, 1000))\nXY[0,:] = np.random.uniform(-1., 1., size=1000)\nXY[1,:] = np.random.randn(1000)\n\nXY_corr = np.dot(L, XY)\n\nfig = plt.figure()\nsub = fig.add_subplot(1,2,1)\nsub.scatter(XY[0,:], XY[1,:])\nsub = fig.add_subplot(1,2,2)\nsub.scatter(XY_corr[0,:], XY_corr[1,:])\n\nprint np.cov(XY_corr)", "[[ 0.33211461 0.32732946]\n [ 0.32732946 0.34186679]]\n" ], [ "XY = np.zeros((2, 1000))\nXY[0,:] = np.random.uniform(-1., 1., size=1000)\nXY[1,:] = np.random.uniform(-1., 1., 1000)\n\nXY_corr = np.dot(L, XY)\n\nfig = plt.figure()\nsub = fig.add_subplot(1,2,1)\nsub.scatter(XY[0,:], XY[1,:])\nsub = fig.add_subplot(1,2,2)\nsub.scatter(XY_corr[0,:], XY_corr[1,:])\n\nprint np.cov(XY_corr)\nprint XY[0,:10]\nprint XY_corr[0,:10]", "[[ 0.31261428 0.30694585]\n [ 0.30694585 0.30806052]]\n[-0.64563645 0.80335192 0.66544065 -0.60246825 -0.10004731 0.57098973\n 0.42275835 -0.21441152 -0.0523032 -0.8842028 ]\n[-0.64563645 0.80335192 0.66544065 -0.60246825 -0.10004731 0.57098973\n 0.42275835 -0.21441152 -0.0523032 -0.8842028 ]\n" ], [ "XY = np.random.randn(2, 1000)\nXY_corr = np.dot(L, XY)\nC = np.array([[0.3, 2*0.1224],[2*0.1224, 0.2]])\nL = np.linalg.cholesky(C)\nXY_corr = np.dot(L, XY)\n\nfig = plt.figure()\nsub = fig.add_subplot(1,2,1)\nsub.scatter(XY[0,:], XY[1,:])\nsub = fig.add_subplot(1,2,2)\nsub.scatter(XY_corr[0,:], XY_corr[1,:])", "_____no_output_____" ] ], [ [ "## Instead of Choletsky, we can use the error function\n$$i_{rank} = \\frac{1}{2} \\left[1 - {\\rm erf}(x / \\sqrt{2}) \\right]$$\nwhere \n$$ x = \\frac{SFR - <SFR>}{\\sigma_{log SFR}}$$\n", "_____no_output_____" ] ], [ [ "from scipy.special import erfinv", "_____no_output_____" ], [ "# scratch pad trying to figure out how assembly bias is induced\ndMhalo = np.random.randn(10000)\nisort = np.argsort(dMhalo)\n\nirank = np.zeros(10000)\nirank[isort] = np.arange(10000) + 0.5\nirank /= 10000.\n\n#dlogSFR = 0.2 * 1.414 * erfinv(1. - 2. * irank) + np.sqrt(0.3**2 - 0.2**2) * np.random.randn(1000)\ndlogSFR = 0.2 * 1.414 * erfinv(2. * irank - 1.) + np.sqrt(0.3**2 - 0.2**2) * np.random.randn(10000)\n\nplt.scatter(dMhalo, 0.3*np.random.randn(10000), c='k')\nplt.scatter(dMhalo, dlogSFR, c='r', lw=0)\n\ncov = np.cov(np.array([dMhalo, dlogSFR]))\nprint cov\nr = cov[0,1]/np.sqrt(cov[0,0]*cov[1,1])\nprint r\n\nXY = np.random.randn(2, 10000)\nXY_corr = np.dot(L, XY)\nC = np.array([[1.,r*np.sqrt(0.09)],[r*np.sqrt(0.09), 0.09]])\nL = np.linalg.cholesky(C)\nXY_corr = np.dot(L, XY)\nplt.scatter(XY_corr[0,:], XY_corr[1,:])\nprint np.cov(XY_corr)\n\n#plt.xlim([-1, 1])\n#plt.ylim([-1., 1.])", "[[ 0.97972242 0.1973816 ]\n [ 0.1973816 0.08909991]]\n0.668061605326\n[[ 1.00407164 0.1999674 ]\n [ 0.1999674 0.09028121]]\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a492fa3a4effea329fdaeed930aad60fca13fc0
220,786
ipynb
Jupyter Notebook
Pandas/Python_para_Data_Science_Pandas.ipynb
ThiagoKS-7/Python_para_Data_Science
9b6bb8ce3ce089cf17896726099974081a7efade
[ "Apache-2.0" ]
1
2022-03-25T00:18:40.000Z
2022-03-25T00:18:40.000Z
Pandas/Python_para_Data_Science_Pandas.ipynb
ThiagoKS-7/Python_para_Data_Science
9b6bb8ce3ce089cf17896726099974081a7efade
[ "Apache-2.0" ]
null
null
null
Pandas/Python_para_Data_Science_Pandas.ipynb
ThiagoKS-7/Python_para_Data_Science
9b6bb8ce3ce089cf17896726099974081a7efade
[ "Apache-2.0" ]
null
null
null
28.240727
280
0.387312
[ [ [ "# <font color=green> PYTHON PARA DATA SCIENCE - PANDAS\n---", "_____no_output_____" ], [ "# <font color=green> 1. INTRODUÇÃO AO PYTHON\n---", "_____no_output_____" ], [ "# 1.1 Introdução", "_____no_output_____" ], [ "> Python é uma linguagem de programação de alto nível com suporte a múltiplos paradigmas de programação. É um projeto *open source* e desde seu surgimento, em 1991, vem se tornando uma das linguagens de programação interpretadas mais populares. \n>\n> Nos últimos anos Python desenvolveu uma comunidade ativa de processamento científico e análise de dados e vem se destacando como uma das linguagens mais relevantes quando o assundo é ciência de dados e machine learning, tanto no ambiente acadêmico como também no mercado.", "_____no_output_____" ], [ "# 1.2 Instalação e ambiente de desenvolvimento", "_____no_output_____" ], [ "### Instalação Local\n\n### https://www.python.org/downloads/\n### ou\n### https://www.anaconda.com/distribution/", "_____no_output_____" ], [ "### Google Colaboratory\n\n### https://colab.research.google.com", "_____no_output_____" ], [ "### Verificando versão", "_____no_output_____" ] ], [ [ "!python -V", "Python 3.6.13 :: Anaconda, Inc.\n" ] ], [ [ "# 1.3 Trabalhando com dados", "_____no_output_____" ] ], [ [ "import pandas as pd\npd.set_option('display.max_rows', 10)\npd.set_option('display.max_columns', 10)", "_____no_output_____" ], [ "dataset = pd.read_csv('db.csv', sep = ';')", "_____no_output_____" ], [ "dataset", "_____no_output_____" ], [ "dataset.dtypes", "_____no_output_____" ], [ "dataset[['Quilometragem', 'Valor']].describe()", "_____no_output_____" ], [ "dataset.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 258 entries, 0 to 257\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Nome 258 non-null object \n 1 Motor 258 non-null object \n 2 Ano 258 non-null int64 \n 3 Quilometragem 197 non-null float64\n 4 Zero_km 258 non-null bool \n 5 Acessórios 258 non-null object \n 6 Valor 258 non-null float64\ndtypes: bool(1), float64(2), int64(1), object(3)\nmemory usage: 12.5+ KB\n" ] ], [ [ "# <font color=green> 2. TRABALHANDO COM TUPLAS\n---", "_____no_output_____" ], [ "# 2.1 Criando tuplas\n\nTuplas são sequências imutáveis que são utilizadas para armazenar coleções de itens, geralmente heterogêneos. 
They can be constructed in several ways:\n```\n- Using a pair of parentheses: ( )\n- Using a trailing comma: x,\n- Using a pair of parentheses with comma-separated items: ( x, y, z )\n- Using: tuple() or tuple(iterator)\n```", "_____no_output_____" ] ], [ [ "()", "_____no_output_____" ], [ "1,2,3", "_____no_output_____" ], [ "nome = \"Teste\"\nvalor = 1\n(nome,valor)", "_____no_output_____" ], [ "nomes_carros = tuple(['Jetta Variant', 'Passat', 'Crossfox', 'DS5'])\nnomes_carros", "_____no_output_____" ], [ "type(nomes_carros)", "_____no_output_____" ] ], [ [ "# 2.2 Selections in tuples", "_____no_output_____" ] ], [ [ "nomes_carros = tuple(['Jetta Variant', 'Passat', 'Crossfox', 'DS5'])\nnomes_carros", "_____no_output_____" ], [ "nomes_carros[0]", "_____no_output_____" ], [ "nomes_carros[1]", "_____no_output_____" ], [ "nomes_carros[-1]", "_____no_output_____" ], [ "nomes_carros[1:3]", "_____no_output_____" ], [ "nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5', ('Fusca', 'Gol', 'C4'))\nnomes_carros", "_____no_output_____" ], [ "nomes_carros[-1]\n", "_____no_output_____" ], [ "nomes_carros[-1][1]", "_____no_output_____" ] ], [ [ "# 2.3 Iterating over tuples", "_____no_output_____" ] ], [ [ "nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5')\nnomes_carros", "_____no_output_____" ], [ "for item in nomes_carros:\n    print(item)", "Jetta Variant\nPassat\nCrossfox\nDS5\n" ] ], [ [ "### Tuple unpacking", "_____no_output_____" ] ], [ [ "nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5')\nnomes_carros", "_____no_output_____" ], [ "carro_1, carro_2, carro_3, carro_4 = nomes_carros\n", "_____no_output_____" ], [ "carro_1", "_____no_output_____" ], [ "carro_2", "_____no_output_____" ], [ "carro_3", "_____no_output_____" ], [ "carro_4", "_____no_output_____" ], [ "_, A, _, B = nomes_carros", "_____no_output_____" ], [ "A", "_____no_output_____" ], [ "B", "_____no_output_____" ], [ "_, C, *_ = nomes_carros", "_____no_output_____" ], [ "C", "_____no_output_____" ] ], [ [ "## *zip()*\n\nhttps://docs.python.org/3.6/library/functions.html#zip", "_____no_output_____" ] ], [ [ "carros = ['Jetta Variant', 'Passat', 'Crossfox', 'DS5']\ncarros", "_____no_output_____" ], [ "valores = [88078.64, 106161.94, 72832.16, 124549.07]\nvalores", "_____no_output_____" ], [ "zip(carros, valores)", "_____no_output_____" ], [ "list(zip(carros, valores))", "_____no_output_____" ], [ "for carro, valor in zip(carros, valores):\n    print(carro, valor)", "Jetta Variant 88078.64\nPassat 106161.94\nCrossfox 72832.16\nDS5 124549.07\n" ] ], [ [ "# <font color=green> 3. WORKING WITH DICTIONARIES\n---", "_____no_output_____" ], [ "# 3.1 Creating dictionaries\n\nLists are sequential collections, that is, the items in these sequences are ordered and use indexes (integers) to access the values.\n\nDictionaries are slightly different collections. They are data structures that represent a kind of mapping.
Mappings are collections of associations between pairs of values, where the first element of the pair is known as the key (*key*) and the second as the value (*value*).\n\n```\ndicionario = {key_1: value_1, key_2: value_2, ..., key_n: value_n}\n```\n\nhttps://docs.python.org/3.6/library/stdtypes.html#typesmapping", "_____no_output_____" ] ], [ [ "carros = ['Jetta Variant', 'Passat', 'Crossfox']\ncarros", "_____no_output_____" ], [ "valores = [88078.64, 106161.94, 72832.16]\nvalores", "_____no_output_____" ], [ "carros.index(\"Passat\")", "_____no_output_____" ], [ "valores[carros.index(\"Passat\")]", "_____no_output_____" ], [ "valores_carros = {\"Jetta Variant\": 88078.64, \"Passat\": 106161.94, \"Crossfox\": 72832.16}\nvalores_carros", "_____no_output_____" ], [ "type(valores_carros)", "_____no_output_____" ] ], [ [ "### Creating dictionaries with *zip()*", "_____no_output_____" ] ], [ [ "list(zip(carros, valores))", "_____no_output_____" ], [ "valores_carros = dict(zip(carros, valores))\nvalores_carros", "_____no_output_____" ] ], [ [ "# 3.2 Operations with dictionaries", "_____no_output_____" ] ], [ [ "valores_carros = dict(zip(carros, valores))\nvalores_carros", "_____no_output_____" ] ], [ [ "## *dict[ key ]*\n\nReturns the value corresponding to the key (*key*) in the dictionary.", "_____no_output_____" ] ], [ [ "valores_carros[\"Passat\"]", "_____no_output_____" ] ], [ [ "## *key in dict*\n\nReturns **True** if the key (*key*) is found in the dictionary.", "_____no_output_____" ] ], [ [ "import termcolor\nfrom termcolor import colored", "_____no_output_____" ], [ "is_it = colored('tá lá', 'green') if \"Passat\" in valores_carros else colored('tá não','red')\nprint(f'Tá lá? \\n R: {is_it}')", "Tá lá? \n R: \u001b[32mtá lá\u001b[0m\n" ], [ "is_it = colored('tá lá', 'green') if \"Fusqueta\" in valores_carros else colored('tá não','red')\nprint(f'Tá lá? \\n R: {is_it}')", "Tá lá? \n R: \u001b[31mtá não\u001b[0m\n" ], [ "is_it = colored('tá não','red') if \"Passat\" not in valores_carros else colored('tá lá', 'green')\nprint(f'Tá lá? \\n R: {is_it}')", "Tá lá? \n R: \u001b[32mtá lá\u001b[0m\n" ] ], [ [ "## *len(dict)*\n\nReturns the number of items in the dictionary.", "_____no_output_____" ] ], [ [ "len(valores_carros)", "_____no_output_____" ] ], [ [ "## *dict[ key ] = value*\n\nAdds an item to the dictionary.", "_____no_output_____" ] ], [ [ "valores_carros[\"DS5\"] = 124549.07\n", "_____no_output_____" ], [ "valores_carros", "_____no_output_____" ] ], [ [ "## *del dict[ key ]*\n\nRemoves the item with key *key* from the dictionary.", "_____no_output_____" ] ], [ [ "del valores_carros[\"DS5\"]\n", "_____no_output_____" ], [ "valores_carros", "_____no_output_____" ] ], [ [ "# 3.3 Dictionary methods", "_____no_output_____" ], [ "## *dict.update()*\n\nUpdates the dictionary.", "_____no_output_____" ] ], [ [ "valores_carros", "_____no_output_____" ], [ "valores_carros.update({'DS5': 124549.07})\nvalores_carros", "_____no_output_____" ], [ "valores_carros.update({'DS5': 124549.10, 'Fusca': 75000})\nvalores_carros", "_____no_output_____" ] ], [ [ "## *dict.copy()*\n\nCreates a copy of the dictionary.", "_____no_output_____" ] ], [ [ "copia = valores_carros.copy()", "_____no_output_____" ], [ "copia\n", "_____no_output_____" ], [ "del copia['Fusca']\ncopia", "_____no_output_____" ], [ "valores_carros", "_____no_output_____" ] ], [ [ "## *dict.pop(key[, default ])*\n\nIf the key is found in the dictionary, the item is removed and its value is returned. Otherwise, the value given as *default* is returned.
If the *default* value is not provided and the key is not found in the dictionary, an error is raised.", "_____no_output_____" ] ], [ [ "copia", "_____no_output_____" ], [ "copia.pop('Passat')", "_____no_output_____" ], [ "copia", "_____no_output_____" ], [ "# copia.pop('Passat')", "_____no_output_____" ], [ "copia.pop('Passat', 'Chave não encontrada')", "_____no_output_____" ], [ "copia.pop('DS5', 'Chave não encontrada')", "_____no_output_____" ], [ "copia", "_____no_output_____" ] ], [ [ "## *dict.clear()*\n\nRemoves all items from the dictionary.", "_____no_output_____" ] ], [ [ "copia.clear()", "_____no_output_____" ], [ "copia", "_____no_output_____" ] ], [ [ "# 3.4 Iterating over dictionaries", "_____no_output_____" ], [ "## *dict.keys()*\n\nReturns a list containing the keys (*keys*) of the dictionary.", "_____no_output_____" ] ], [ [ "valores_carros.keys()", "_____no_output_____" ], [ "for key in valores_carros.keys():\n    print(valores_carros[key])", "88078.64\n106161.94\n72832.16\n124549.1\n75000\n" ] ], [ [ "## *dict.values()*\n\nReturns a list with all the values (*values*) of the dictionary.", "_____no_output_____" ] ], [ [ "valores_carros.values()", "_____no_output_____" ] ], [ [ "## *dict.items()*\n\nReturns a list containing one tuple for each key-value (*key-value*) pair of the dictionary.", "_____no_output_____" ] ], [ [ "valores_carros.items()", "_____no_output_____" ], [ "for item in valores_carros.items():\n    print(item)", "('Jetta Variant', 88078.64)\n('Passat', 106161.94)\n('Crossfox', 72832.16)\n('DS5', 124549.1)\n('Fusca', 75000)\n" ], [ "for key,value in valores_carros.items():\n    print(key,value)", "Jetta Variant 88078.64\nPassat 106161.94\nCrossfox 72832.16\nDS5 124549.1\nFusca 75000\n" ], [ "for key,value in valores_carros.items():\n    if(value >= 100000):\n        print(key,value)", "Passat 106161.94\nDS5 124549.1\n" ], [ "dados = {\n    'Crossfox': {'valor': 72000, 'ano': 2005}, \n    'DS5': {'valor': 125000, 'ano': 2015}, \n    'Fusca': {'valor': 150000, 'ano': 1976}, \n    'Jetta': {'valor': 88000, 'ano': 2010}, \n    'Passat': {'valor': 106000, 'ano': 1998}\n}", "_____no_output_____" ], [ "for item in dados.items():\n    if(item[1]['ano'] >= 2000):\n        print(item[0])", "Crossfox\nDS5\nJetta\n" ] ], [ [ "# <font color=green> 4. FUNCTIONS AND PACKAGES\n---\n \nFunctions are reusable units of code that perform a specific task; they can receive some input and can also return some result.", "_____no_output_____" ], [ "# 4.1 Built-in functions\n\nThe Python language has several built-in functions that are always accessible.
We have already used some of them in this training: type(), print(), zip(), len(), set(), etc.\n\nhttps://docs.python.org/3.6/library/functions.html", "_____no_output_____" ] ], [ [ "dados = {'Jetta Variant': 88078.64, 'Passat': 106161.94, 'Crossfox': 72832.16}\ndados", "_____no_output_____" ], [ "valores = []\nfor valor in dados.values():\n    valores.append(valor)\nvalores", "_____no_output_____" ], [ "soma = 0\nfor valor in dados.values():\n    soma += valor\nsoma", "_____no_output_____" ], [ "list(dados.values())", "_____no_output_____" ], [ "sum(dados.values())", "_____no_output_____" ], [ "help(print)", "Help on built-in function print in module builtins:\n\nprint(...)\n    print(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n    \n    Prints the values to a stream, or to sys.stdout by default.\n    Optional keyword arguments:\n    file:  a file-like object (stream); defaults to the current sys.stdout.\n    sep:   string inserted between values, default a space.\n    end:   string appended after the last value, default a newline.\n    flush: whether to forcibly flush the stream.\n\n" ], [ "print?", "_____no_output_____" ] ], [ [ "# 4.2 Defining functions with and without parameters", "_____no_output_____" ], [ "### Functions without parameters\n\n#### Standard format\n\n```\ndef <name>():\n    <statements>\n```", "_____no_output_____" ] ], [ [ "def mean():\n    valor = (1+2+3)/3\n    return valor\n    ", "_____no_output_____" ], [ "mean()", "_____no_output_____" ] ], [ [ "### Functions with parameters\n\n#### Standard format\n\n```\ndef <name>(<param_1>, <param_2>, ..., <param_n>):\n    <statements>\n```", "_____no_output_____" ] ], [ [ "def mean(lista):\n    mean = sum(lista)/len(lista) \n    return mean", "_____no_output_____" ], [ "media = mean([1,2,3])\nprint(f'A média é: {media}')", "A média é: 2.0\n" ], [ "media = mean([65665656,96565454,4565545])\nprint(f'A média é: {media}')", "A média é: 55598885.0\n" ], [ "dados = {\n    'Crossfox': {'km': 35000, 'ano': 2005}, \n    'DS5': {'km': 17000, 'ano': 2015}, \n    'Fusca': {'km': 130000, 'ano': 1979}, \n    'Jetta': {'km': 56000, 'ano': 2011}, \n    'Passat': {'km': 62000, 'ano': 1999}\n}", "_____no_output_____" ], [ "def km_media(dataset, ano_atual):\n    for item in dataset.items():\n        result = item[1]['km'] / (ano_atual - item[1]['ano'])\n        print(result)", "_____no_output_____" ], [ "km_media(dados,2019)", "2500.0\n4250.0\n3250.0\n7000.0\n3100.0\n" ] ], [ [ "# 4.3 Defining functions that return values", "_____no_output_____" ], [ "### Functions that return one value\n\n#### Standard format\n\n```\ndef <name>(<param_1>, <param_2>, ..., <param_n>):\n    <statements>\n    return <result>\n```", "_____no_output_____" ] ], [ [ "def mean(lista):\n    mean = sum(lista)/len(lista) \n    return mean\nresult = mean([1,2,3])\nresult", "_____no_output_____" ] ], [ [ "### Functions that return more than one value\n\n#### Standard format\n\n```\ndef <name>(<param_1>, <param_2>, ..., <param_n>):\n    <statements>\n    return (<result_1>, <result_2>, ..., <result_n>)\n```", "_____no_output_____" ] ], [ [ "def mean(lista):\n    mean = sum(lista)/len(lista) \n    return (mean,len(lista))", "_____no_output_____" ], [ "result = mean([1,2,3])\nresult", "_____no_output_____" ], [ "result, length = mean([1,2,3])\nprint(f'{result}, {length}')\n", "2.0, 3\n" ], [ "dados = {\n    'Crossfox': {'km': 35000, 'ano': 2005}, \n    'DS5': {'km': 17000, 'ano': 2015}, \n    'Fusca': {'km': 130000, 'ano': 1979}, \n    'Jetta': {'km': 56000, 'ano': 2011}, \n    'Passat': {'km': 62000, 'ano': 1999}\n}", "_____no_output_____" ], [ "\ndef km_media(dataset, ano_atual):\n    result = {}\n    for
item in dataset.items():\n        media = item[1]['km'] / (ano_atual - item[1]['ano'])\n        item[1].update({ 'km_media': media })\n        result.update({ item[0]: item[1] })\n    return result", "_____no_output_____" ], [ "km_media(dados, 2019)", "_____no_output_____" ] ], [ [ "# <font color=green> 5. PANDAS BASICS\n---\n\n**version: 0.25.2**\n \nPandas is a high-level data manipulation tool built on top of the Numpy package. The pandas package has very useful data structures for data manipulation, and for that reason it is widely used by data scientists.\n\n\n## Data Structures\n\n### Series\n\nSeries are one-dimensional labeled arrays capable of storing any type of data. The row labels are called the **index**. The basic way to create a Series is as follows:\n\n\n```\n    s = pd.Series(dados, index = index)\n```\n\nThe *dados* argument can be a dictionary, a list, a Numpy array or a constant.\n\n### DataFrames\n\nA DataFrame is a two-dimensional tabular data structure with labels on the rows and columns. Like the Series, DataFrames are capable of storing any type of data.\n\n\n```\n    df = pd.DataFrame(dados, index = index, columns = columns)\n```\n\nThe *dados* argument can be a dictionary, a list, a Numpy array, a Series or another DataFrame.\n\n**Documentation:** https://pandas.pydata.org/pandas-docs/version/0.25/", "_____no_output_____" ], [ "# 5.1 Data structures", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ] ], [ [ "### Creating a Series from a list", "_____no_output_____" ] ], [ [ "carros = ['Jetta Variant', 'Passat', 'Crossfox']\ncarros", "_____no_output_____" ], [ "pd.Series(carros)", "_____no_output_____" ] ], [ [ "### Creating a DataFrame from a list of dictionaries", "_____no_output_____" ] ], [ [ "dados = [\n    {'Nome': 'Jetta Variant', 'Motor': 'Motor 4.0 Turbo', 'Ano': 2003, 'Quilometragem': 44410.0, 'Zero_km': False, 'Valor': 88078.64},\n    {'Nome': 'Passat', 'Motor': 'Motor Diesel', 'Ano': 1991, 'Quilometragem': 5712.0, 'Zero_km': False, 'Valor': 106161.94},\n    {'Nome': 'Crossfox', 'Motor': 'Motor Diesel V8', 'Ano': 1990, 'Quilometragem': 37123.0, 'Zero_km': False, 'Valor': 72832.16}\n]", "_____no_output_____" ], [ "dataset = pd.DataFrame(dados) ", "_____no_output_____" ], [ "dataset", "_____no_output_____" ], [ "dataset[['Motor','Valor','Ano', 'Nome', 'Quilometragem', 'Zero_km']]", "_____no_output_____" ] ], [ [ "### Creating a DataFrame from a dictionary", "_____no_output_____" ] ], [ [ "dados = {\n    'Nome': ['Jetta Variant', 'Passat', 'Crossfox'], \n    'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8'],\n    'Ano': [2003, 1991, 1990],\n    'Quilometragem': [44410.0, 5712.0, 37123.0],\n    'Zero_km': [False, False, False],\n    'Valor': [88078.64, 106161.94, 72832.16]\n}", "_____no_output_____" ], [ "dataset = pd.DataFrame(dados)", "_____no_output_____" ], [ "dataset", "_____no_output_____" ] ], [ [ "### Creating a DataFrame from an external file", "_____no_output_____" ] ], [ [ "dataset = pd.read_csv('db.csv', sep=';', index_col =0)\ndataset", "_____no_output_____" ], [ "dados = {\n    'Crossfox': {'km': 35000, 'ano': 2005}, \n    'DS5': {'km': 17000, 'ano': 2015}, \n    'Fusca': {'km': 130000, 'ano': 1979}, \n    'Jetta': {'km': 56000, 'ano': 2011}, \n    'Passat': {'km': 62000, 'ano': 1999}\n}", "_____no_output_____" ], [ "def km_media(dataset, ano_atual):\n    result = {}\n    for item in dataset.items():\n        media = item[1]['km'] / (ano_atual - item[1]['ano'])\n
item[1].update({ 'km_media': media })\n        result.update({ item[0]: item[1] })\n\n    return result", "_____no_output_____" ], [ "km_media(dados, 2019)", "_____no_output_____" ], [ "import pandas as pd\ncarros = pd.DataFrame(km_media(dados, 2019)).T\ncarros", "_____no_output_____" ] ], [ [ "# 5.2 Selections with DataFrames", "_____no_output_____" ], [ "### Selecting columns", "_____no_output_____" ] ], [ [ "dataset.head()", "_____no_output_____" ], [ "dataset['Valor']", "_____no_output_____" ], [ "type(dataset['Valor'])", "_____no_output_____" ], [ "dataset[['Valor']]", "_____no_output_____" ], [ "type(dataset[['Valor']])", "_____no_output_____" ] ], [ [ "### Selecting rows - [ i : j ] \n\n<font color=red>**Note:**</font> Indexing starts at zero, and in slices (*slices*) the row with index i is **included** and the row with index j is **not included** in the result.", "_____no_output_____" ] ], [ [ "dataset[0:3]", "_____no_output_____" ] ], [ [ "### Using .loc for selections\n\n<font color=red>**Note:**</font> Selects a group of rows and columns according to their labels or with a boolean array.", "_____no_output_____" ] ], [ [ "dataset.loc[['Passat', 'DS5']]", "_____no_output_____" ], [ "dataset.loc[['Passat', 'DS5'], ['Motor', 'Ano']]", "_____no_output_____" ], [ "dataset.loc[:, ['Motor', 'Ano']]", "_____no_output_____" ] ], [ [ "### Using .iloc for selections\n\n<font color=red>**Note:**</font> Selects based on the indexes, that is, based on the position of the information.", "_____no_output_____" ] ], [ [ "dataset.head()", "_____no_output_____" ], [ "dataset.iloc[1]", "_____no_output_____" ], [ "dataset.iloc[[1]]", "_____no_output_____" ], [ "dataset.iloc[1:4]", "_____no_output_____" ], [ "dataset.iloc[1:4, [0, 5, 2]]", "_____no_output_____" ], [ "dataset.iloc[[1,42,22], [0, 5, 2]]", "_____no_output_____" ], [ "dataset.iloc[:, [0, 5, 2]]", "_____no_output_____" ], [ "import pandas as pd\n\ndados = {\n    'Nome': ['Jetta', 'Passat', 'Crossfox', 'DS5', 'Fusca'], \n    'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8', 'Motor 2.0', 'Motor 1.6'],\n    'Ano': [2019, 2003, 1991, 2019, 1990],\n    'Quilometragem': [0.0, 5712.0, 37123.0, 0.0, 120000.0],\n    'Zero_km': [True, False, False, True, False],\n    'Valor': [88000.0, 106000.0, 72000.0, 89000.0, 32000.0]\n}\n\ndataset = pd.DataFrame(dados)", "_____no_output_____" ], [ "dataset[['Nome', 'Ano', 'Quilometragem', 'Valor']][1:3]", "_____no_output_____" ], [ "import pandas as pd\n\ndados = {\n    'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8', 'Motor 2.0', 'Motor 1.6'],\n    'Ano': [2019, 2003, 1991, 2019, 1990],\n    'Quilometragem': [0.0, 5712.0, 37123.0, 0.0, 120000.0],\n    'Zero_km': [True, False, False, True, False],\n    'Valor': [88000.0, 106000.0, 72000.0, 89000.0, 32000.0]\n}\n\ndataset = pd.DataFrame(dados, index = ['Jetta', 'Passat', 'Crossfox', 'DS5', 'Fusca'])", "_____no_output_____" ], [ "dataset.loc[['Passat', 'DS5'], ['Motor', 'Valor']]", "_____no_output_____" ], [ "dataset.iloc[[1,3], [0,-1]]", "_____no_output_____" ] ], [ [ "# 5.3 Queries with DataFrames", "_____no_output_____" ] ], [ [ "dataset.head()", "_____no_output_____" ], [ "dataset.Motor", "_____no_output_____" ], [ "select = dataset.Motor == 'Motor Diesel'", "_____no_output_____" ], [ "type(select)", "_____no_output_____" ], [ "dataset[select]", "_____no_output_____" ], [ "dataset[(select) & (dataset.Zero_km == True)]", "_____no_output_____" ], [ "(select) & (dataset.Zero_km == True)", "_____no_output_____" ] ], [ [ "### Using the query method
query", "_____no_output_____" ] ], [ [ "dataset.query('Motor == \"Motor Diesel\" and Zero_km == True')", "_____no_output_____" ], [ "import pandas as pd\n\ndados = {\n 'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8', 'Motor Diesel', 'Motor 1.6'],\n 'Ano': [2019, 2003, 1991, 2019, 1990],\n 'Quilometragem': [0.0, 5712.0, 37123.0, 0.0, 120000.0],\n 'Zero_km': [True, False, False, True, False],\n 'Valor': [88000.0, 106000.0, 72000.0, 89000.0, 32000.0]\n}\n\ndataset = pd.DataFrame(dados, index = ['Jetta', 'Passat', 'Crossfox', 'DS5', 'Fusca'])", "_____no_output_____" ], [ "dataset.query('Motor == \"Motor Diesel\" or Zero_km == True')", "_____no_output_____" ], [ "dataset.query('Motor == \"Motor Diesel\" | Zero_km == True')", "_____no_output_____" ] ], [ [ "# 5.4 Iterando com DataFrames", "_____no_output_____" ] ], [ [ "dataset.head()", "_____no_output_____" ], [ "for index, row in dataset.iterrows():\n if (2019 - row['Ano'] != 0):\n dataset.loc[index, \"km_media\"] = row['Quilometragem'] / 2019 - row['Ano']\n else:\n dataset.loc[index, \"km_media\"] = 0\ndataset", "_____no_output_____" ] ], [ [ "# 5.5 Tratamento de dados", "_____no_output_____" ] ], [ [ "dataset.head()", "_____no_output_____" ], [ "dataset.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 258 entries, Jetta Variant to Macan\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Motor 258 non-null object \n 1 Ano 258 non-null int64 \n 2 Quilometragem 197 non-null float64\n 3 Zero_km 258 non-null bool \n 4 Acessórios 258 non-null object \n 5 Valor 258 non-null float64\n 6 km_media 258 non-null float64\ndtypes: bool(1), float64(3), int64(1), object(2)\nmemory usage: 24.4+ KB\n" ], [ "dataset.Quilometragem.isna()", "_____no_output_____" ], [ "dataset[dataset.Quilometragem.isna()]", "_____no_output_____" ], [ "dataset.fillna(0, inplace = True)\ndataset", "_____no_output_____" ], [ "dataset.query('Zero_km == True')", "_____no_output_____" ], [ "dataset = pd.read_csv('db.csv', sep=';')\ndataset.dropna(subset = ['Quilometragem'], inplace = True)", "_____no_output_____" ], [ "dataset", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4947a6a089f993dbfb9c02b71133e9f92de02f
27,332
ipynb
Jupyter Notebook
content/03_regression/04_polynomial_regression/colab.ipynb
skykykykykykykykykykykys/applied-machine-learning-intensive
7b988359ff627638d11c4d90c8005dfeda41c348
[ "Apache-2.0" ]
126
2020-09-10T21:58:56.000Z
2022-03-23T14:21:09.000Z
content/03_regression/04_polynomial_regression/colab.ipynb
skykykykykykykykykykykys/applied-machine-learning-intensive
7b988359ff627638d11c4d90c8005dfeda41c348
[ "Apache-2.0" ]
1
2021-04-06T06:32:00.000Z
2021-04-06T06:35:05.000Z
content/03_regression/04_polynomial_regression/colab.ipynb
skykykykykykykykykykykys/applied-machine-learning-intensive
7b988359ff627638d11c4d90c8005dfeda41c348
[ "Apache-2.0" ]
58
2020-09-10T15:23:13.000Z
2022-01-20T02:24:05.000Z
27.168986
334
0.583492
[ [ [ "<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/03_regression/04_polynomial_regression/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "#### Copyright 2020 Google LLC.", "_____no_output_____" ] ], [ [ "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Polynomial Regression and Overfitting", "_____no_output_____" ], [ "So far in this course, we have dealt exclusively with linear models. These have all been \"straight-line\" models where we attempt to draw a straight line that fits a regression.\n\nToday we will start building curved-lined models based on [polynomial equations](https://en.wikipedia.org/wiki/Polynomial).", "_____no_output_____" ], [ "## Generating Sample Data\n\nLet's start by generating some data based on a second degree polynomial.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nnum_items = 100\n\nnp.random.seed(seed=420)\nX = np.random.randn(num_items, 1)\n\n# These coefficients are chosen arbitrarily.\ny = 0.6*(X**2) - 0.4*X + 1.3\n\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "Let's add some randomness to create a more realistic dataset and re-plot the randomized data points and the fit line.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nnum_items = 100\n\nnp.random.seed(seed=420)\nX = np.random.randn(num_items, 1)\n\n# Create some randomness.\nrandomness = np.random.randn(num_items, 1) / 2\n\n# This is the same equation as the plot above, with added randomness.\ny = 0.6*(X**2) - 0.4*X + 1.3 + randomness\n\nX_line = np.linspace(X.min(), X.max(), num=num_items)\ny_line = 0.6*(X_line**2) - 0.4*X_line + 1.3\n\nplt.plot(X, y, 'b.')\nplt.plot(X_line, y_line, 'r-')\nplt.show()", "_____no_output_____" ] ], [ [ "That looks much better! Now we can see that a 2-degree polynomial function fits this data reasonably well.", "_____no_output_____" ], [ "## Polynomial Fitting\n\nWe can now see a pretty obvious 2-degree polynomial that fits the scatter plot.\n\nScikit-learn offers a `PolynomialFeatures` class that handles polynomial combinations for a linear model. In this case, we know that a 2-degree polynomial is a good fit since the data was generated from a polynomial curve. Let's see if the model works.\n\nWe begin by creating a `PolynomialFeatures` instance of degree 2.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import PolynomialFeatures\n\npf = PolynomialFeatures(degree=2)\nX_poly = pf.fit_transform(X)\n\nX.shape, X_poly.shape", "_____no_output_____" ] ], [ [ "You might be wondering what the `include_bias` parameter is. By default, it is `True`, in which case it forces the first exponent to be 0.\n\nThis adds a constant bias term to the equation. 
When we ask for no bias we start our exponents at 1 instead of 0.", "_____no_output_____" ], [ "This preprocessor generates a new feature matrix consisting of all polynomial combinations of the features. Notice that the input shape of `(100, 1)` becomes `(100, 2)` after transformation.\n\nIn this simple case, we doubled the number of features since we asked for a 2-degree polynomial and had one input feature. The number of generated features grows exponentially as the number of features and polynomial degrees increases.", "_____no_output_____" ], [ "## Model Fitting\n\nWe can now fit the model by passing our polynomial preprocessing data to the linear regressor.\n\nHow close did the intercept and coefficients match the values in the function we used to generate our data?", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(X_poly, y)\n\nlin_reg.intercept_, lin_reg.coef_", "_____no_output_____" ] ], [ [ "## Visualization\n\nWe can plot our fitted line against the equation we used to generate the data. The fitted line is green, and the actual curve is red.", "_____no_output_____" ] ], [ [ "np.random.seed(seed=420)\n\n# Create 100 even-spaced x-values.\nX_line_fitted = np.linspace(X.min(), X.max(), num=100)\n\n# Start our equation with the intercept.\ny_line_fitted = lin_reg.intercept_\n\n# For each exponent, raise the X value to that exponent and multiply it by the\n# appropriate coefficient\nfor i in range(len(pf.powers_)):\n  exponent = pf.powers_[i][0]\n  y_line_fitted = y_line_fitted + \\\n    lin_reg.coef_[0][i] * (X_line_fitted**exponent)\n\nplt.plot(X_line_fitted, y_line_fitted, 'g-')\nplt.plot(X_line, y_line, 'r-')\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "# Overfitting\n\nWhen using polynomial regression, it can be easy to *overfit* the data so that it performs well on the training data but doesn't perform well in the real world.\n\nTo understand overfitting we will create a fake dataset generated off of a linear equation, but we will use a polynomial regression as the model.", "_____no_output_____" ] ], [ [ "np.random.seed(seed=420)\n\n# Create 50 points from a linear dataset with randomness.\nnum_items = 50\nX = 6 * np.random.rand(num_items, 1)\ny = X + 2 + np.random.randn(num_items, 1)\n\nX_line = np.array([X.min(), X.max()])\ny_line = X_line + 2\n\nplt.plot(X_line, y_line, 'r-')\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "Let's now create a 10-degree polynomial to fit the linear data and fit the model.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\n\nnp.random.seed(seed=420)\n\npoly_features = PolynomialFeatures(degree=10, include_bias=False)\nX_poly = poly_features.fit_transform(X)\n\nregression = LinearRegression()\nregression.fit(X_poly, y)", "_____no_output_____" ] ], [ [ "## Visualization\n\nLet's draw the polynomial line that we fit to the data. To draw the line, we need to execute the 10-degree polynomial equation.\n\n$$\ny = k_0 + k_1x^1 + k_2x^2 + k_3x^3 + ... + k_9x^9 + k_{10}x^{10}\n$$\n\nCoding the above equation by hand is tedious and error-prone.
It also makes it difficult to change the degree of the polynomial we are fitting.\n\nLet's see if there is a way to write the code more dynamically, using the `PolynomialFeatures` and `LinearRegression` functions.", "_____no_output_____" ], [ "The `PolynomialFeatures` class provides us with a list of exponents that we can use for each portion of the polynomial equation.", "_____no_output_____" ] ], [ [ "poly_features.powers_", "_____no_output_____" ] ], [ [ "The `LinearRegression` class provides us with a list of coefficients that correspond to the powers provided by `PolynomialFeatures`.", "_____no_output_____" ] ], [ [ "regression.coef_", "_____no_output_____" ] ], [ [ "It also provides an intercept.", "_____no_output_____" ] ], [ [ "regression.intercept_", "_____no_output_____" ] ], [ [ "Having this information, we can take a set of $X$ values (in the code below we use 100), then run our equation on those values.", "_____no_output_____" ] ], [ [ "np.random.seed(seed=420)\n\n# Create 100 even-spaced x-values.\nX_line_fitted = np.linspace(X.min(), X.max(), num=100)\n\n# Start our equation with the intercept.\ny_line_fitted = regression.intercept_\n\n# For each exponent, raise the X value to that exponent and multiply it by the\n# appropriate coefficient\nfor i in range(len(poly_features.powers_)):\n exponent = poly_features.powers_[i][0]\n y_line_fitted = y_line_fitted + \\\n regression.coef_[0][i] * (X_line_fitted**exponent)", "_____no_output_____" ] ], [ [ "We can now plot the data points, the actual line used to generate them, and our fitted model.", "_____no_output_____" ] ], [ [ "plt.plot(X_line, y_line, 'r-')\nplt.plot(X_line_fitted, y_line_fitted, 'g-')\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "Notice how our line is very wavy, and it spikes up and down to pass through specific data points. (This is especially true for the lowest and highest $x$-values, where the curve passes through them exactly.) This is a sign of overfitting. The line fits the training data reasonably well, but it may not be as useful on new data.", "_____no_output_____" ], [ "## Using a Simpler Model\n\nThe most obvious way to prevent overfitting in this example is to simply reduce the degree of the polynomial.\n\nThe code below uses a 2-degree polynomial and seems to fit the data much better. A linear model would work well too.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\n\npoly_features = PolynomialFeatures(degree=2, include_bias=False)\nX_poly = poly_features.fit_transform(X)\n\nregression = LinearRegression()\nregression.fit(X_poly, y)\n\nX_line_fitted = np.linspace(X.min(), X.max(), num=100)\ny_line_fitted = regression.intercept_\nfor i in range(len(poly_features.powers_)):\n exponent = poly_features.powers_[i][0]\n y_line_fitted = y_line_fitted + \\\n regression.coef_[0][i] * (X_line_fitted**exponent)\n\nplt.plot(X_line, y_line, 'r-')\nplt.plot(X_line_fitted, y_line_fitted, 'g-')\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "## Lasso Regularization\n\nIt is not always so clear what the \"simpler\" model choice is. Often, you will have to rely on regularization methods. 
**Regularization** is a method that penalizes large coefficients, with the aim of shrinking unnecessary coefficients toward zero.\n\nLeast Absolute Shrinkage and Selection Operator (Lasso) regularization, also called L1 regularization, is a regularization method that adds the sum of the absolute values of the coefficients as a penalty in a cost function.\n\nIn scikit-learn, we can use the [Lasso](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) model, which performs a linear regression with an L1 regularization penalty.\n\nIn the resultant graph, you can see that the regression smooths out our polynomial curve quite a bit despite it being a 10-degree polynomial. Note that Lasso regression can make the impact of less important features completely disappear.", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import Lasso\n\npoly_features = PolynomialFeatures(degree=10, include_bias=False)\nX_poly = poly_features.fit_transform(X)\n\nlasso_reg = Lasso(alpha=5.0)\nlasso_reg.fit(X_poly, y)\n\nX_line_fitted = np.linspace(X.min(), X.max(), num=100)\ny_line_fitted = lasso_reg.intercept_\nfor i in range(len(poly_features.powers_)):\n  exponent = poly_features.powers_[i][0]\n  y_line_fitted = y_line_fitted + lasso_reg.coef_[i] * (X_line_fitted**exponent)\n\nplt.plot(X_line, y_line, 'r-')\nplt.plot(X_line_fitted, y_line_fitted, 'g-')\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "## Ridge Regularization\n\nSimilar to Lasso regularization, [Ridge](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) regularization adds a penalty to the cost function of a model. In the case of Ridge, also called L2 regularization, the penalty is the sum of squares of the coefficients.\n\nAgain, we can see that the regression smooths out the curve of our 10-degree polynomial.", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import Ridge\n\npoly_features = PolynomialFeatures(degree=10, include_bias=False)\nX_poly = poly_features.fit_transform(X)\n\nridge_reg = Ridge(alpha=0.5)\nridge_reg.fit(X_poly, y)\n\nX_line_fitted = np.linspace(X.min(), X.max(), num=100)\ny_line_fitted = ridge_reg.intercept_\nfor i in range(len(poly_features.powers_)):\n  exponent = poly_features.powers_[i][0]\n  y_line_fitted = y_line_fitted + ridge_reg.coef_[0][i] * (X_line_fitted**exponent)\n\nplt.plot(X_line, y_line, 'r-')\nplt.plot(X_line_fitted, y_line_fitted, 'g-')\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "## ElasticNet Regularization\n\nAnother common form of regularization is [ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html) regularization.
This regularization method combines the concepts of L1 and L2 regularization by applying a penalty containing both a squared value and an absolute value.", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import ElasticNet\n\npoly_features = PolynomialFeatures(degree=10, include_bias=False)\nX_poly = poly_features.fit_transform(X)\n\nelastic_reg = ElasticNet(alpha=2.0, l1_ratio=0.5)\nelastic_reg.fit(X_poly, y)\n\nX_line_fitted = np.linspace(X.min(), X.max(), num=100)\ny_line_fitted = elastic_reg.intercept_\nfor i in range(len(poly_features.powers_)):\n  exponent = poly_features.powers_[i][0]\n  y_line_fitted = y_line_fitted + \\\n    elastic_reg.coef_[i] * (X_line_fitted**exponent)\n\nplt.plot(X_line, y_line, 'r-')\nplt.plot(X_line_fitted, y_line_fitted, 'g-')\nplt.plot(X, y, 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "## Other Strategies\n\nAside from regularization, there are other strategies that can be used to prevent overfitting. These include:\n\n* [Early stopping](https://en.wikipedia.org/wiki/Early_stopping)\n* [Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))\n* [Ensemble methods](https://en.wikipedia.org/wiki/Ensemble_learning)\n* Simplifying your model\n* Removing features", "_____no_output_____" ], [ "# Exercises", "_____no_output_____" ], [ "For these exercises we will work with the [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) that comes with scikit-learn. The data contains the following features:\n\n1. age\n1. sex\n1. body mass index (bmi)\n1. average blood pressure (bp)\n\nIt also contains six measures of blood serum, `s1` through `s6`. The target is a numeric assessment of the progression of the disease over the course of a year.\n\nThe data has been standardized.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import load_diabetes\n\nimport numpy as np\nimport pandas as pd\n\ndata = load_diabetes()\ndf = pd.DataFrame(data.data, columns=data.feature_names)\ndf['progression'] = data.target\n\ndf.describe()", "_____no_output_____" ] ], [ [ "Let's plot how body mass index relates to blood pressure.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nplt.plot(df['bmi'], df['bp'], 'b.')\nplt.show()", "_____no_output_____" ] ], [ [ "## Exercise 1: Polynomial Regression ", "_____no_output_____" ], [ "Let's create a model to see if we can map body mass index to blood pressure.\n\n1. Create a 10-degree polynomial preprocessor for our regression\n1. Create a linear regression model\n1. Fit and transform the `bmi` values with the polynomial features preprocessor\n1. Fit the transformed data using the linear regression\n1. Plot the fitted line over a scatter plot of the data points", "_____no_output_____" ], [ "**Student Solution**", "_____no_output_____" ] ], [ [ "# Your code goes here", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "## Exercise 2: Regularization", "_____no_output_____" ], [ "Your model from exercise one likely looked like it overfit. Experiment with the Lasso, Ridge, and/or ElasticNet classes in place of the `LinearRegression`.
Adjust the parameters for whichever regularization class you use until you create a line that doesn't look to be under- or over-fitted.", "_____no_output_____" ], [ "**Student Solution**", "_____no_output_____" ] ], [ [ "# Your code goes here", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "## Exercise 3: Other Models", "_____no_output_____" ], [ "Experiment with the [BayesianRidge](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.BayesianRidge.html). Does its fit line look better or worse than your other models?", "_____no_output_____" ], [ "**Student Solution**", "_____no_output_____" ] ], [ [ "# Your code goes here.", "_____no_output_____" ] ], [ [ "Does your fit line look better or worse than your other models?", "_____no_output_____" ], [ "> *Your Answer Goes Here*", "_____no_output_____" ], [ "---", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
4a4947e0f1422a5b8ede84af0e7b35f1b54b895d
16,775
ipynb
Jupyter Notebook
Quiz/m4_multifactor_models/m4l2/portfolio_variance.ipynb
scumabo/AI4Trading
9a36e18fc25e849b80718c3a462637b086089945
[ "Apache-2.0" ]
98
2020-05-22T00:41:23.000Z
2022-03-24T12:57:15.000Z
Quiz/m4_multifactor_models/m4l2/portfolio_variance.ipynb
kevingoh/AI-for-Trading
9d8e85c0753e41fec6b55b5803cdfd34668d8f71
[ "Apache-2.0" ]
1
2020-01-04T05:32:35.000Z
2020-01-04T18:22:21.000Z
Quiz/m4_multifactor_models/m4l2/portfolio_variance.ipynb
kevingoh/AI-for-Trading
9d8e85c0753e41fec6b55b5803cdfd34668d8f71
[ "Apache-2.0" ]
74
2020-05-05T16:44:42.000Z
2022-03-23T06:59:09.000Z
28.432203
422
0.564948
[ [ [ "# Portfolio Variance", "_____no_output_____" ] ], [ [ "import sys\n!{sys.executable} -m pip install -r requirements.txt", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nimport time\nimport os\nimport quiz_helper\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "%matplotlib inline\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (14, 8)", "_____no_output_____" ] ], [ [ "### data bundle", "_____no_output_____" ] ], [ [ "import os\nimport quiz_helper\nfrom zipline.data import bundles", "_____no_output_____" ], [ "os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..','data','module_4_quizzes_eod')\ningest_func = bundles.csvdir.csvdir_equities(['daily'], quiz_helper.EOD_BUNDLE_NAME)\nbundles.register(quiz_helper.EOD_BUNDLE_NAME, ingest_func)\nprint('Data Registered')", "_____no_output_____" ] ], [ [ "### Build pipeline engine", "_____no_output_____" ] ], [ [ "from zipline.pipeline import Pipeline\nfrom zipline.pipeline.factors import AverageDollarVolume\nfrom zipline.utils.calendars import get_calendar\n\nuniverse = AverageDollarVolume(window_length=120).top(500) \ntrading_calendar = get_calendar('NYSE') \nbundle_data = bundles.load(quiz_helper.EOD_BUNDLE_NAME)\nengine = quiz_helper.build_pipeline_engine(bundle_data, trading_calendar)", "_____no_output_____" ] ], [ [ "### View Data¶\nWith the pipeline engine built, let's get the stocks at the end of the period in the universe we're using. We'll use these tickers to generate the returns data for the our risk model.", "_____no_output_____" ] ], [ [ "universe_end_date = pd.Timestamp('2016-01-05', tz='UTC')\n\nuniverse_tickers = engine\\\n .run_pipeline(\n Pipeline(screen=universe),\n universe_end_date,\n universe_end_date)\\\n .index.get_level_values(1)\\\n .values.tolist()\n \nuniverse_tickers", "_____no_output_____" ], [ "len(universe_tickers)", "_____no_output_____" ], [ "from zipline.data.data_portal import DataPortal\n\ndata_portal = DataPortal(\n bundle_data.asset_finder,\n trading_calendar=trading_calendar,\n first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day,\n equity_minute_reader=None,\n equity_daily_reader=bundle_data.equity_daily_bar_reader,\n adjustment_reader=bundle_data.adjustment_reader)", "_____no_output_____" ] ], [ [ "## Get pricing data helper function", "_____no_output_____" ] ], [ [ "from quiz_helper import get_pricing", "_____no_output_____" ] ], [ [ "## get pricing data into a dataframe", "_____no_output_____" ] ], [ [ "returns_df = \\\n get_pricing(\n data_portal,\n trading_calendar,\n universe_tickers,\n universe_end_date - pd.DateOffset(years=5),\n universe_end_date)\\\n .pct_change()[1:].fillna(0) #convert prices into returns\n\nreturns_df", "_____no_output_____" ] ], [ [ "## Let's look at a two stock portfolio\n\nLet's pretend we have a portfolio of two stocks. We'll pick Apple and Microsoft in this example.", "_____no_output_____" ] ], [ [ "aapl_col = returns_df.columns[3]\nmsft_col = returns_df.columns[312]\nasset_return_1 = returns_df[aapl_col].rename('asset_return_aapl')\nasset_return_2 = returns_df[msft_col].rename('asset_return_msft')\nasset_return_df = pd.concat([asset_return_1,asset_return_2],axis=1)\nasset_return_df.head(2)", "_____no_output_____" ] ], [ [ "## Factor returns\nLet's make up a \"factor\" by taking an average of all stocks in our list. You can think of this as an equal weighted index of the 490 stocks, kind of like a measure of the \"market\". 
We'll also make another factor by calculating the median of all the stocks. These are mainly intended to help us generate some data to work with. We'll go into how some common risk factors are generated later in the lessons.\n\nAlso note that we're setting axis=1 so that we calculate a value for each time period (row) instead of one value for each column (assets).", "_____no_output_____" ] ], [ [ "factor_return_1 = returns_df.mean(axis=1)\nfactor_return_2 = returns_df.median(axis=1)\nfactor_return_l = [factor_return_1, factor_return_2]", "_____no_output_____" ] ], [ [ "## Factor exposures\n\nFactor exposures refer to how \"exposed\" a stock is to each factor. We'll get into this more later. For now, just think of this as one number for each stock, for each of the factors.", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression", "_____no_output_____" ], [ "\"\"\"\nFor now, just assume that we're calculating a number for each \nstock, for each factor, which represents how \"exposed\" each stock is\nto each factor. \nWe'll discuss how factor exposure is calculated later in the lessons.\n\"\"\"\ndef get_factor_exposures(factor_return_l, asset_return):\n lr = LinearRegression()\n X = np.array(factor_return_l).T\n y = np.array(asset_return.values)\n lr.fit(X,y)\n return lr.coef_", "_____no_output_____" ], [ "factor_exposure_l = []\nfor i in range(len(asset_return_df.columns)):\n factor_exposure_l.append(\n get_factor_exposures(factor_return_l,\n asset_return_df[asset_return_df.columns[i]]\n ))\n \nfactor_exposure_a = np.array(factor_exposure_l)", "_____no_output_____" ], [ "print(f\"factor_exposures for asset 1 {factor_exposure_a[0]}\")\nprint(f\"factor_exposures for asset 2 {factor_exposure_a[1]}\")", "_____no_output_____" ] ], [ [ "## Variance of stock 1\n\nCalculate the variance of stock 1. \n$\\textrm{Var}(r_{1}) = \\beta_{1,1}^2 \\textrm{Var}(f_{1}) + \\beta_{1,2}^2 \\textrm{Var}(f_{2}) + 2\\beta_{1,1}\\beta_{1,2}\\textrm{Cov}(f_{1},f_{2}) + \\textrm{Var}(s_{1})$", "_____no_output_____" ] ], [ [ "factor_exposure_1_1 = factor_exposure_a[0][0]\nfactor_exposure_1_2 = factor_exposure_a[0][1]\ncommon_return_1 = factor_exposure_1_1 * factor_return_1 + factor_exposure_1_2 * factor_return_2\nspecific_return_1 = asset_return_1 - common_return_1", "_____no_output_____" ], [ "covm_f1_f2 = np.cov(factor_return_1,factor_return_2,ddof=1) #this calculates a covariance matrix\n# get the variance of each factor, and covariances from the covariance matrix covm_f1_f2\nvar_f1 = covm_f1_f2[0,0]\nvar_f2 = covm_f1_f2[1,1]\ncov_f1_f2 = covm_f1_f2[0][1]\n\n# calculate the specific variance. \nvar_s_1 = np.var(specific_return_1,ddof=1)\n\n# calculate the variance of asset 1 in terms of the factors and specific variance\nvar_asset_1 = (factor_exposure_1_1**2 * var_f1) + \\\n (factor_exposure_1_2**2 * var_f2) + \\\n 2 * (factor_exposure_1_1 * factor_exposure_1_2 * cov_f1_f2) + \\\n var_s_1\nprint(f\"variance of asset 1: {var_asset_1:.8f}\")", "_____no_output_____" ] ], [ [ "## Variance of stock 2\nCalculate the variance of stock 2. 
\n$\\textrm{Var}(r_{2}) = \\beta_{2,1}^2 \\textrm{Var}(f_{1}) + \\beta_{2,2}^2 \\textrm{Var}(f_{2}) + 2\\beta_{2,1}\\beta_{2,2}\\textrm{Cov}(f_{1},f_{2}) + \\textrm{Var}(s_{2})$", "_____no_output_____" ] ], [ [ "factor_exposure_2_1 = factor_exposure_a[1][0]\nfactor_exposure_2_2 = factor_exposure_a[1][1]\ncommon_return_2 = factor_exposure_2_1 * factor_return_1 + factor_exposure_2_2 * factor_return_2\nspecific_return_2 = asset_return_2 - common_return_2", "_____no_output_____" ], [ "# Notice we already calculated the variance and covariances of the factors\n\n# calculate the specific variance of asset 2\nvar_s_2 = np.var(specific_return_2,ddof=1)\n\n# calcualte the variance of asset 2 in terms of the factors and specific variance\nvar_asset_2 = (factor_exposure_2_1**2 * var_f1) + \\\n (factor_exposure_2_2**2 * var_f2) + \\\n (2 * factor_exposure_2_1 * factor_exposure_2_2 * cov_f1_f2) + \\\n var_s_2\n \nprint(f\"variance of asset 2: {var_asset_2:.8f}\")", "_____no_output_____" ] ], [ [ "## Covariance of stocks 1 and 2\nCalculate the covariance of stock 1 and 2. \n$\\textrm{Cov}(r_{1},r_{2}) = \\beta_{1,1}\\beta_{2,1}\\textrm{Var}(f_{1}) + \\beta_{1,1}\\beta_{2,2}\\textrm{Cov}(f_{1},f_{2}) + \\beta_{1,2}\\beta_{2,1}\\textrm{Cov}(f_{1},f_{2}) + \\beta_{1,2}\\beta_{2,2}\\textrm{Var}(f_{2})$", "_____no_output_____" ] ], [ [ "# TODO: calculate the covariance of assets 1 and 2 in terms of the factors\ncov_asset_1_2 = (factor_exposure_1_1 * factor_exposure_2_1 * var_f1) + \\\n (factor_exposure_1_1 * factor_exposure_2_2 * cov_f1_f2) + \\\n (factor_exposure_1_2 * factor_exposure_2_1 * cov_f1_f2) + \\\n (factor_exposure_1_2 * factor_exposure_2_2 * var_f2)\nprint(f\"covariance of assets 1 and 2: {cov_asset_1_2:.8f}\")", "_____no_output_____" ] ], [ [ "## Quiz 1: calculate portfolio variance\n\nWe'll choose stock weights for now (in a later lesson, you'll learn how to use portfolio optimization that uses alpha factors and a risk factor model to choose stock weights).\n\n$\\textrm{Var}(r_p) = x_{1}^{2} \\textrm{Var}(r_1) + x_{2}^{2} \\textrm{Var}(r_2) + 2x_{1}x_{2}\\textrm{Cov}(r_{1},r_{2})$ ", "_____no_output_____" ] ], [ [ "weight_1 = 0.60\nweight_2 = 0.40\n\n# TODO: calculate portfolio variance\nvar_portfolio = # ...\nprint(f\"variance of portfolio is {var_portfolio:.8f}\")", "_____no_output_____" ] ], [ [ "## Quiz 2: Do it with Matrices!\n\nCreate matrices $\\mathbf{F}$, $\\mathbf{B}$ and $\\mathbf{S}$, where \n$\\mathbf{F}= \\begin{pmatrix}\n\\textrm{Var}(f_1) & \\textrm{Cov}(f_1,f_2) \\\\ \n\\textrm{Cov}(f_2,f_1) & \\textrm{Var}(f_2) \n\\end{pmatrix}$\nis the covariance matrix of factors, \n\n$\\mathbf{B} = \\begin{pmatrix}\n\\beta_{1,1}, \\beta_{1,2}\\\\ \n\\beta_{2,1}, \\beta_{2,2}\n\\end{pmatrix}$ \nis the matrix of factor exposures, and \n\n$\\mathbf{S} = \\begin{pmatrix}\n\\textrm{Var}(s_i) & 0\\\\ \n0 & \\textrm{Var}(s_j)\n\\end{pmatrix}$\nis the matrix of specific variances. \n\n$\\mathbf{X} = \\begin{pmatrix}\nx_{1} \\\\\nx_{2}\n\\end{pmatrix}$\n\n### Concept Question\nWhat are the dimensions of the $\\textrm{Var}(r_p)$ portfolio variance? Given this, when choosing whether to multiply a row vector or a column vector on the left and right sides of the $\\mathbf{BFB}^T$, which choice helps you get the dimensions of the portfolio variance term?\n\nIn other words:\nGiven that $\\mathbf{X}$ is a column vector, which makes more sense?\n\n$\\mathbf{X}^T(\\mathbf{BFB}^T + \\mathbf{S})\\mathbf{X}$ ? \nor \n$\\mathbf{X}(\\mathbf{BFB}^T + \\mathbf{S})\\mathbf{X}^T$ ? 
", "_____no_output_____" ], [ "## Answer 2 here:", "_____no_output_____" ], [ "## Quiz 3: Calculate portfolio variance using matrices", "_____no_output_____" ] ], [ [ "# TODO: covariance matrix of factors\nF = # ...\nF", "_____no_output_____" ], [ "# TODO: matrix of factor exposures\nB = # ...\nB", "_____no_output_____" ], [ "# TODO: matrix of specific variances\nS = # ...\nS", "_____no_output_____" ] ], [ [ "#### Hint for column vectors\nTry using [reshape](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.reshape.html)", "_____no_output_____" ] ], [ [ "# TODO: make a column vector for stock weights matrix X\nX = # ...\nX", "_____no_output_____" ], [ "# TODO: covariance matrix of assets\nvar_portfolio = # ...\nprint(f\"portfolio variance is \\n{var_portfolio[0][0]:.8f}\")", "_____no_output_____" ] ], [ [ "## Solution\n[Solution notebook is here](portfolio_variance_solution.ipynb)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a49503a5d766d66c5e18a19b2d60dd6e25b25fb
28,924
ipynb
Jupyter Notebook
exercise_notebooks/unit_testing_exercise/unit_testing_data_engineering.ipynb
cbymar/testing-and-monitoring-ml-deployments
a199e84815c9eba7ab57379e4bebd16e45187926
[ "BSD-3-Clause" ]
null
null
null
exercise_notebooks/unit_testing_exercise/unit_testing_data_engineering.ipynb
cbymar/testing-and-monitoring-ml-deployments
a199e84815c9eba7ab57379e4bebd16e45187926
[ "BSD-3-Clause" ]
null
null
null
exercise_notebooks/unit_testing_exercise/unit_testing_data_engineering.ipynb
cbymar/testing-and-monitoring-ml-deployments
a199e84815c9eba7ab57379e4bebd16e45187926
[ "BSD-3-Clause" ]
null
null
null
33.710956
299
0.37564
[ [ [ "# Unit Testing ML Code: Hands-on Exercise (Data Engineering)\n\n## In this notebook we will explore unit tests for data engineering\n\n#### We will use a classic toy dataset: the Iris plants dataset, which comes included with scikit-learn\nDataset details: https://scikit-learn.org/stable/datasets/index.html#iris-plants-dataset\n\nAs we progress through the course, the complexity of examples will increase, but we will start with something basic. This notebook is designed so that it can be run in isolation, once the setup steps described below are complete.\n\n### Setup\n\nLet's begin by importing the dataset and the libraries we are going to use. Make sure you have run `pip install -r requirements.txt` on requirements file located in the same directory as this notebook. We recommend doing this in a separate virtual environment (see dedicated setup lecture).\n\nIf you need a refresher on jupyter, pandas or numpy, there are some links to resources in the section notes.", "_____no_output_____" ] ], [ [ "from sklearn import datasets\nimport pandas as pd\nimport numpy as np\n\n# Access the iris dataset from sklearn\niris = datasets.load_iris()\n\n# Load the iris data into a pandas dataframe. The `data` and `feature_names`\n# attributes of the dataset are added by default by sklearn. We use them to\n# specify the columns of our dataframes.\niris_frame = pd.DataFrame(iris.data, columns=iris.feature_names)\n\n# Create a \"target\" column in our dataframe, and set the values to the correct\n# classifications from the dataset.\niris_frame['target'] = iris.target", "_____no_output_____" ], [ "iris.feature_names", "_____no_output_____" ] ], [ [ "### Add the `SimplePipeline` from the Test Input Values notebook (same as previous lecture, no changes here)", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\n\n\nclass SimplePipeline:\n def __init__(self):\n self.frame = None\n # Shorthand to specify that each value should start out as\n # None when the class is instantiated.\n self.X_train, self.X_test, self.y_train, self.Y_test = None, None, None, None\n self.model = None\n self.load_dataset()\n \n def load_dataset(self):\n \"\"\"Load the dataset and perform train test split.\"\"\"\n # fetch from sklearn\n dataset = datasets.load_iris()\n \n # remove units ' (cm)' from variable names\n self.feature_names = [fn[:-5] for fn in dataset.feature_names]\n self.frame = pd.DataFrame(dataset.data, columns=self.feature_names)\n for col in self.frame.columns:\n self.frame[col] *= -1\n self.frame['target'] = dataset.target\n \n \n # we divide the data set using the train_test_split function from sklearn, \n # which takes as parameters, the dataframe with the predictor variables, \n # then the target, then the percentage of data to assign to the test set, \n # and finally the random_state to ensure reproducibility.\n self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(\n self.frame[self.feature_names], self.frame.target, test_size=0.65, random_state=42)\n \n def train(self, algorithm=LogisticRegression):\n \n # we set up a LogisticRegression classifier with default parameters\n self.model = algorithm(solver='lbfgs', multi_class='auto')\n self.model.fit(self.X_train, self.y_train)\n \n def predict(self, input_data):\n return self.model.predict(input_data)\n \n def get_accuracy(self):\n \n # use our X_test and y_test values generated when we used\n # `train_test_split` to test accuracy.\n # score is a 
method on the Logistic Regression that \n # returns the accuracy by default, but can be changed to other metrics, see: \n # https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.score\n return self.model.score(X=self.X_test, y=self.y_test)\n \n def run_pipeline(self):\n \"\"\"Helper method to run multiple pipeline methods with one call.\"\"\"\n self.load_dataset()\n self.train()", "_____no_output_____" ] ], [ [ "### Test Engineered Data (preprocessing)\n\nBelow we create an updated pipeline which inherits from the SimplePipeline but has new functionality to preprocess the data by applying a scaler. Linear models are sensitive to the scale of the features. For example, features with bigger magnitudes tend to dominate if we do not apply a scaler.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import StandardScaler\n\n\nclass PipelineWithDataEngineering(SimplePipeline):\n def __init__(self):\n # Call the inherited SimplePipeline __init__ method first.\n super().__init__()\n \n # scaler to standardize the variables in the dataset\n self.scaler = StandardScaler()\n # Train the scaler once upon pipeline instantiation:\n # Compute the mean and standard deviation based on the training data\n self.scaler.fit(self.X_train)\n \n def apply_scaler(self):\n # Scale the test and training data to be of mean 0 and of unit variance\n self.X_train = self.scaler.transform(self.X_train)\n self.X_test = self.scaler.transform(self.X_test)\n \n def predict(self, input_data):\n # apply scaler transform on inputs before predictions\n scaled_input_data = self.scaler.transform(input_data)\n return self.model.predict(scaled_input_data)\n \n def run_pipeline(self):\n \"\"\"Helper method to run multiple pipeline methods with one call.\"\"\"\n self.load_dataset()\n self.apply_scaler() # updated in this class\n self.train()", "_____no_output_____" ], [ "pipeline = PipelineWithDataEngineering()\npipeline.run_pipeline()\naccuracy_score = pipeline.get_accuracy()\nprint(f'current model accuracy is: {accuracy_score}')", "current model accuracy is: 0.9591836734693877\n" ] ], [ [ "### Now we Unit Test\nWe focus specifically on the feature engineering step.", "_____no_output_____" ] ], [ [ "pipeline.load_dataset()\n# pd.DataFrame(pipeline.X_train).stack().mean()", "_____no_output_____" ], [ "for col in pipeline.X_train.columns:\n pipeline.X_train[col] *= -1", "_____no_output_____" ], [ "pipeline.X_train", "_____no_output_____" ], [ "import unittest\n\n\nclass TestIrisDataEngineering(unittest.TestCase):\n\n def setUp(self):\n \"\"\"Call the first method of the tested class after instantiating\"\"\"\n self.pipeline = PipelineWithDataEngineering()\n self.pipeline.load_dataset()\n \n def test_scaler_preprocessing_brings_x_train_mean_near_zero(self):\n \"\"\"\"\"\"\n # Given\n # convert the dataframe to be a single column with pandas stack\n original_mean = self.pipeline.X_train.stack().mean()\n \n # When\n self.pipeline.apply_scaler()\n \n # Then\n # The idea behind StandardScaler is that it will transform your data \n # to center the distribution at 0 and scale the variance at 1.\n # Therefore we test that the mean has shifted to be less than the original\n # and close to 0 using assertAlmostEqual to check to 3 decimal places:\n # https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual\n self.assertTrue(original_mean > self.pipeline.X_train.mean()) # X_train is a numpy array at this point.\n
self.assertAlmostEqual(self.pipeline.X_train.mean(), 0.0, places=3)\n print(f'Original X train mean: {original_mean}')\n print(f'Transformed X train mean: {self.pipeline.X_train.mean()}')\n \n def test_scaler_preprocessing_brings_x_train_std_near_one(self):\n # When\n self.pipeline.apply_scaler()\n \n # Then\n # We also check that the standard deviation is close to 1\n self.assertAlmostEqual(self.pipeline.X_train.std(), 1.0, places=3)\n print(f'Transformed X train standard deviation : {self.pipeline.X_train.std()}')\n ", "_____no_output_____" ], [ "import sys\nsuite = unittest.TestLoader().loadTestsFromTestCase(TestIrisDataEngineering)\nunittest.TextTestRunner(verbosity=1, stream=sys.stderr).run(suite)", "F." ] ], [ [ "## Data Engineering Test: Hands-on Exercise\nChange the pipeline class preprocessing so that the test fails. Do you understand why the test is failing?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
4a4964cc117f90505690f37ce90a67f1d6c56767
14,218
ipynb
Jupyter Notebook
business_licenses.ipynb
eisenba/etl_project
a507a0c7f3252a6de99ede879fda4f7d6d3650e3
[ "CNRI-Python", "Info-ZIP" ]
null
null
null
business_licenses.ipynb
eisenba/etl_project
a507a0c7f3252a6de99ede879fda4f7d6d3650e3
[ "CNRI-Python", "Info-ZIP" ]
null
null
null
business_licenses.ipynb
eisenba/etl_project
a507a0c7f3252a6de99ede879fda4f7d6d3650e3
[ "CNRI-Python", "Info-ZIP" ]
null
null
null
33.454118
120
0.379378
[ [ [ "import pandas as pd\nfrom sqlalchemy import create_engine", "_____no_output_____" ] ], [ [ "### Store CSV into a Dataframe ", "_____no_output_____" ] ], [ [ "csv_file = \"./Resources/licenses_data.csv\"\nlicenses_data_df = pd.read_csv(csv_file)\nlicenses_data_df.head()", "_____no_output_____" ] ], [ [ "### Create new data with select columns and group ", "_____no_output_____" ] ], [ [ "new_licenses_data_df = licenses_data_df[['LEGAL NAME','ZIP CODE', 'LICENSE DESCRIPTION', 'DATE ISSUED']].copy()\nnew_licenses_data_df.head()", "_____no_output_____" ] ], [ [ "### Connect to local database ", "_____no_output_____" ] ], [ [ "rds_connection_string = \"<postgres:huffhuff@localhost:5432/business_licenses_db\"\nengine = create_engine(f'postgresql://{rds_connection_string}')", "_____no_output_____" ] ], [ [ "### Check for Tables ", "_____no_output_____" ], [ "### Use pandas to load csv converted dataframe into database ", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
4a49894b39dfc170b1130884c0e7f32a59ad319d
33,768
ipynb
Jupyter Notebook
FederatedCF011.ipynb
abdurahman02/AcademicContent
ab0925eeb80923c1f59e484b4c60e477f5d7e8bd
[ "MIT" ]
null
null
null
FederatedCF011.ipynb
abdurahman02/AcademicContent
ab0925eeb80923c1f59e484b4c60e477f5d7e8bd
[ "MIT" ]
null
null
null
FederatedCF011.ipynb
abdurahman02/AcademicContent
ab0925eeb80923c1f59e484b4c60e477f5d7e8bd
[ "MIT" ]
null
null
null
35.285266
509
0.446073
[ [ [ "<a href=\"https://colab.research.google.com/github/abdurahman02/AcademicContent/blob/master/FederatedCF011.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nfrom pathlib import Path\nimport pandas as pd\nimport numpy as np\nimport os\nimport copy\nimport time\nimport string", "_____no_output_____" ], [ "!git clone \"https://github.com/abdurahman02/ml-latest-small.git\"\nos.chdir(\"ml-latest-small\")\nos.listdir()", "Cloning into 'ml-latest-small'...\nremote: Enumerating objects: 7, done.\u001b[K\nremote: Counting objects: 100% (7/7), done.\u001b[K\nremote: Compressing objects: 100% (7/7), done.\u001b[K\nremote: Total 7 (delta 0), reused 0 (delta 0), pack-reused 0\u001b[K\nUnpacking objects: 100% (7/7), done.\n" ], [ "data = pd.read_csv(\"ratings.csv\")", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "def filtering_data(df,from_user, to_user, from_item, to_item):\n if(from_user <= to_user and from_item <= to_item\n and to_user < max(df[\"userId\"]) and to_item < max(df[\"movieId\"])\n ):\n return df[(df.userId >= from_user) & \n (df.userId <= to_user) &\n (df.movieId >= from_item) &\n (df.movieId <= to_item)\n ]\n print(\"Error Range\")\n\ndef getBatchForUser(data, u, batchSize):\n if u >= len(data[\"userId\"].unique()):\n print(\"INvalid UserId requested\")\n return\n if batchSize > len(data[data.userId == u]):\n batchSize = len(data[data.userId == u])\n return data[data.userId == u].sample(n=batchSize)", "_____no_output_____" ], [ "# split train and validation before encoding\n# np.random.seed(3)\n# msk = np.random.rand(len(data)) < 0.8\n# train = data[msk].copy()\n# val = data[~msk].copy()", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "# here is a handy function modified from fast.ai\ndef proc_col(col, train_col=None):\n \"\"\"Encodes a pandas column with continous ids. \n \"\"\"\n if train_col is not None:\n uniq = train_col.unique()\n else:\n uniq = col.unique()\n name2idx = {o:i for i,o in enumerate(uniq)}\n return name2idx, np.array([name2idx.get(x, -1) for x in col]), len(uniq)\n \ndef encode_data(df, train=None):\n \"\"\" Encodes rating data with continous user and movie ids. 
\n If train is provided, encodes df with the same encoding as train.\n \"\"\"\n df = df.copy()\n for col_name in [\"userId\", \"movieId\"]:\n train_col = None\n if train is not None:\n train_col = train[col_name]\n _,col,_ = proc_col(df[col_name], train_col)\n df[col_name] = col\n df = df[df[col_name] >= 0]\n return df", "_____no_output_____" ], [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F", "_____no_output_____" ], [ "# encoding the train and validation data\n# df_train = encode_data(train)\n# df_val = encode_data(val, train)", "_____no_output_____" ], [ "# df_val.movieId.values", "_____no_output_____" ], [ "# df_train_numpy = df_train.to_numpy(dtype=int, copy=True)\n# # print((df_train_numpy))\n# diction={}\n# for i in range(len(df_train_numpy)):\n# diction[df_train_numpy[i][0],df_train_numpy[i][1]] = df_train_numpy[i][2]", "_____no_output_____" ], [ "class MF(nn.Module):\n def __init__(self, userX_embedding, item_embed_mat, emb_size=100):\n super(MF, self).__init__()\n self.userX_embedding = userX_embedding\n self.item_embed_mat = item_embed_mat\n # print(userX_embedding.weight)\n \n def forward(self, u, v):\n u = self.userX_embedding(u)\n \n v = self.item_embed_mat(v)\n # print(\"u: \",u)\n # print(\"v: \",v)\n # print(len((u*v).sum(1)))\n return (u*v).sum(1)", "_____no_output_____" ], [ "# emb_size=5\n# items = torch.LongTensor(df_train.movieId.unique()) #.cuda()\n# embx_item = nn.Embedding(num_items, emb_size)\n# embx_user = nn.Embedding(1,emb_size)\n# embx_user.weight.data.uniform_(0, 0.05)\n# embx_item.weight.data.uniform_(0, 0.05)\n\n# model01 = MF(embx_user,embx_item,5)\n\n", "_____no_output_____" ], [ "# model01.userX_embedding.weight", "_____no_output_____" ], [ "# pred = model01(torch.tensor([0]), torch.tensor(items[0]))", "_____no_output_____" ], [ "# model01.item_embed_mat.weight[0]", "_____no_output_____" ], [ "# print(pred,torch.tensor([diction[0,0]]))", "_____no_output_____" ], [ "# optimizer = torch.optim.Adam(model01.parameters(), lr=0.01, weight_decay=0.0)\n# loss = F.mse_loss(pred,torch.FloatTensor([diction[0,0]]))\n# optimizer.zero_grad()\n# loss.backward()\n# optimizer.step()", "_____no_output_____" ], [ "# print(model01.item_embed_mat.weight[0])\n# print(model01.userX_embedding.weight)", "_____no_output_____" ], [ "# model = MF(num_users, num_items, emb_size=5) # .cuda() if you have a GPU", "_____no_output_____" ], [ "# print(len(df_train.movieId.unique()))\n# print(max(df_train.movieId.values))", "_____no_output_____" ], [ "def add_model_parameters(model1, model2):\n # Adds the parameters of model1 to model2\n\n params1 = model1.named_parameters()\n params2 = model2.named_parameters()\n\n dict_params2 = dict(params2)\n\n for name1, param1 in params1:\n if name1 in dict_params2 and name1 != 'userX_embedding.weight':\n dict_params2[name1].data.copy_(param1.data + dict_params2[name1].data)\n\n model2.load_state_dict(dict_params2)\n\ndef sub_model_parameters(model1, model2):\n # Subtracts the parameters of model2 with model1\n\n params1 = model1.named_parameters()\n params2 = model2.named_parameters()\n\n dict_params2 = dict(params2)\n\n for name1, param1 in params1:\n if name1 in dict_params2 and name1 != 'userX_embedding.weight':\n dict_params2[name1].data.copy_(dict_params2[name1].data - param1.data)\n\n model2.load_state_dict(dict_params2)\n\ndef divide_model_parameters(model, f):\n # Divides model parameters except for the user embeddings with f\n params1 = model.named_parameters()\n params2 = model.named_parameters()\n dict_params2 = 
dict(params2)\n for name1, param1 in params1:\n if name1 != 'userX_embedding.weight':\n dict_params2[name1].data.copy_(param1.data / f)\n model.load_state_dict(dict_params2)\n\ndef zero_model_parameters(model):\n # sets all parameters to zero\n\n params1 = model.named_parameters()\n params2 = model.named_parameters()\n dict_params2 = dict(params2)\n for name1, param1 in params1:\n if name1 in dict_params2:\n dict_params2[name1].data.copy_(param1.data - dict_params2[name1].data)\n\n model.load_state_dict(dict_params2)", "_____no_output_____" ], [ "class RMSELoss(nn.Module):\n def __init__(self, eps=1e-6):\n super().__init__()\n self.mse = nn.MSELoss()\n self.eps = eps\n \n def forward(self,yhat,y):\n loss = torch.sqrt(self.mse(yhat,y) + self.eps)\n return loss", "_____no_output_____" ], [ "def fed_train_client(model_server, df_train,epochs=10, lr=0.1):\n \n emb_size=5\n los=[]\n los_usr=[]\n dict_los={}\n user_emb_dict = {}\n item_emb_mat_dict = {}\n\n \n model_diff = copy.deepcopy(model_server)\n zero_model_parameters(model_diff)\n # model02(torch.tensor([0]), torch.tensor(items[0]))\n t1 = time.time()\n \n for user_id in range(len(df_train.userId.unique())):\n \n model02 = copy.deepcopy(model_server)\n optimizer = torch.optim.Adam(model02.parameters(), lr=lr, weight_decay=1e-5)\n batch = df_train[df_train.userId == user_id]\n batch = batch.to_numpy(dtype=int, copy=True)\n for e in range(epochs):\n for data_point in batch:\n # print(data_point)\n y_hat = model02(torch.tensor([0]), torch.tensor(data_point[1]))\n\n loss_fn = RMSELoss()\n loss = loss_fn(y_hat,torch.FloatTensor([data_point[2]]))\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n los.append(loss.item())\n \n los_usr.append(np.sqrt(np.sum([x**2 for x in los])/len(batch)))\n los.clear()\n # running_loss = running_loss/len(data_point)\n user_emb_dict[user_id] = copy.deepcopy(model02.userX_embedding.weight)\n item_emb_mat_dict[user_id] = copy.deepcopy(model02.item_embed_mat.weight)\n \n dict_los[user_id] = los_usr[len(los_usr)-1]\n print(\"userId:\", user_id, \"training_loss: \", dict_los[user_id])\n los_usr.clear()\n\n sub_model_parameters(model_server, model02)\n add_model_parameters(model02, model_diff)\n \n # Take the average of the MLP and item vectors\n divide_model_parameters(model_diff, (len(df_train.userId.unique())))\n\n # Update the global model by adding the total change\n add_model_parameters(model_diff, model_server)\n\n t2 = time.time()\n\n print(\"Time of round:\", round(t2 - t1), \"seconds\")\n return dict_los\n # test_loss(model, unsqueeze)", "_____no_output_____" ], [ "def fed_eval(model, df_val):\n los = []\n los_usr = []\n for user_id in range(len(df_val.userId.unique())):\n batch = df_val[df_val.userId == user_id]\n batch = batch.to_numpy(dtype=int, copy=True)\n for data_point in batch:\n y_hat = model(torch.tensor([0]), torch.tensor(data_point[1]))\n # loss = RMSELoss(y_hat,torch.FloatTensor([data_point[2]]))\n loss_fn = RMSELoss()\n loss = loss_fn(y_hat,torch.FloatTensor([data_point[2]]))\n los.append(loss.item())\n los_usr.append(np.sqrt(np.sum([x**2 for x in los])/len(batch)))\n los.clear()\n return np.mean(los_usr),los_usr\n", "_____no_output_____" ], [ "def Server(from_user, to_user, from_item, to_item, epochs, emb_size, rounds, lr):\n\n \n Max_BatchSize_User = 20\n lr = lr\n eta = 80\n \n print(\"embedding size is: \",emb_size)\n print(\"Max Batch Size is: \",Max_BatchSize_User)\n\n avg_train_loss = []\n dict_loss_train={}\n avg_test_loss_vec = []\n \n filtered_data = filtering_data(data, 
from_user, to_user, from_item, to_item)\n np.random.seed(3)\n msk = np.random.rand(len(filtered_data)) < 0.8\n train = filtered_data[msk].copy()\n val = filtered_data[~msk].copy()\n df_train = encode_data(train)\n df_val = encode_data(val, train)\n\n \n embx_item = nn.Embedding(len(df_train.movieId.unique()), emb_size)\n embx_user = nn.Embedding(1,emb_size)\n if torch.cuda.is_available():\n embx_user.weight.data.uniform_(0, 0.05).cuda()\n embx_item.weight.data.uniform_(0, 0.05).cuda()\n else:\n embx_user.weight.data.uniform_(0, 0.05)\n embx_item.weight.data.uniform_(0, 0.05)\n model_server = MF(embx_user,embx_item,emb_size)\n\n for t in range(rounds): # for each round\n\n print(\"Starting round\", t + 1)\n # train one round\n \n dict_loss_train = fed_train_client(model_server, df_train, epochs=epochs, lr=lr)\n\n avg_train_loss.append(np.mean([dict_loss_train[x] for x in dict_loss_train]))\n\n print(\"Evaluating model...\")\n avg_test_loss, test_loss_vec_userX = fed_eval(model_server, df_val)\n avg_test_loss_vec.append(avg_test_loss)\n print(\"Round \", t, \" computed test loss:\", avg_test_loss)\n return avg_test_loss_vec, test_loss_vec_userX, avg_train_loss\n ", "_____no_output_____" ], [ "from_user = 1\nto_user = 300\nfrom_item = 1\nto_item = 10000\nepochs=10\nemb_size=100\nrounds=100\nlr=0.1\navg_test_loss_vec, \\\ntest_loss_vec_userX,\\\navg_train_loss = Server(from_user, to_user, \n from_item, to_item, \n epochs, emb_size, rounds, lr)", "embedding size is: 100\nMax Batch Size is: 20\nStarting round 1\nuserId: 0 training_loss: 29.95471935048151\nuserId: 1 training_loss: 1.1629414046347475\nuserId: 2 training_loss: 30.468924103925943\nuserId: 3 training_loss: 26.605604991537337\nuserId: 4 training_loss: 32.7427961520338\nuserId: 5 training_loss: 32.11221345887219\nuserId: 6 training_loss: 25.874694461023072\nuserId: 7 training_loss: 33.347907647675825\nuserId: 8 training_loss: 10.32392958727665\nuserId: 9 training_loss: 17.051162543438288\nuserId: 10 training_loss: 20.41024979204911\nuserId: 11 training_loss: 10.986932484834952\nuserId: 12 training_loss: 27.869687808440627\nuserId: 13 training_loss: 29.7610577620545\nuserId: 14 training_loss: 24.717162821893947\nuserId: 15 training_loss: 16.887302954061052\nuserId: 16 training_loss: 19.285762453674305\nuserId: 17 training_loss: 29.319342518004945\nuserId: 18 training_loss: 39.28134819251093\nuserId: 19 training_loss: 32.15972881298207\nuserId: 20 training_loss: 28.645175733598627\nuserId: 21 training_loss: 19.886505691460183\nuserId: 22 training_loss: 22.391999240557052\nuserId: 23 training_loss: 35.882404188767026\nuserId: 24 training_loss: 1.812233111743227\nuserId: 25 training_loss: 8.148023275145915\nuserId: 26 training_loss: 18.276930885106488\n" ], [ "\n# from_user = 1\n# to_user = 3\n# from_item = 1\n# to_item = 10000\n# epochs=10\n# emb_size=20\n# rounds=100\nplt.xlabel('number of rounds')\nplt.ylabel('Average user test loss')\nprint(\"total users: \", abs(from_user-to_user))\nprint(\"total items: \", abs(from_item-to_item))\nprint(\"Embedding size: \",emb_size)\nprint(\"local epochs: \", epochs)\nplt.plot(np.arange(start=1, stop=len(avg_test_loss_vec)+1, step=1), avg_test_loss_vec, 'g--')\nplt.plot(np.arange(start=1, stop=len(avg_test_loss_vec)+1, step=1), avg_train_loss, 'b--')\nplt.legend([\"Test Loss\", \"Training loss\"])", "_____no_output_____" ], [ "class MF_bias(nn.Module):\n def __init__(self, num_users, num_items, emb_size=100):\n super(MF_bias, self).__init__()\n self.user_emb = nn.Embedding(num_users, emb_size)\n 
self.user_bias = nn.Embedding(num_users, 1)\n self.item_emb = nn.Embedding(num_items, emb_size)\n self.item_bias = nn.Embedding(num_items, 1)\n self.user_emb.weight.data.uniform_(0,0.05)\n self.item_emb.weight.data.uniform_(0,0.05)\n self.user_bias.weight.data.uniform_(-0.01,0.01)\n self.item_bias.weight.data.uniform_(-0.01,0.01)\n \n def forward(self, u, v):\n U = self.user_emb(u)\n V = self.item_emb(v)\n b_u = self.user_bias(u).squeeze()\n b_v = self.item_bias(v).squeeze()\n return (U*V).sum(1) + b_u + b_v", "_____no_output_____" ], [ "# model = MF_bias(num_users, num_items, emb_size=100) #.cuda()", "_____no_output_____" ], [ "# train_epocs(model, epochs=10, lr=0.05, wd=1e-5)", "_____no_output_____" ], [ "# train_epocs(model, epochs=10, lr=0.01, wd=1e-5)", "_____no_output_____" ], [ "# train_epocs(model, epochs=10, lr=0.001, wd=1e-5)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a498d1250913aa57fdbe86114e324b7acc16b8a
223,513
ipynb
Jupyter Notebook
colab_notebooks/prediction.ipynb
dylan-losey/me5824
fe7b2b1762ce89f04cda16e3bddf5aacdb3c9abb
[ "MIT" ]
null
null
null
colab_notebooks/prediction.ipynb
dylan-losey/me5824
fe7b2b1762ce89f04cda16e3bddf5aacdb3c9abb
[ "MIT" ]
null
null
null
colab_notebooks/prediction.ipynb
dylan-losey/me5824
fe7b2b1762ce89f04cda16e3bddf5aacdb3c9abb
[ "MIT" ]
null
null
null
1,011.371041
216,229
0.953381
[ [ [ "### Human Motion Prediction Example ###\n# state-of-the-art approaches use recusive encoders and decoders.\n# this is meant to be a gentle introduction, not the \"best\" approach\n\nimport matplotlib.pyplot as plt\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np", "_____no_output_____" ], [ "### Autoencoder Model ###\n\nclass Autoencoder(nn.Module):\n def __init__(self, history_dim, prediction_dim, latent_dim, hidden_dim):\n super(Autoencoder, self).__init__()\n\n # encoder architecture\n self.linear1 = nn.Linear(history_dim, hidden_dim)\n self.linear2 = nn.Linear(hidden_dim, hidden_dim)\n self.linear3 = nn.Linear(hidden_dim, latent_dim)\n\n # decoder architecture\n self.linear4 = nn.Linear(latent_dim, hidden_dim)\n self.linear5 = nn.Linear(hidden_dim, hidden_dim)\n self.linear6 = nn.Linear(hidden_dim, prediction_dim)\n\n # loss function: ||x - y||^2\n self.loss_fcn = nn.MSELoss()\n\n # encoder takes in history and outputs latent z\n def encoder(self, history):\n h1 = torch.tanh(self.linear1(history))\n h2 = torch.tanh(self.linear2(h1))\n return self.linear3(h2)\n\n # decoder takes in latent z and outputs prediction\n def decoder(self, z):\n h4 = torch.tanh(self.linear4(z))\n h5 = torch.tanh(self.linear5(h4))\n return self.linear6(h5)\n\n # compare prediction to actual future\n def forward(self, history, future):\n prediction = self.decoder(self.encoder(history))\n return self.loss_fcn(future, prediction)", "_____no_output_____" ], [ "### Generate the Training Data ###\n\n# make N sine waves\n# each sine wave is split in half:\n# the first half is the history, and the second half is the future\n# the amplitute and frequency are randomized\n\nN = 100\nstart_t = 0.0\ncurr_t = 3.0\nend_t = 6.0\nhistory_timesteps = np.linspace(start_t, curr_t, 30)\nfuture_timesteps = np.linspace(curr_t, end_t, 30)\ndataset = []\nfor _ in range(N):\n amp = np.random.uniform(0.2, 1.0)\n freq = 2*np.pi*np.random.uniform(0.1, 1.0)\n history = amp*np.sin(freq*history_timesteps)\n future = amp*np.sin(freq*future_timesteps)\n dataset.append((torch.FloatTensor(history), torch.FloatTensor(future)))\n plt.plot(history_timesteps, history)\n plt.plot(future_timesteps, future)\nplt.show()", "_____no_output_____" ], [ "### Train the BC Model ###\n\n# arguments: history_dim, prediction_dim, latent_dim, hidden_dim\nmodel = Autoencoder(30, 30, 10, 32)\n\n# hyperparameters for training\nEPOCH = 2001\nBATCH_SIZE_TRAIN = 100\nLR = 0.001\n\n# training loop\noptimizer = optim.Adam(model.parameters(), lr=LR)\nfor epoch in range(EPOCH):\n optimizer.zero_grad()\n loss = 0\n batch = np.random.choice(len(dataset), size=BATCH_SIZE_TRAIN, replace=False)\n for index in batch:\n item = dataset[index]\n loss += model(item[0], item[1])\n loss.backward()\n optimizer.step()\n if epoch % 100 == 0:\n print(epoch, loss.item())", "0 21.248249053955078\n100 10.632434844970703\n200 2.2612874507904053\n300 0.6548289060592651\n400 0.28719472885131836\n500 0.15014848113059998\n600 0.09450452029705048\n700 0.06726648658514023\n800 0.05145340412855148\n900 0.04233790561556816\n1000 0.03849542513489723\n1100 0.03055742010474205\n1200 0.02626430243253708\n1300 0.02339114621281624\n1400 0.02103869989514351\n1500 0.020270006731152534\n" ], [ "### Predict the Motion ###\n\n# given this history\namp = np.random.uniform(0.2, 1.0)\nfreq = 2*np.pi*np.random.uniform(0.1, 1.0)\nhistory = amp*np.sin(freq*history_timesteps)\n\n# predict the future trajectory\nz = model.encoder(torch.FloatTensor(history))\nprediction = 
model.decoder(z).detach().numpy()\n\n# plot the history, prediction, and actual trajectory\nplt.plot(history_timesteps, history)\nplt.plot(future_timesteps, amp*np.sin(freq*future_timesteps), 'x--')\nplt.plot(future_timesteps, prediction, 'o-')\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a498e33eaedc205c61d620f3cee81110e7b2e62
2,723
ipynb
Jupyter Notebook
7. Random Forest.ipynb
annhadoop/100-Days-Of-ML-Code
109be1814af8ad77e0232df09ffd2a7b782aee3b
[ "MIT" ]
null
null
null
7. Random Forest.ipynb
annhadoop/100-Days-Of-ML-Code
109be1814af8ad77e0232df09ffd2a7b782aee3b
[ "MIT" ]
null
null
null
7. Random Forest.ipynb
annhadoop/100-Days-Of-ML-Code
109be1814af8ad77e0232df09ffd2a7b782aee3b
[ "MIT" ]
null
null
null
20.473684
118
0.538744
[ [ [ "<img src=\"Images/PU.png\" width=\"100%\">\n\n### Course Name : ML 501 Practical Machine Learning \n#### Notebook compiled by : Rajiv Kale, Consultant at Learning and Development \n** Important ! ** For internal circulation only", "_____no_output_____" ], [ "# Random Forest \n<img src=\"Images/Random_Forest.jpg\" width=\"100%\">", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestClassifier\nfrom sklearn import datasets\n", "_____no_output_____" ], [ "dataset = datasets.load_iris()", "_____no_output_____" ], [ "#from sklearn.cross_validation import train_test_split\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.3, random_state=0)", "_____no_output_____" ], [ "forest = RandomForestClassifier(criterion='gini',n_estimators=100)\nforest.fit(X_train, y_train)\nforest.score(X_train,y_train)\n", "_____no_output_____" ], [ "forest.score(X_test,y_test)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
4a4992512abe26bec019749ba544f40a8781eb19
7,853
ipynb
Jupyter Notebook
d2l-zh/Classify_Leaves/preprocess.ipynb
nekomiao123/learn_more
a05dfe56583e87e209f4ea8a700dac50b9fb155b
[ "MIT" ]
null
null
null
d2l-zh/Classify_Leaves/preprocess.ipynb
nekomiao123/learn_more
a05dfe56583e87e209f4ea8a700dac50b9fb155b
[ "MIT" ]
null
null
null
d2l-zh/Classify_Leaves/preprocess.ipynb
nekomiao123/learn_more
a05dfe56583e87e209f4ea8a700dac50b9fb155b
[ "MIT" ]
null
null
null
62.325397
1,720
0.666497
[ [ [ "%matplotlib inline\nimport os\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport cv2\nimport numpy as np\nfrom glob import glob\nimport seaborn as sns", "_____no_output_____" ], [ "def create_mask_for_plant(image):\n image_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n\n sensitivity = 35\n lower_hsv = np.array([60 - sensitivity, 100, 50])\n upper_hsv = np.array([60 + sensitivity, 255, 255])\n\n mask = cv2.inRange(image_hsv, lower_hsv, upper_hsv)\n kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11,11))\n mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)\n \n return mask\n\ndef segment_plant(image):\n mask = create_mask_for_plant(image)\n output = cv2.bitwise_and(image, image, mask = mask)\n return output\n\ndef sharpen_image(image):\n image_blurred = cv2.GaussianBlur(image, (0, 0), 3)\n image_sharp = cv2.addWeighted(image, 1.5, image_blurred, -0.5, 0)\n return image_sharp", "_____no_output_____" ], [ "folder_path = 'leaves_data/images'\nfolder = 'leaves_data'", "_____no_output_____" ], [ "for file in os.listdir(folder_path):\n image_path = os.path.join(folder_path, file)\n image_bgr = cv2.imread(image_path, cv2.IMREAD_COLOR)\n image_segmented = segment_plant(image_bgr)\n image_sharpen = sharpen_image(image_segmented)\n save_path = os.path.join(folder, 'newsegimage')\n if not os.path.isdir(save_path):\n os.mkdir(save_path)\n cv2.imwrite(os.path.join(save_path, file), image_segmented)\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a499355dda6ce17721b05cb291d1c2284bf82fd
139,747
ipynb
Jupyter Notebook
Data Analysis/Project - Weather Trend/Exploring Weather Trends Project.ipynb
ptyadana/probability-and-statistics-for-business-and-data-science
6c4d09c70e4c8546461eb7ebc401bb95a0827ef2
[ "MIT" ]
10
2021-01-14T15:14:03.000Z
2022-02-19T14:06:25.000Z
Data Analysis/Project - Weather Trend/Exploring Weather Trends Project.ipynb
ptyadana/probability-and-statistics-for-business-and-data-science
6c4d09c70e4c8546461eb7ebc401bb95a0827ef2
[ "MIT" ]
null
null
null
Data Analysis/Project - Weather Trend/Exploring Weather Trends Project.ipynb
ptyadana/probability-and-statistics-for-business-and-data-science
6c4d09c70e4c8546461eb7ebc401bb95a0827ef2
[ "MIT" ]
8
2021-03-24T13:00:02.000Z
2022-03-27T16:32:20.000Z
78.908526
28,676
0.771616
[ [ [ "# Exploring Weather Trends\n### by Phone Thiri Yadana\n\nIn this project, we will analyze Gobal vs Singapore weather data across 10 Years Moving Average.\n\n[<img src=\"./new24397338.png\"/>](https://www.vectorstock.com/royalty-free-vector/kawaii-world-and-thermometer-cartoon-vector-24397338)", "_____no_output_____" ], [ "-------------", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline", "_____no_output_____" ], [ "#load data\nglobal_df = pd.read_csv(\"Data/global_data.csv\")\ncity_df = pd.read_csv(\"Data/city_data.csv\")\ncity_list_df = pd.read_csv(\"Data/city_list.csv\")", "_____no_output_____" ] ], [ [ "## Check info, duplicate or missing data", "_____no_output_____" ] ], [ [ "global_df.head()", "_____no_output_____" ], [ "global_df.tail()", "_____no_output_____" ], [ "global_df.shape", "_____no_output_____" ], [ "sum(global_df.duplicated())", "_____no_output_____" ], [ "global_df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 266 entries, 0 to 265\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 year 266 non-null int64 \n 1 avg_temp 266 non-null float64\ndtypes: float64(1), int64(1)\nmemory usage: 4.3 KB\n" ], [ "city_df.head()", "_____no_output_____" ], [ "city_df.shape", "_____no_output_____" ], [ "city_df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 71311 entries, 0 to 71310\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 year 71311 non-null int64 \n 1 city 71311 non-null object \n 2 country 71311 non-null object \n 3 avg_temp 68764 non-null float64\ndtypes: float64(1), int64(1), object(2)\nmemory usage: 2.2+ MB\n" ], [ "sum(city_df.duplicated())", "_____no_output_____" ], [ "city_list_df.head()", "_____no_output_____" ], [ "city_list_df.shape", "_____no_output_____" ], [ "city_list_df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 345 entries, 0 to 344\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 city 345 non-null object\n 1 country 345 non-null object\ndtypes: object(2)\nmemory usage: 5.5+ KB\n" ], [ "sum(city_list_df.duplicated())", "_____no_output_____" ] ], [ [ "## Calculate Moving Average", "_____no_output_____" ], [ "### Global Temperature", "_____no_output_____" ] ], [ [ "#yearly plot\nplt.plot(global_df[\"year\"], global_df[\"avg_temp\"])", "_____no_output_____" ], [ "# 10 years Moving Avearge\nglobal_df[\"10 Years MA\"] = global_df[\"avg_temp\"].rolling(window=10).mean()", "_____no_output_____" ], [ "global_df.iloc[8:18, :]", "_____no_output_____" ], [ "#10 years Moving Average\nplt.plot(global_df[\"year\"], global_df[\"10 Years MA\"])", "_____no_output_____" ] ], [ [ "### Specific City Temperature (Singapore)", "_____no_output_____" ] ], [ [ "city_df.head()", "_____no_output_____" ], [ "singapore_df = city_df[city_df[\"country\"] == \"Singapore\"]", "_____no_output_____" ], [ "singapore_df.head()", "_____no_output_____" ], [ "singapore_df.tail()", "_____no_output_____" ], [ "#check which rows are missing values\nsingapore_df[singapore_df[\"avg_temp\"].isnull()]", "_____no_output_____" ] ], [ [ "As singapore data are missing from 1826 till 1862, so it won't make sense to compare temperature during those period.", "_____no_output_____" ] ], [ [ "singapore_df = singapore_df[singapore_df[\"year\"] >= 1863]", "_____no_output_____" ], [ "# to make sure, check again for null values\nsingapore_df.info()", 
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 151 entries, 60089 to 60239\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 year 151 non-null int64 \n 1 city 151 non-null object \n 2 country 151 non-null object \n 3 avg_temp 151 non-null float64\ndtypes: float64(1), int64(1), object(2)\nmemory usage: 5.9+ KB\n" ], [ "singapore_df.head()", "_____no_output_____" ], [ "# calculate 10 years moving average\nsingapore_df[\"10 Years MA\"] = singapore_df[\"avg_temp\"].rolling(window=10).mean()", "_____no_output_____" ], [ "singapore_df.iloc[8:18, :]", "_____no_output_____" ], [ "plt.plot(singapore_df[\"year\"], singapore_df[\"10 Years MA\"])", "_____no_output_____" ] ], [ [ "## Compare with Global Data (10 Years Moving Average)", "_____no_output_____" ] ], [ [ "years = global_df.query('year >= 1872 & year <= 2013')[[\"year\"]]\nglobal_ma = global_df.query('year >= 1872 & year <= 2013')[[\"10 Years MA\"]]\nsingapore_ma = singapore_df.query('year >= 1872 & year <= 2013')[\"10 Years MA\"]\n\nplt.figure(figsize=[10,5])\nplt.grid(True)\nplt.plot(years, global_ma, label = \"Global\")\nplt.plot(years,singapore_ma, label = \"Singapore\")\n\nplt.xlabel(\"Year\")\nplt.ylabel(\"Temperature (C)\")\nplt.title(\"Temperature in Singapore vs Global (10 Years Moving Average)\")\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "global_ma.describe()", "_____no_output_____" ], [ "singapore_ma.describe()", "_____no_output_____" ] ], [ [ "----------------------\n\n# Observations:", "_____no_output_____" ], [ "- As per the findings, we can see in the plot that both Global and Specific City (In this case: Singapore) temperature are rising over the years.\n\n- There are certain ups and downs before 1920 and since then Temperatures have been steadily increasing.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
4a499642a59c89c576fcdc647074d27577d8939c
59,737
ipynb
Jupyter Notebook
docs/tutorials/tfx/components.ipynb
majiang/tfx
3cf78e30a900059fe9271412ad8912682fdfc448
[ "Apache-2.0" ]
null
null
null
docs/tutorials/tfx/components.ipynb
majiang/tfx
3cf78e30a900059fe9271412ad8912682fdfc448
[ "Apache-2.0" ]
1
2021-02-24T00:55:55.000Z
2021-02-24T01:16:36.000Z
docs/tutorials/tfx/components.ipynb
isabella232/tfx
43c5d4bdb859ea3a6bf5136f3aa6ffd09cca401f
[ "Apache-2.0" ]
null
null
null
38.195013
538
0.570266
[ [ [ "##### Copyright &copy; 2019 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# TFX Estimator Component Tutorial\n\n***A Component-by-Component Introduction to TensorFlow Extended (TFX)***", "_____no_output_____" ], [ "Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click \"Run in Google Colab\".\n\n<div class=\"devsite-table-wrapper\"><table class=\"tfo-notebook-buttons\" align=\"left\">\n<td><a target=\"_blank\" href=\"https://www.tensorflow.org/tfx/tutorials/tfx/components\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a></td>\n<td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/components.ipynb\">\n<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n<td><a target=\"_blank\" href=\"https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/components.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n</table></div>", "_____no_output_____" ], [ "\nThis Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).\n\nIt covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.\n\nWhen you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.\n\nNote: This notebook and its associated APIs are **experimental** and are\nin active development. Major changes in functionality, behavior, and\npresentation are expected.", "_____no_output_____" ], [ "## Background\nThis notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.\n\nWorking in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.\n\n### Orchestration\n\nIn a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.\n\n### Metadata\n\nIn a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as on your filesystem. 
In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the `/tmp` directory on the Jupyter notebook or Colab server.", "_____no_output_____" ], [ "## Setup\nFirst, we install and import the necessary packages, set up paths, and download data.", "_____no_output_____" ], [ "### Upgrade Pip\n\nTo avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.", "_____no_output_____" ] ], [ [ "try:\n import colab\n !pip install --upgrade pip\nexcept:\n pass", "_____no_output_____" ] ], [ [ "### Install TFX\n\n**Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).**", "_____no_output_____" ] ], [ [ "!pip install -q -U --use-feature=2020-resolver tfx", "_____no_output_____" ] ], [ [ "## Did you restart the runtime?\n\nIf you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.", "_____no_output_____" ], [ "### Import packages\nWe import necessary packages, including standard TFX component classes.", "_____no_output_____" ] ], [ [ "import os\nimport pprint\nimport tempfile\nimport urllib\n\nimport absl\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\ntf.get_logger().propagate = False\npp = pprint.PrettyPrinter()\n\nimport tfx\nfrom tfx.components import CsvExampleGen\nfrom tfx.components import Evaluator\nfrom tfx.components import ExampleValidator\nfrom tfx.components import Pusher\nfrom tfx.components import ResolverNode\nfrom tfx.components import SchemaGen\nfrom tfx.components import StatisticsGen\nfrom tfx.components import Trainer\nfrom tfx.components import Transform\nfrom tfx.dsl.experimental import latest_blessed_model_resolver\nfrom tfx.orchestration import metadata\nfrom tfx.orchestration import pipeline\nfrom tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext\nfrom tfx.proto import pusher_pb2\nfrom tfx.proto import trainer_pb2\nfrom tfx.proto.evaluator_pb2 import SingleSlicingSpec\nfrom tfx.utils.dsl_utils import external_input\nfrom tfx.types import Channel\nfrom tfx.types.standard_artifacts import Model\nfrom tfx.types.standard_artifacts import ModelBlessing\n\n%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip", "_____no_output_____" ] ], [ [ "Let's check the library versions.", "_____no_output_____" ] ], [ [ "print('TensorFlow version: {}'.format(tf.__version__))\nprint('TFX version: {}'.format(tfx.__version__))", "_____no_output_____" ] ], [ [ "### Set up pipeline paths", "_____no_output_____" ] ], [ [ "# This is the root directory for your TFX pip package installation.\n_tfx_root = tfx.__path__[0]\n\n# This is the directory containing the TFX Chicago Taxi Pipeline example.\n_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')\n\n# This is the path where your model will be pushed for serving.\n_serving_model_dir = os.path.join(\n tempfile.mkdtemp(), 'serving_model/taxi_simple')\n\n# Set up logging.\nabsl.logging.set_verbosity(absl.logging.INFO)", "_____no_output_____" ] ], [ [ "### Download example data\nWe download the example dataset for use in our TFX pipeline.\n\nThe dataset we're using is the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. 
The columns in this dataset are:\n\n<table>\n<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>\n<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>\n<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>\n<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>\n<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>\n<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>\n</table>\n\nWith this dataset, we will build a model that predicts the `tips` of a trip.", "_____no_output_____" ] ], [ [ "_data_root = tempfile.mkdtemp(prefix='tfx-data')\nDATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'\n_data_filepath = os.path.join(_data_root, \"data.csv\")\nurllib.request.urlretrieve(DATA_PATH, _data_filepath)", "_____no_output_____" ] ], [ [ "Take a quick look at the CSV file.", "_____no_output_____" ] ], [ [ "!head {_data_filepath}", "_____no_output_____" ] ], [ [ "*Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.*", "_____no_output_____" ], [ "### Create the InteractiveContext\nLast, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.", "_____no_output_____" ] ], [ [ "# Here, we create an InteractiveContext using default parameters. This will\n# use a temporary directory with an ephemeral ML Metadata database instance.\n# To use your own pipeline root or database, the optional properties\n# `pipeline_root` and `metadata_connection_config` may be passed to\n# InteractiveContext. Calls to InteractiveContext are no-ops outside of the\n# notebook.\ncontext = InteractiveContext()", "_____no_output_____" ] ], [ [ "## Run TFX components interactively\nIn the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.", "_____no_output_____" ], [ "### ExampleGen\n\nThe `ExampleGen` component is usually at the start of a TFX pipeline. It will:\n\n1. Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval)\n2. Convert data into the `tf.Example` format\n3. Copy data into the `_tfx_root` directory for other components to access\n\n`ExampleGen` takes as input the path to your data source. In our case, this is the `_data_root` path that contains the downloaded CSV.\n\nNote: In this notebook, we can instantiate components one-by-one and run them with `InteractiveContext.run()`. By contrast, in a production setting, we would specify all the components upfront in a `Pipeline` to pass to the orchestrator (see the \"Export to Pipeline\" section).", "_____no_output_____" ] ], [ [ "example_gen = CsvExampleGen(input=external_input(_data_root))\ncontext.run(example_gen)", "_____no_output_____" ] ], [ [ "Let's examine the output artifacts of `ExampleGen`. 
This component produces two artifacts, training examples and evaluation examples:", "_____no_output_____" ] ], [ [ "artifact = example_gen.outputs['examples'].get()[0]\nprint(artifact.split_names, artifact.uri)", "_____no_output_____" ] ], [ [ "We can also take a look at the first three training examples:", "_____no_output_____" ] ], [ [ "# Get the URI of the output artifact representing the training examples, which is a directory\ntrain_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')\n\n# Get the list of files in this directory (all compressed TFRecord files)\ntfrecord_filenames = [os.path.join(train_uri, name)\n for name in os.listdir(train_uri)]\n\n# Create a `TFRecordDataset` to read these files\ndataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n\n# Iterate over the first 3 records and decode them.\nfor tfrecord in dataset.take(3):\n serialized_example = tfrecord.numpy()\n example = tf.train.Example()\n example.ParseFromString(serialized_example)\n pp.pprint(example)", "_____no_output_____" ] ], [ [ "Now that `ExampleGen` has finished ingesting the data, the next step is data analysis.", "_____no_output_____" ], [ "### StatisticsGen\nThe `StatisticsGen` component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n\n`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen`.", "_____no_output_____" ] ], [ [ "statistics_gen = StatisticsGen(\n examples=example_gen.outputs['examples'])\ncontext.run(statistics_gen)", "_____no_output_____" ] ], [ [ "After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!", "_____no_output_____" ] ], [ [ "context.show(statistics_gen.outputs['statistics'])", "_____no_output_____" ] ], [ [ "### SchemaGen\n\nThe `SchemaGen` component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n\n`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.", "_____no_output_____" ] ], [ [ "schema_gen = SchemaGen(\n statistics=statistics_gen.outputs['statistics'],\n infer_feature_shape=False)\ncontext.run(schema_gen)", "_____no_output_____" ] ], [ [ "After `SchemaGen` finishes running, we can visualize the generated schema as a table.", "_____no_output_____" ] ], [ [ "context.show(schema_gen.outputs['schema'])", "_____no_output_____" ] ], [ [ "Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.\n\nTo learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen).", "_____no_output_____" ], [ "### ExampleValidator\nThe `ExampleValidator` component detects anomalies in your data, based on the expectations defined by the schema. 
It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n\n`ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`.", "_____no_output_____" ] ], [ [ "example_validator = ExampleValidator(\n    statistics=statistics_gen.outputs['statistics'],\n    schema=schema_gen.outputs['schema'])\ncontext.run(example_validator)", "_____no_output_____" ] ], [ [ "After `ExampleValidator` finishes running, we can visualize the anomalies as a table.", "_____no_output_____" ] ], [ [ "context.show(example_validator.outputs['anomalies'])", "_____no_output_____" ] ], [ [ "In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this is the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.", "_____no_output_____" ], [ "### Transform\nThe `Transform` component performs feature engineering for both training and serving. It uses the [TensorFlow Transform](https://www.tensorflow.org/tfx/transform/get_started) library.\n\n`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.\n\nLet's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)). First, we define a few constants for feature engineering:\n\nNote: The `%%writefile` cell magic will save the contents of the cell as a `.py` file on disk. 
This allows the `Transform` component to load your code as a module.\n\n", "_____no_output_____" ] ], [ [ "_taxi_constants_module_file = 'taxi_constants.py'", "_____no_output_____" ], [ "%%writefile {_taxi_constants_module_file}\n\n# Categorical features are assumed to each have a maximum value in the dataset.\nMAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]\n\nCATEGORICAL_FEATURE_KEYS = [\n 'trip_start_hour', 'trip_start_day', 'trip_start_month',\n 'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',\n 'dropoff_community_area'\n]\n\nDENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']\n\n# Number of buckets used by tf.transform for encoding each feature.\nFEATURE_BUCKET_COUNT = 10\n\nBUCKET_FEATURE_KEYS = [\n 'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',\n 'dropoff_longitude'\n]\n\n# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform\nVOCAB_SIZE = 1000\n\n# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.\nOOV_SIZE = 10\n\nVOCAB_FEATURE_KEYS = [\n 'payment_type',\n 'company',\n]\n\n# Keys\nLABEL_KEY = 'tips'\nFARE_KEY = 'fare'\n\ndef transformed_name(key):\n return key + '_xf'", "_____no_output_____" ] ], [ [ "Next, we write a `preprocessing_fn` that takes in raw data as input, and returns transformed features that our model can train on:", "_____no_output_____" ] ], [ [ "_taxi_transform_module_file = 'taxi_transform.py'", "_____no_output_____" ], [ "%%writefile {_taxi_transform_module_file}\n\nimport tensorflow as tf\nimport tensorflow_transform as tft\n\nimport taxi_constants\n\n_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS\n_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS\n_VOCAB_SIZE = taxi_constants.VOCAB_SIZE\n_OOV_SIZE = taxi_constants.OOV_SIZE\n_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT\n_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS\n_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS\n_FARE_KEY = taxi_constants.FARE_KEY\n_LABEL_KEY = taxi_constants.LABEL_KEY\n_transformed_name = taxi_constants.transformed_name\n\n\ndef preprocessing_fn(inputs):\n \"\"\"tf.transform's callback function for preprocessing inputs.\n Args:\n inputs: map from feature keys to raw not-yet-transformed features.\n Returns:\n Map from string feature key to transformed feature operations.\n \"\"\"\n outputs = {}\n for key in _DENSE_FLOAT_FEATURE_KEYS:\n # Preserve this feature as a dense float, setting nan's to the mean.\n outputs[_transformed_name(key)] = tft.scale_to_z_score(\n _fill_in_missing(inputs[key]))\n\n for key in _VOCAB_FEATURE_KEYS:\n # Build a vocabulary for this feature.\n outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(\n _fill_in_missing(inputs[key]),\n top_k=_VOCAB_SIZE,\n num_oov_buckets=_OOV_SIZE)\n\n for key in _BUCKET_FEATURE_KEYS:\n outputs[_transformed_name(key)] = tft.bucketize(\n _fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)\n\n for key in _CATEGORICAL_FEATURE_KEYS:\n outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])\n\n # Was this passenger a big tipper?\n taxi_fare = _fill_in_missing(inputs[_FARE_KEY])\n tips = _fill_in_missing(inputs[_LABEL_KEY])\n outputs[_transformed_name(_LABEL_KEY)] = tf.where(\n tf.math.is_nan(taxi_fare),\n tf.cast(tf.zeros_like(taxi_fare), tf.int64),\n # Test if the tip was > 20% of the fare.\n tf.cast(\n tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))\n\n return outputs\n\n\ndef _fill_in_missing(x):\n \"\"\"Replace missing 
values in a SparseTensor.\n  Fills in missing values of `x` with '' or 0, and converts to a dense tensor.\n  Args:\n    x: A `SparseTensor` of rank 2.  Its dense shape should have size at most 1\n      in the second dimension.\n  Returns:\n    A rank 1 tensor where missing values of `x` have been filled in.\n  \"\"\"\n  default_value = '' if x.dtype == tf.string else 0\n  return tf.squeeze(\n      tf.sparse.to_dense(\n          tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),\n          default_value),\n      axis=1)", "_____no_output_____" ] ], [ [ "Now, we pass in this feature engineering code to the `Transform` component and run it to transform your data.", "_____no_output_____" ] ], [ [ "transform = Transform(\n    examples=example_gen.outputs['examples'],\n    schema=schema_gen.outputs['schema'],\n    module_file=os.path.abspath(_taxi_transform_module_file))\ncontext.run(transform)", "_____no_output_____" ] ], [ [ "Let's examine the output artifacts of `Transform`. This component produces two types of outputs:\n\n* `transform_graph` is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).\n* `transformed_examples` represents the preprocessed training and evaluation data.", "_____no_output_____" ] ], [ [ "transform.outputs", "_____no_output_____" ] ], [ [ "Take a peek at the `transform_graph` artifact.  It points to a directory containing three subdirectories.", "_____no_output_____" ] ], [ [ "train_uri = transform.outputs['transform_graph'].get()[0].uri\nos.listdir(train_uri)", "_____no_output_____" ] ], [ [ "The `transformed_metadata` subdirectory contains the schema of the preprocessed data. The `transform_fn` subdirectory contains the actual preprocessing graph. The `metadata` subdirectory contains the schema of the original data.\n\nWe can also take a look at the first three transformed examples:", "_____no_output_____" ] ], [ [ "# Get the URI of the output artifact representing the transformed examples, which is a directory\ntrain_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'train')\n\n# Get the list of files in this directory (all compressed TFRecord files)\ntfrecord_filenames = [os.path.join(train_uri, name)\n                      for name in os.listdir(train_uri)]\n\n# Create a `TFRecordDataset` to read these files\ndataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n\n# Iterate over the first 3 records and decode them.\nfor tfrecord in dataset.take(3):\n  serialized_example = tfrecord.numpy()\n  example = tf.train.Example()\n  example.ParseFromString(serialized_example)\n  pp.pprint(example)", "_____no_output_____" ] ], [ [ "After the `Transform` component has transformed your data into features, the next step is to train a model.", "_____no_output_____" ], [ "### Trainer\nThe `Trainer` component will train a model that you define in TensorFlow (either using the Estimator API or the Keras API with [`model_to_estimator`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator)).\n\n`Trainer` takes as input the schema from `SchemaGen`, the transformed data and graph from `Transform`, training parameters, as well as a module that contains user-defined model code.\n\nLet's see an example of user-defined model code below (for an introduction to the TensorFlow Estimator APIs, [see the tutorial](https://www.tensorflow.org/tutorials/estimator/premade)):", "_____no_output_____" ] ], [ [ "_taxi_trainer_module_file = 'taxi_trainer.py'", "_____no_output_____" ], [ "%%writefile 
{_taxi_trainer_module_file}\n\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\nimport tensorflow_transform as tft\nfrom tensorflow_transform.tf_metadata import schema_utils\nfrom tfx_bsl.tfxio import dataset_options\n\nimport taxi_constants\n\n_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS\n_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS\n_VOCAB_SIZE = taxi_constants.VOCAB_SIZE\n_OOV_SIZE = taxi_constants.OOV_SIZE\n_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT\n_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS\n_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS\n_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES\n_LABEL_KEY = taxi_constants.LABEL_KEY\n_transformed_name = taxi_constants.transformed_name\n\n\ndef _transformed_names(keys):\n  return [_transformed_name(key) for key in keys]\n\n\n# Tf.Transform considers these features as \"raw\"\ndef _get_raw_feature_spec(schema):\n  return schema_utils.schema_as_feature_spec(schema).feature_spec\n\n\ndef _build_estimator(config, hidden_units=None, warm_start_from=None):\n  \"\"\"Build an estimator for predicting the tipping behavior of taxi riders.\n  Args:\n    config: tf.estimator.RunConfig defining the runtime environment for the\n      estimator (including model_dir).\n    hidden_units: [int], the layer sizes of the DNN (input layer first)\n    warm_start_from: Optional directory to warm start from.\n  Returns:\n    A dict of the following:\n      - estimator: The estimator that will be used for training and eval.\n      - train_spec: Spec for training.\n      - eval_spec: Spec for eval.\n      - eval_input_receiver_fn: Input function for eval.\n  \"\"\"\n  real_valued_columns = [\n      tf.feature_column.numeric_column(key, shape=())\n      for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)\n  ]\n  categorical_columns = [\n      tf.feature_column.categorical_column_with_identity(\n          key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)\n      for key in _transformed_names(_VOCAB_FEATURE_KEYS)\n  ]\n  categorical_columns += [\n      tf.feature_column.categorical_column_with_identity(\n          key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)\n      for key in _transformed_names(_BUCKET_FEATURE_KEYS)\n  ]\n  categorical_columns += [\n      tf.feature_column.categorical_column_with_identity(  # pylint: disable=g-complex-comprehension\n          key,\n          num_buckets=num_buckets,\n          default_value=0) for key, num_buckets in zip(\n              _transformed_names(_CATEGORICAL_FEATURE_KEYS),\n              _MAX_CATEGORICAL_FEATURE_VALUES)\n  ]\n  return tf.estimator.DNNLinearCombinedClassifier(\n      config=config,\n      linear_feature_columns=categorical_columns,\n      dnn_feature_columns=real_valued_columns,\n      dnn_hidden_units=hidden_units or [100, 70, 50, 25],\n      warm_start_from=warm_start_from)\n\n\ndef _example_serving_receiver_fn(tf_transform_graph, schema):\n  \"\"\"Build the serving inputs.\n  Args:\n    tf_transform_graph: A TFTransformOutput.\n    schema: the schema of the input data.\n  Returns:\n    Tensorflow graph which parses examples, applying tf-transform to them.\n  \"\"\"\n  raw_feature_spec = _get_raw_feature_spec(schema)\n  raw_feature_spec.pop(_LABEL_KEY)\n\n  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n      raw_feature_spec, default_batch_size=None)\n  serving_input_receiver = raw_input_fn()\n\n  transformed_features = tf_transform_graph.transform_raw_features(\n      serving_input_receiver.features)\n\n  return tf.estimator.export.ServingInputReceiver(\n      transformed_features, serving_input_receiver.receiver_tensors)\n\n\ndef 
_eval_input_receiver_fn(tf_transform_graph, schema):\n  \"\"\"Build everything needed for the tf-model-analysis to run the model.\n  Args:\n    tf_transform_graph: A TFTransformOutput.\n    schema: the schema of the input data.\n  Returns:\n    EvalInputReceiver function, which contains:\n      - Tensorflow graph which parses raw untransformed features, applies the\n        tf-transform preprocessing operators.\n      - Set of raw, untransformed features.\n      - Label against which predictions will be compared.\n  \"\"\"\n  # Notice that the inputs are raw features, not transformed features here.\n  raw_feature_spec = _get_raw_feature_spec(schema)\n\n  serialized_tf_example = tf.compat.v1.placeholder(\n      dtype=tf.string, shape=[None], name='input_example_tensor')\n\n  # Add a parse_example operator to the tensorflow graph, which will parse\n  # raw, untransformed, tf examples.\n  features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)\n\n  # Now that we have our raw examples, process them through the tf-transform\n  # function computed during the preprocessing step.\n  transformed_features = tf_transform_graph.transform_raw_features(\n      features)\n\n  # The key name MUST be 'examples'.\n  receiver_tensors = {'examples': serialized_tf_example}\n\n  # NOTE: Model is driven by transformed features (since training works on the\n  # materialized output of TFT, but slicing will happen on raw features).\n  features.update(transformed_features)\n\n  return tfma.export.EvalInputReceiver(\n      features=features,\n      receiver_tensors=receiver_tensors,\n      labels=transformed_features[_transformed_name(_LABEL_KEY)])\n\n\ndef _input_fn(file_pattern, data_accessor, tf_transform_output, batch_size=200):\n  \"\"\"Generates features and label for tuning/training.\n\n  Args:\n    file_pattern: List of paths or patterns of input tfrecord files.\n    data_accessor: DataAccessor for converting input to RecordBatch.\n    tf_transform_output: A TFTransformOutput.\n    batch_size: representing the number of consecutive elements of returned\n      dataset to combine in a single batch\n\n  Returns:\n    A dataset that contains (features, indices) tuple where features is a\n    dictionary of Tensors, and indices is a single Tensor of label indices.\n  \"\"\"\n  return data_accessor.tf_dataset_factory(\n      file_pattern,\n      dataset_options.TensorFlowDatasetOptions(\n          batch_size=batch_size, label_key=_transformed_name(_LABEL_KEY)),\n      tf_transform_output.transformed_metadata.schema)\n\n\n# TFX will call this function\ndef trainer_fn(trainer_fn_args, schema):\n  \"\"\"Build the estimator using the high level API.\n  Args:\n    trainer_fn_args: Holds args used to train the model as name/value pairs.\n    schema: Holds the schema of the training examples.\n  Returns:\n    A dict of the following:\n      - estimator: The estimator that will be used for training and eval.\n      - train_spec: Spec for training.\n      - eval_spec: Spec for eval.\n      - eval_input_receiver_fn: Input function for eval.\n  \"\"\"\n  # Number of nodes in the first layer of the DNN\n  first_dnn_layer_size = 100\n  num_dnn_layers = 4\n  dnn_decay_factor = 0.7\n\n  train_batch_size = 40\n  eval_batch_size = 40\n\n  tf_transform_graph = tft.TFTransformOutput(trainer_fn_args.transform_output)\n\n  train_input_fn = lambda: _input_fn(  # pylint: disable=g-long-lambda\n      trainer_fn_args.train_files,\n      trainer_fn_args.data_accessor,\n      tf_transform_graph,\n      batch_size=train_batch_size)\n\n  eval_input_fn = lambda: _input_fn(  # pylint: disable=g-long-lambda\n      trainer_fn_args.eval_files,\n      trainer_fn_args.data_accessor,\n      tf_transform_graph,\n      
batch_size=eval_batch_size)\n\n  train_spec = tf.estimator.TrainSpec(  # pylint: disable=g-long-lambda\n      train_input_fn,\n      max_steps=trainer_fn_args.train_steps)\n\n  serving_receiver_fn = lambda: _example_serving_receiver_fn(  # pylint: disable=g-long-lambda\n      tf_transform_graph, schema)\n\n  exporter = tf.estimator.FinalExporter('chicago-taxi', serving_receiver_fn)\n  eval_spec = tf.estimator.EvalSpec(\n      eval_input_fn,\n      steps=trainer_fn_args.eval_steps,\n      exporters=[exporter],\n      name='chicago-taxi-eval')\n\n  run_config = tf.estimator.RunConfig(\n      save_checkpoints_steps=999, keep_checkpoint_max=1)\n\n  run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)\n\n  estimator = _build_estimator(\n      # Construct layer sizes with exponential decay\n      hidden_units=[\n          max(2, int(first_dnn_layer_size * dnn_decay_factor**i))\n          for i in range(num_dnn_layers)\n      ],\n      config=run_config,\n      warm_start_from=trainer_fn_args.base_model)\n\n  # Create an input receiver for TFMA processing\n  receiver_fn = lambda: _eval_input_receiver_fn(  # pylint: disable=g-long-lambda\n      tf_transform_graph, schema)\n\n  return {\n      'estimator': estimator,\n      'train_spec': train_spec,\n      'eval_spec': eval_spec,\n      'eval_input_receiver_fn': receiver_fn\n  }", "_____no_output_____" ] ], [ [ "Now, we pass in this model code to the `Trainer` component and run it to train the model.", "_____no_output_____" ] ], [ [ "trainer = Trainer(\n    module_file=os.path.abspath(_taxi_trainer_module_file),\n    transformed_examples=transform.outputs['transformed_examples'],\n    schema=schema_gen.outputs['schema'],\n    transform_graph=transform.outputs['transform_graph'],\n    train_args=trainer_pb2.TrainArgs(num_steps=10000),\n    eval_args=trainer_pb2.EvalArgs(num_steps=5000))\ncontext.run(trainer)", "_____no_output_____" ] ], [ [ "#### Analyze Training with TensorBoard\nOptionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.", "_____no_output_____" ] ], [ [ "# Get the URI of the output artifact representing the training logs, which is a directory\nmodel_run_dir = trainer.outputs['model_run'].get()[0].uri\n\n%load_ext tensorboard\n%tensorboard --logdir {model_run_dir}", "_____no_output_____" ] ], [ [ "### Evaluator\nThe `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. The `Evaluator` can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the `Evaluator` will automatically label the model as \"good\". \n\n`Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:", "_____no_output_____" ] ], [ [ "eval_config = tfma.EvalConfig(\n    model_specs=[\n        # Using signature 'eval' implies the use of an EvalSavedModel. 
To use\n        # a serving model, remove the signature so it defaults to 'serving_default'\n        # and add a label_key.\n        tfma.ModelSpec(signature_name='eval')\n    ],\n    metrics_specs=[\n        tfma.MetricsSpec(\n            # The metrics added here are in addition to those saved with the\n            # model (assuming either a keras model or EvalSavedModel is used).\n            # Any metrics added into the saved model (for example using\n            # model.compile(..., metrics=[...]), etc) will be computed\n            # automatically.\n            metrics=[\n                tfma.MetricConfig(class_name='ExampleCount')\n            ],\n            # To add validation thresholds for metrics saved with the model,\n            # add them keyed by metric name to the thresholds map.\n            thresholds = {\n                'accuracy': tfma.MetricThreshold(\n                    value_threshold=tfma.GenericValueThreshold(\n                        lower_bound={'value': 0.5}),\n                    change_threshold=tfma.GenericChangeThreshold(\n                       direction=tfma.MetricDirection.HIGHER_IS_BETTER,\n                       absolute={'value': -1e-10}))\n            }\n        )\n    ],\n    slicing_specs=[\n        # An empty slice spec means the overall slice, i.e. the whole dataset.\n        tfma.SlicingSpec(),\n        # Data can be sliced along a feature column. In this case, data is\n        # sliced along feature column trip_start_hour.\n        tfma.SlicingSpec(feature_keys=['trip_start_hour'])\n    ])", "_____no_output_____" ] ], [ [ "Next, we give this configuration to `Evaluator` and run it.", "_____no_output_____" ] ], [ [ "# Use TFMA to compute evaluation statistics over features of a model and\n# validate them against a baseline.\n\n# The model resolver is only required if performing model validation in addition\n# to evaluation. In this case we validate against the latest blessed model. If\n# no model has been blessed before (as in this case) the evaluator will make our\n# candidate the first blessed model.\nmodel_resolver = ResolverNode(\n      instance_name='latest_blessed_model_resolver',\n      resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,\n      model=Channel(type=Model),\n      model_blessing=Channel(type=ModelBlessing))\ncontext.run(model_resolver)\n\nevaluator = Evaluator(\n    examples=example_gen.outputs['examples'],\n    model=trainer.outputs['model'],\n    #baseline_model=model_resolver.outputs['model'],\n    # Change threshold will be ignored if there is no baseline (first run).\n    eval_config=eval_config)\ncontext.run(evaluator)", "_____no_output_____" ] ], [ [ "Now let's examine the output artifacts of `Evaluator`. ", "_____no_output_____" ] ], [ [ "evaluator.outputs", "_____no_output_____" ] ], [ [ "Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.", "_____no_output_____" ] ], [ [ "context.show(evaluator.outputs['evaluation'])", "_____no_output_____" ] ], [ [ "To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.", "_____no_output_____" ] ], [ [ "import tensorflow_model_analysis as tfma\n\n# Get the TFMA output result path and load the result.\nPATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri\ntfma_result = tfma.load_eval_result(PATH_TO_RESULT)\n\n# Show data sliced along feature column trip_start_hour.\ntfma.view.render_slicing_metrics(\n    tfma_result, slicing_column='trip_start_hour')", "_____no_output_____" ] ], [ [ "This visualization shows the same metrics, but computed at every feature value of `trip_start_hour` instead of on the entire evaluation set.\n\nTensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. 
To learn more, see [the tutorial](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic).", "_____no_output_____" ], [ "Since we added thresholds to our config, validation output is also available. The presence of a `blessing` artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.", "_____no_output_____" ] ], [ [ "blessing_uri = evaluator.outputs.blessing.get()[0].uri\n!ls -l {blessing_uri}", "_____no_output_____" ] ], [ [ "Now we can also verify the success by loading the validation result record:", "_____no_output_____" ] ], [ [ "PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri\nprint(tfma.load_validation_result(PATH_TO_RESULT))", "_____no_output_____" ] ], [ [ "### Pusher\nThe `Pusher` component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to `_serving_model_dir`.", "_____no_output_____" ] ], [ [ "pusher = Pusher(\n    model=trainer.outputs['model'],\n    model_blessing=evaluator.outputs['blessing'],\n    push_destination=pusher_pb2.PushDestination(\n        filesystem=pusher_pb2.PushDestination.Filesystem(\n            base_directory=_serving_model_dir)))\ncontext.run(pusher)", "_____no_output_____" ] ], [ [ "Let's examine the output artifacts of `Pusher`. ", "_____no_output_____" ] ], [ [ "pusher.outputs", "_____no_output_____" ] ], [ [ "In particular, the Pusher will export your model in the SavedModel format, which looks like this:", "_____no_output_____" ] ], [ [ "push_uri = pusher.outputs.model_push.get()[0].uri\nmodel = tf.saved_model.load(push_uri)\n\nfor item in model.signatures.items():\n  pp.pprint(item)", "_____no_output_____" ] ], [ [ "We've finished our tour of built-in TFX components!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a499fdf1e7c8868fb37b184ddd5e66c9058da97
15,705
ipynb
Jupyter Notebook
colab_notebooks/blazingsql_demo.ipynb
gumdropsteve/bsql-demos
8ee8c09dc7fb03afcdb942f89a0513dcf6155995
[ "Apache-2.0" ]
null
null
null
colab_notebooks/blazingsql_demo.ipynb
gumdropsteve/bsql-demos
8ee8c09dc7fb03afcdb942f89a0513dcf6155995
[ "Apache-2.0" ]
null
null
null
colab_notebooks/blazingsql_demo.ipynb
gumdropsteve/bsql-demos
8ee8c09dc7fb03afcdb942f89a0513dcf6155995
[ "Apache-2.0" ]
null
null
null
29.137291
329
0.441452
[ [ [ "# Getting Started with BlazingSQL\n\nIn this notebook, we will cover: \n- How to set up [BlazingSQL](https://blazingsql.com) and the [RAPIDS AI](https://rapids.ai/) suite.\n- How to read and query csv files with cuDF and BlazingSQL.\n![Impression](https://www.google-analytics.com/collect?v=1&tid=UA-39814657-5&cid=555&t=event&ec=guides&ea=bsql-quick-start-guide&dt=bsql-quick-start-guide)", "_____no_output_____" ], [ "## Setup\n### Environment Sanity Check \n\nRAPIDS packages (BlazingSQL included) require Pascal+ architecture to run. For Colab, this translates to a T4 GPU instance. \n\nThe cell below will let you know what type of GPU you've been allocated, and how to proceed.", "_____no_output_____" ] ], [ [ "!wget https://github.com/BlazingDB/bsql-demos/raw/master/utils/colab_env.py\n!python colab_env.py ", "\n\n***********************************\nGPU = b'Tesla T4'\nWoo! You got the right kind of GPU!\n***********************************\n\n\n" ] ], [ [ "## Installs \nThe cell below pulls our Google Colab install script from the `bsql-demos` repo then runs it. The script first installs miniconda, then uses miniconda to install BlazingSQL and RAPIDS AI. This takes a few minutes to run. ", "_____no_output_____" ] ], [ [ "!wget https://github.com/BlazingDB/bsql-demos/raw/master/utils/bsql-colab.sh \n!bash bsql-colab.sh\n\nimport sys, os, time\nsys.path.append('/usr/local/lib/python3.6/site-packages/')\nos.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'\nos.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'\n\nimport subprocess\nsubprocess.Popen(['blazingsql-orchestrator', '9100', '8889', '127.0.0.1', '8890'],stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\nsubprocess.Popen(['java', '-jar', '/usr/local/lib/blazingsql-algebra.jar', '-p', '8890'])\n\nimport pyblazing.apiv2.context as cont\ncont.runRal()\ntime.sleep(1) ", "_____no_output_____" ] ], [ [ "## Import packages and create Blazing Context\nYou can think of the BlazingContext much like a Spark Context (i.e. where information such as FileSystems you have registered and Tables you have created will be stored). If you have issues running this cell, restart runtime and try running it again.\n", "_____no_output_____" ] ], [ [ "from blazingsql import BlazingContext\nimport cudf\n\nbc = BlazingContext()", "BlazingContext ready\n" ] ], [ [ "## Read CSV\nFirst we need to download a CSV file. Then we use cuDF to read the CSV file, which gives us a GPU DataFrame (GDF). To learn more about the GDF and how it enables end to end workloads on rapids, check out our [blog post](https://blog.blazingdb.com/blazingsql-part-1-the-gpu-dataframe-gdf-and-cudf-in-rapids-ai-96ec15102240).", "_____no_output_____" ] ], [ [ "#Download the test CSV\n!wget 'https://s3.amazonaws.com/blazingsql-colab/Music.csv'", "--2019-10-16 20:40:36-- https://s3.amazonaws.com/blazingsql-colab/Music.csv\nResolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.113.141\nConnecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.113.141|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 10473 (10K) [text/csv]\nSaving to: ‘Music.csv.1’\n\nMusic.csv.1 100%[===================>] 10.23K --.-KB/s in 0s \n\n2019-10-16 20:40:36 (215 MB/s) - ‘Music.csv.1’ saved [10473/10473]\n\n" ], [ "# like pandas, cudf can simply read the csv\ngdf = cudf.read_csv('Music.csv')\n\n# let's see how it looks\ngdf.head()", "_____no_output_____" ] ], [ [ "## Create a Table\nNow we just need to create a table. 
", "_____no_output_____" ] ], [ [ "bc.create_table('music', gdf)", "_____no_output_____" ] ], [ [ "## Query a Table\nThat's it! Now when you can write a SQL query the data will get processed on the GPU with BlazingSQL, and the output will be a GPU DataFrame (GDF) inside RAPIDS!", "_____no_output_____" ] ], [ [ "# query 10 events with a rating of at least 7\nresult = bc.sql('SELECT * FROM main.music where RATING >= 7 LIMIT 10').get()\n\n# get GDF\nresult_gdf = result.columns\n\n# display GDF (just like pandas)\nresult_gdf", "_____no_output_____" ] ], [ [ "# You're Ready to Rock\nAnd... thats it! You are now live with BlazingSQL.\n\n\nCheck out our [docs](https://docs.blazingdb.com) to get fancy or to learn more about how BlazingSQL works with the rest of [RAPIDS AI](https://rapids.ai/).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a49ab7d067c03d8d0d496de00f6282fe5abbac7
46,475
ipynb
Jupyter Notebook
machine_learning/regression.ipynb
vishalbelsare/teaching
dc1f7d6b259ff13ea4747e12ece0b2c66982532b
[ "MIT" ]
26
2019-07-24T16:54:19.000Z
2022-03-24T13:32:51.000Z
machine_learning/regression.ipynb
Test00DezWebSite/teaching
be550c24b0774d1365c3c5a1e2986b351e9191e8
[ "MIT" ]
1
2020-07-28T22:52:23.000Z
2020-07-28T22:52:23.000Z
machine_learning/regression.ipynb
Test00DezWebSite/teaching
be550c24b0774d1365c3c5a1e2986b351e9191e8
[ "MIT" ]
26
2020-04-16T14:57:16.000Z
2021-11-23T00:24:12.000Z
294.14557
22,736
0.936181
[ [ [ "## simple regression", "_____no_output_____" ] ], [ [ "import os, sys, math\nimport numpy as np\nimport pandas as pd\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_context(\"notebook\", font_scale=1.4)", "_____no_output_____" ], [ "df=pd.read_csv('weight_height.csv')", "_____no_output_____" ], [ "df.plot(x='height',y='weight',kind='scatter')", "_____no_output_____" ], [ "from sklearn.linear_model import LinearRegression\nX=pd.DataFrame(df.weight)\ny=pd.DataFrame(df.height)\n#print (X)\nreg = LinearRegression().fit(X, y)\n#reg.score(X, y)\ny_pred = reg.predict(X)\nprint('Coefficients: \\n', reg.coef_)", "Coefficients: \n [[0.11192231]]\n" ], [ "f,ax=plt.subplots(figsize=(6,6))\nplt.scatter(X, y, color='black')\nplt.plot(X, y_pred, color='blue', linewidth=3)\nplt.xlabel('weight')\nplt.ylabel('height')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a49ad3873a1df0837abea0235801b01e75c81c2
402,136
ipynb
Jupyter Notebook
gridded_data_tutorial_1.ipynb
YifanCheng/waterdata
365c76bac4e63519ac9a6181a354302342462e6f
[ "CC-BY-4.0" ]
3
2020-07-13T21:39:47.000Z
2020-09-24T14:04:02.000Z
gridded_data_tutorial_1.ipynb
YifanCheng/waterdata
365c76bac4e63519ac9a6181a354302342462e6f
[ "CC-BY-4.0" ]
3
2020-05-10T21:36:39.000Z
2020-09-09T21:55:44.000Z
gridded_data_tutorial_1.ipynb
YifanCheng/waterdata
365c76bac4e63519ac9a6181a354302342462e6f
[ "CC-BY-4.0" ]
4
2020-05-06T22:17:21.000Z
2020-09-15T01:22:34.000Z
122.939774
105,952
0.790827
[ [ [ "[0: NumPy and the ndarray](gridded_data_tutorial_0.ipynb) | **1: Introduction to xarray** | [2: Daymet data access](gridded_data_tutorial_2.ipynb) | [3: Investigating SWE at Mt. Rainier with Daymet](gridded_data_tutorial_3.ipynb)", "_____no_output_____" ], [ "# Notebook 1: Introduction to xarray\nWaterhackweek 2020 | Steven Pestana ([email protected])\n\n**By the end of this notebook you will be able to:**\n* Create xarray DataArrays and Datasets\n* Index and slice DataArrays and Datasets\n* Make plots using xarray objects\n* Export xarray Datasets as NetCDF or CSV files\n\n---", "_____no_output_____" ], [ "#### What do we mean by \"gridded data\"?\n\nBroadly speaking, this can mean any data with a corresponding location in one or more dimensions. Typically, our dimensions represent points on the Earth's surface in two or three dimensions (latitude, longitude, and elevation), and often include time as an additional dimension. You may also hear the term \"raster\" data, which also means data points on some grid. These multi-dimensional datasets can be thought of as 2-D images, stacks of 2-D images, or data \"cubes\" in 3 or more dimensions. \n\nExamples of gridded data:\n * Satellite images of Earth's surface, where each pixel represents reflection or emission at some wavelength\n * Climate model output, where the model is evaluated at discrete nodes or grid cells\n \nExamples of raster/gridded data formats that combine multi-dimensional data along with metadata in a single file:\n* [NetCDF](https://www.unidata.ucar.edu/software/netcdf/docs/) (Network Common Data Form) for model data, satellite imagery, and more\n* [GeoTIFF](https://trac.osgeo.org/geotiff/) for georeferenced raster imagery (satellite images, digital elevation models, maps, and more)\n* [HDF-EOS](https://earthdata.nasa.gov/esdis/eso/standards-and-references/hdf-eos5) (Hierarchical Data Format - Earth Observing Systems) \n* [GRIB](https://en.wikipedia.org/wiki/GRIB) (GRIdded Binary) for meteorological data\n\n**How can we easily work with these types of data in python?**\n\nSome python packages for working with gridded data:\n* [rasterio](https://rasterio.readthedocs.io/en/latest/)\n* [xarray](https://xarray.pydata.org/en/stable/)\n* [rioxarray](https://corteva.github.io/rioxarray/stable/)\n* [cartopy](https://scitools.org.uk/cartopy/docs/latest/)\n \n**Today we'll be using xarray!**\n\n---", "_____no_output_____" ], [ "# xarray\n\nThe [xarray](https://xarray.pydata.org/) library allows us to read, manipulate, and create **labeled** multi-dimensional arrays and datasets, such as [NetCDF](https://www.unidata.ucar.edu/software/netcdf/) files.\n\nIn the image below, we can imagine having two \"data cubes\" (3-dimensional data arrays) of temperature and precipitation values, each of which corresponds to a particular x and y spatial coordinate, and t time step.\n\n<img src=\"https://xarray.pydata.org/en/stable/_images/dataset-diagram.png\" width=700>", "_____no_output_____" ], [ "Let's import xarray and start to explore its features...", "_____no_output_____" ] ], [ [ "# import the package, and give it the alias \"xr\"\nimport xarray as xr\n\n# we will also be using numpy and pandas, import both of these\nimport numpy as np\nimport pandas as pd\n\n# for plotting, import matplotlib.pyplot\nimport matplotlib.pyplot as plt\n# tell jupyter to display plots \"inline\" in the notebook\n%matplotlib inline", "_____no_output_____" ] ], [ [ "---\n# DataArrays\nSimilar to the `numpy.ndarray` object, the `xarray.DataArray` is a 
multi-dimensional array, with the addition of labeled dimensions, coordinates, and other metadata. A [DataArray](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.html) contains the following: \n* `values` which store the actual data values in a `numpy.ndarray`\n* `dims` are the names for each dimension of the `values` array\n* `coords` are arrays of labels for each point\n* `attrs` is a [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that can contain additional metadata\n", "_____no_output_____" ], [ "**Let's create some fake air temperature data to see how these different parts work together to form a DataArray.**\n\nOur goal here is to have 100 years of annual maximum air temperature data for a 10 by 10 grid in a DataArray. (Our data will have a shape of 100 x 10 x 10)\n\nI'm going to use a numpy function to generate some random numbers that are [normally distributed](https://numpy.org/devdocs/reference/random/generated/numpy.random.normal.html) (`np.random.normal()`).", "_____no_output_____" ] ], [ [ "# randomly generated annual maximum air temperature data for a 10 by 10 grid\n\n# choose a mean and standard deviation for our random data\nmean = 20\nstandard_deviation = 5\n\n# specify that we want to generate 100 x 10 x 10 random samples\nsamples = (100, 10, 10)\n\n# generate the random samples\nair_temperature_max = np.random.normal(mean, standard_deviation, samples)", "_____no_output_____" ], [ "# look at this ndarray we just made\nair_temperature_max", "_____no_output_____" ], [ "# look at the shape of this ndarray\nair_temperature_max.shape", "_____no_output_____" ] ], [ [ "`air_temperature` will be the `values` within the DataArray. It is a three-dimensional array, and we've given it a shape of 100x10x10. 
\n\nThe three dimensions will need names (`dims`) and labels (`coords`).\n\n**Make the `coords` that will be our 100 years**", "_____no_output_____" ] ], [ [ "# Make a sequence of 100 years to be our time dimension\nyears = pd.date_range('1920', periods=100, freq ='1Y')", "_____no_output_____" ] ], [ [ "**Make the `coords` that will be our longitudes and latitudes**", "_____no_output_____" ] ], [ [ "# Make a sequence of linearly spaced longitude and latitude values\nlon = np.linspace(-119, -110, 10)\nlat = np.linspace(30, 39, 10)", "_____no_output_____" ] ], [ [ "**Make the `dims` names**", "_____no_output_____" ] ], [ [ "# We can call our dimensions time, lat, and lon corresponding to the dimensions with lengths 100 (years) and 10 (lat and lon) respectively\ndimensions = ['time', 'lat', 'lon']", "_____no_output_____" ] ], [ [ "**Finally we can create a metadata dictionary which will be included in the DataArray**", "_____no_output_____" ] ], [ [ "metadata = {'units': 'C',\n            'description': 'maximum annual air temperature'}", "_____no_output_____" ] ], [ [ "**Now that we have all the individual components of an xarray DataArray, we can create it**", "_____no_output_____" ] ], [ [ "tair_max = xr.DataArray(air_temperature_max, \n                        coords=[years, lat, lon], \n                        dims=dimensions, \n                        name='tair_max', \n                        attrs=metadata)", "_____no_output_____" ] ], [ [ "**Inspect the DataArray we just created**", "_____no_output_____" ] ], [ [ "tair_max", "_____no_output_____" ], [ "# Get the DataArray dimensions (labels for coordinates)\ntair_max.dims", "_____no_output_____" ], [ "# Get the DataArray coordinates\ntair_max.coords", "_____no_output_____" ], [ "# Look at our attributes\ntair_max.attrs", "_____no_output_____" ], [ "# Take a look at the data values\ntair_max.values", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "## DataArray indexing/slicing methods\n\nDataArrays can be [indexed or sliced](https://xarray.pydata.org/en/stable/indexing.html) much like ndarrays, but with the addition of using labels.\n\n| Dimension lookup | Index lookup | DataArray syntax |\n| --- | --- | --- |\n| positional | by integer | `da[:,0]` |\n| positional | by label | `da.loc[:,'east_watershed']` |\n| by name | by integer | `da.isel(watershed=0)` |\n| by name | by label | `da.sel(watershed='east_watershed')` |", "_____no_output_____" ], [ "Let's select by name and by label, air temperature for just one year, and plot it. (Conveniently, xarray will add axes labels and a title by default.)", "_____no_output_____" ] ], [ [ "tair_max.sel(time='2019').plot()", "_____no_output_____" ] ], [ [ "Similarly we can select by longitude and latitude to plot a timeseries. (We made this easy on ourselves here by choosing whole number integers for our longitude and latitude)", "_____no_output_____" ] ], [ [ "tair_max.sel(lat=34, lon=-114).plot()", "_____no_output_____" ] ], [ [ "Now let's select a shorter time range using a `slice()` to plot data for this location.", "_____no_output_____" ] ], [ [ "tair_max.sel(lat=34, lon=-114, time=slice('2000','2020')).plot()", "_____no_output_____" ] ], [ [ "And if we try to plot the whole DataArray, xarray gives us a histogram!", "_____no_output_____" ] ], [ [ "tair_max.plot()", "_____no_output_____" ] ], [ [ "---\n# Datasets\n\nSimilar to the `pandas.dataframe`, the `xarray.Dataset` contains one or more labeled `xarray.DataArray` objects.\n\nWe can create a [Dataset](https://xarray.pydata.org/en/stable/data-structures.html#dataset) with our simulated data here. 
\n\n**First, create two more DataArrays with annual minimum air temperatures and annual cumulative precipitation**", "_____no_output_____" ] ], [ [ "# randomly generated annual minimum air temperature data for a 10 by 10 grid\nair_temperature_min = np.random.normal(-10, 10, (100, 10, 10))\n\n# randomly generated annual cumulative precipitation data for a 10 by 10 grid\ncumulative_precip = np.random.normal(100, 25, (100, 10, 10))", "_____no_output_____" ] ], [ [ "Make the DataArrays (note that we're using the same `coords` and `dims` as our first maximum air temperature DataArray)", "_____no_output_____" ] ], [ [ "tair_min = xr.DataArray(air_temperature_min, \n                        coords=[years, lat, lon], \n                        dims=dimensions, \n                        name='tair_min', \n                        attrs={'units':'C', \n                               'description': 'minimum annual air temperature'})\n\nprecip = xr.DataArray(cumulative_precip, \n                      coords=[years, lat, lon], \n                      dims=dimensions, \n                      name='cumulative_precip', \n                      attrs={'units':'cm', \n                             'description': 'annual cumulative precipitation'})", "_____no_output_____" ] ], [ [ "**Now merge our two DataArrays and create a Dataset.**", "_____no_output_____" ] ], [ [ "my_data = xr.merge([tair_max, tair_min, precip])", "_____no_output_____" ], [ "# inspect the Dataset\nmy_data", "_____no_output_____" ] ], [ [ "## Dataset indexing/slicing methods\n\nDatasets can also be [indexed or sliced](https://xarray.pydata.org/en/stable/indexing.html) using the `.isel()` or `.sel()` methods.\n\n\n\n\n| Dimension lookup | Index lookup | Dataset syntax |\n| --- | --- | --- |\n| positional | by integer | *n/a* |\n| positional | by label | *n/a* |\n| by name | by integer | `ds.isel(location=0)` |\n| by name | by label | `ds.sel(location='stream_gage_1')` |", "_____no_output_____" ], [ "**Select with `.sel()` temperatures and precipitation for just one grid cell**", "_____no_output_____" ] ], [ [ "# by name, by label\nmy_data.sel(lon='-114', lat='35')", "_____no_output_____" ] ], [ [ "**Select with `.isel()` temperatures and precipitation for just one year**", "_____no_output_____" ] ], [ [ "# by name, by integer\nmy_data.isel(time=0)", "_____no_output_____" ] ], [ [ "---\n\n## Make some plots:\n\nUsing our indexing/slicing methods, create some plots showing 1) a timeseries of all three variables at a single point, then 2) plot some maps of each variable for two points in time.", "_____no_output_____" ] ], [ [ "# 1) create a timeseries for the two temperature variables for a single location\n\n# create a figure with 2 rows and 1 column of subplots\nfig, ax = plt.subplots(nrows=2, ncols=1, figsize=(10,7), tight_layout=True)\n\n# pick a longitude and latitude in our dataset\nmy_lon=-114\nmy_lat=35\n\n# first subplot\n# Plot tair_max\nmy_data.sel(lon=my_lon, lat=my_lat).tair_max.plot(ax=ax[0], color='r', linestyle='-', label='Tair_max')\n# Plot tair_min\nmy_data.sel(lon=my_lon, lat=my_lat).tair_min.plot(ax=ax[0], color='b', linestyle='--', label='Tair_min')\n# Add a title\nax[0].set_title('Annual maximum and minimum air temperatures at {}, {}'.format(my_lon,my_lat))\n# Add a legend\nax[0].legend(loc='lower left')\n\n# second subplot\n# Plot precip\nmy_data.sel(lon=my_lon, lat=my_lat).cumulative_precip.plot(ax=ax[1], color='black', linestyle='-', label='Cumulative Precip.')\n# Add a title\nax[1].set_title('Annual cumulative precipitation at {}, {}'.format(my_lon,my_lat))\n# Add a legend\nax[1].legend(loc='lower left')\n\n# Save the figure\nplt.savefig('my_data_plot_timeseries.jpg')", "_____no_output_____" ], [ "# 2) plot maps of temperature and precipitation for 
two years\n\n# create a figure with 2 rows and 3 columns of subplots\nfig, ax = plt.subplots(nrows=2, ncols=3, figsize=(15,7), tight_layout=True)\n\n# The two years we want to plot\nyear1 = '1980'\nyear2 = '2019'\n\n# Plot tair_max for the year 1980\nmy_data.sel(time=year1).tair_max.plot(ax=ax[0,0], cmap='RdBu_r', vmin=-20, vmax=40)\n# set a title for this subplot\nax[0,0].set_title('Tair_max {}'.format(year1));\n\n# Plot tair_min for the year 1980\nmy_data.sel(time=year1).tair_min.plot(ax=ax[0,1], cmap='RdBu_r', vmin=-20, vmax=40)\n# set a title for this subplot\nax[0,1].set_title('Tair_min {}'.format(year1));\n\n# Plot cumulative_precip for the year 1980\nmy_data.sel(time=year1).cumulative_precip.plot(ax=ax[0,2], cmap='Blues')\n# set a title for this subplot\nax[0,2].set_title('Precip {}'.format(year1));\n\n\n# Plot tair_max for the year 2019\nmy_data.sel(time=year2).tair_max.plot(ax=ax[1,0], cmap='RdBu_r', vmin=-20, vmax=40)\n# set a title for this subplot\nax[1,0].set_title('Tair_max {}'.format(year2));\n\n# Plot tair_min for the year 2019\nmy_data.sel(time=year2).tair_min.plot(ax=ax[1,1], cmap='RdBu_r', vmin=-20, vmax=40)\n# set a title for this subplot\nax[1,1].set_title('Tair_min {}'.format(year2));\n\n# Plot cumulative_precip for the year 2019\nmy_data.sel(time=year2).cumulative_precip.plot(ax=ax[1,2], cmap='Blues')\n# set a title for this subplot\nax[1,2].set_title('Precip {}'.format(year2));\n\n\n# save the figure as a jpg image\nplt.savefig('my_data_plot_rasters.jpg')", "_____no_output_____" ] ] ], [ [ "---\n## Save our data to a file:", "_____no_output_____" ], [ "**As a NetCDF file:**", "_____no_output_____" ] ], [ [ "my_data.to_netcdf('my_data.nc')", "_____no_output_____" ] ], [ [ "**We can also convert a Dataset or DataArray to a pandas dataframe**", "_____no_output_____" ] ], [ [ "my_data.to_dataframe()", "_____no_output_____" ] ], [ [ "**Via a pandas dataframe, save our data to a csv file**", "_____no_output_____" ] ], [ [ "my_data.to_dataframe().to_csv('my_data.csv')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a49c7ae14ab97273389e00e7057b298459a9ea5
48,155
ipynb
Jupyter Notebook
DL_TF20/Part 8 - Preprocessing - Preparing Data-Antonio.ipynb
46gom/DLND_Exercises
eb4bc8600af7b23d05191c7156bc90ce0ed7af79
[ "MIT" ]
4
2020-03-06T06:05:58.000Z
2020-03-13T05:10:17.000Z
DL_TF20/Part 8 - Preprocessing - Preparing Data-Antonio.ipynb
46gom/DLND_Exercises
eb4bc8600af7b23d05191c7156bc90ce0ed7af79
[ "MIT" ]
8
2021-03-19T04:53:14.000Z
2022-03-12T00:03:52.000Z
DL_TF20/Part 8 - Preprocessing - Preparing Data-Antonio.ipynb
46gom/DLND_Exercises
eb4bc8600af7b23d05191c7156bc90ce0ed7af79
[ "MIT" ]
4
2019-11-19T00:08:13.000Z
2020-03-13T05:10:23.000Z
70.094614
17,028
0.834784
[ [ [ "import os\nfrom glob import glob\n\nimport numpy as np\n\nimport tensorflow as tf\nfrom PIL import Image\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# 실습데이터 다운로드\n- mnist_png.zip : https://drive.google.com/file/d/1wmODa2zMLxmVzs45xNrf7TswDooxfuDJ/view?usp=sharing\n- cifar.zip : https://drive.google.com/file/d/1c3Mpi4QEGBVMbJwAK0IYsgW2XeB3_EKh/view?usp=sharing", "_____no_output_____" ] ], [ [ "os.listdir('./dataset/')", "_____no_output_____" ], [ "os.listdir('./dataset/mnist_png/training/')", "_____no_output_____" ], [ "os.listdir('./dataset/mnist_png/training/0/')[0]", "_____no_output_____" ], [ "data_paths = glob('./dataset/mnist_png/training/*/*.png')\ndata_paths[0]", "_____no_output_____" ], [ "path = data_paths[0]\npath", "_____no_output_____" ] ], [ [ "# 데이터 분석 ", "_____no_output_____" ] ], [ [ "len(data_paths)", "_____no_output_____" ], [ "os.listdir('./dataset/mnist_png/training')", "_____no_output_____" ], [ "label_nums = os.listdir('./dataset/mnist_png/training')\nlabel_nums", "_____no_output_____" ] ], [ [ "Label 0의 데이터 갯수 확인", "_____no_output_____" ] ], [ [ "len(os.listdir('./dataset/mnist_png/training/' + '0'))", "_____no_output_____" ] ], [ [ "### 데이터 별 갯수 비교", "_____no_output_____" ] ], [ [ "nums_dataset = []\n\nfor lbl_n in label_nums:\n data_per_class = os.listdir('./dataset/mnist_png/training/' + lbl_n)\n nums_dataset.append(len(data_per_class))", "_____no_output_____" ], [ "nums_dataset", "_____no_output_____" ], [ "list(range(10))", "_____no_output_____" ], [ "plt.bar(list(range(10)), nums_dataset)\nplt.title('Number of dataset per class')\nplt.show()", "_____no_output_____" ] ], [ [ "# Pillow로 열기", "_____no_output_____" ] ], [ [ "image_pil = Image.open(path)\nimage = np.array(image_pil)", "_____no_output_____" ], [ "image.shape", "_____no_output_____" ], [ "plt.imshow(image, 'gray')\nplt.show()", "_____no_output_____" ] ], [ [ "# TensorFlow로 열기", "_____no_output_____" ] ], [ [ "gfile = tf.io.read_file(path)\nimage = tf.io.decode_image(gfile)", "_____no_output_____" ], [ "image.shape", "_____no_output_____" ], [ "plt.imshow(image[:, :, 0], 'gray')\nplt.show()", "_____no_output_____" ] ], [ [ "# Label 얻기", "_____no_output_____" ] ], [ [ "path", "_____no_output_____" ], [ "path.split('\\\\')", "_____no_output_____" ], [ "cls_n = path.split('\\\\')[-2]\ncls_n", "_____no_output_____" ], [ "int(cls_n)", "_____no_output_____" ], [ "def get_label(path):\n cls_n = path.split('\\\\')[-2]\n return int(cls_n)", "_____no_output_____" ], [ "lbl = get_label(path)\nlbl", "_____no_output_____" ] ], [ [ "# 데이터 이미지 사이즈 알기", "_____no_output_____" ] ], [ [ "!pip install tqdm", "Collecting tqdm\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/05/f2/764a5d530cf143ded9bc95216edb6e258c6554511e78de7c250557e8f3ed/tqdm-4.37.0-py2.py3-none-any.whl (53kB)\n\u001b[K |████████████████████████████████| 61kB 360kB/s eta 0:00:01\n\u001b[?25hInstalling collected packages: tqdm\nSuccessfully installed tqdm-4.37.0\n" ], [ "from tqdm import tqdm_notebook", "_____no_output_____" ], [ "heights = []\nwidths = []\n\nfor path in tqdm_notebook(data_paths):\n image_pil = Image.open(path)\n image = np.array(image_pil)\n h, w = image.shape\n \n heights.append(h)\n widths.append(w)\n", "_____no_output_____" ], [ "plt.figure(figsize=(20, 10))\n\nplt.subplot(121)\nplt.hist(heights)\nplt.title('Heights')\nplt.axvline(np.mean(heights), color='r', linestyle='dashed', 
linewidth=2)\n\nplt.subplot(122)\nplt.hist(widths)\nplt.title('Widths')\nplt.axvline(np.mean(widths), color='r', linestyle='dashed', linewidth=2)\n\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a49da6aac5a1e93721ba41071a9233d04d75f5f
7,777
ipynb
Jupyter Notebook
data/external/numpy-working-with-multidimensional-data/02/demos/m02_demo07_VectorStacking.ipynb
vickykatoch/pyTitanic
a2ce0278db4b5b4d454e426eb0f80007082bc4a1
[ "MIT" ]
null
null
null
data/external/numpy-working-with-multidimensional-data/02/demos/m02_demo07_VectorStacking.ipynb
vickykatoch/pyTitanic
a2ce0278db4b5b4d454e426eb0f80007082bc4a1
[ "MIT" ]
null
null
null
data/external/numpy-working-with-multidimensional-data/02/demos/m02_demo07_VectorStacking.ipynb
vickykatoch/pyTitanic
a2ce0278db4b5b4d454e426eb0f80007082bc4a1
[ "MIT" ]
null
null
null
19.061275
150
0.444644
[ [ [ "## Vector Stacking\n", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "### 1. concatenate", "_____no_output_____" ], [ "The arrays must have the same shape, except in the dimension corresponding to axis. The default axis along which the arrays will be joined is 0.", "_____no_output_____" ] ], [ [ "x = np.array([[\"Germany\",\"France\"],[\"Berlin\",\"Paris\"]])\ny = np.array([[\"Hungary\",\"Austria\"],[\"Budapest\",\"Vienna\"]])", "_____no_output_____" ], [ "print(x)\nprint(x.shape)", "[['Germany' 'France']\n ['Berlin' 'Paris']]\n(2, 2)\n" ], [ "print(y)\nprint(y.shape)", "[['Hungary' 'Austria']\n ['Budapest' 'Vienna']]\n(2, 2)\n" ] ], [ [ "##### The default is row-wise concatenation for a 2D array", "_____no_output_____" ] ], [ [ "print('Joining two arrays along axis 0')\nnp.concatenate((x,y))", "Joining two arrays along axis 0\n" ] ], [ [ "##### Column-wise", "_____no_output_____" ] ], [ [ "print('Joining two arrays along axis 1')\nnp.concatenate((x,y), axis = 1)", "Joining two arrays along axis 1\n" ] ], [ [ "### 2. stack", "_____no_output_____" ] ], [ [ "a = np.array([1, 2, 3])\nb = np.array([2, 3, 4])", "_____no_output_____" ], [ "np.stack((a, b))", "_____no_output_____" ], [ "studentId = np.array([1,2,3,4])\nname = np.array([\"Alice\",\"Beth\",\"Cathy\",\"Dorothy\"])\nscores = np.array([65,78,90,81])", "_____no_output_____" ], [ "np.stack((studentId, name, scores))", "_____no_output_____" ], [ "np.stack((studentId, name, scores)).shape", "_____no_output_____" ], [ "np.stack((studentId, name, scores), axis =1)", "_____no_output_____" ], [ "np.stack((studentId, name, scores), axis =1).shape", "_____no_output_____" ] ], [ [ "### 3. vstack\nStacks row wise", "_____no_output_____" ] ], [ [ "np.vstack((studentId, name, scores)) ", "_____no_output_____" ] ], [ [ "### 4. hstack\nStacks column wise", "_____no_output_____" ] ], [ [ "np.hstack((studentId, name, scores)) ", "_____no_output_____" ], [ "np.hstack((studentId, name, scores)).shape", "_____no_output_____" ] ], [ [ "The functions concatenate, stack and block provide more general stacking and concatenation operations.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a49efd1b23e1af427945827978c171832862f53
496,173
ipynb
Jupyter Notebook
DRC2017/Exploration/AbstractFigures.ipynb
samueljamesbader/HyperFET_Project
c9b7a870aa9c63f0bac76d3b9370ef4814acda0b
[ "MIT" ]
null
null
null
DRC2017/Exploration/AbstractFigures.ipynb
samueljamesbader/HyperFET_Project
c9b7a870aa9c63f0bac76d3b9370ef4814acda0b
[ "MIT" ]
null
null
null
DRC2017/Exploration/AbstractFigures.ipynb
samueljamesbader/HyperFET_Project
c9b7a870aa9c63f0bac76d3b9370ef4814acda0b
[ "MIT" ]
null
null
null
508.895385
64,554
0.925599
[ [ [ "%matplotlib inline\nfrom ipywidgets import interact, FloatSlider, HTML\nfrom IPython.display import display\nimport matplotlib.pyplot as plt\nimport matplotlib\nmatplotlib.rc('font',size=18)\nimport matplotlib.ticker as plticker\nimport matplotlib.patches as patches\nimport numpy as np\nimport warnings\nimport os.path\n\n\nfrom hyperfet.devices import SCMOSFET,VO2,HyperFET, Direction\nimport hyperfet.approximations as appr\nimport hyperfet.extractions as extr\nfrom hyperfet.references import si#, mixed_vo2_params\nfrom hyperfet.fitting import show_transistor\nfrom hyperfet import ABSTRACT_IMAGE_DIR", "_____no_output_____" ], [ "def ylog():\n with warnings.catch_warnings():\n warnings.filterwarnings('error')\n plt.yscale('log')\ndef tighten():\n with warnings.catch_warnings():\n warnings.filterwarnings('ignore')\n plt.tight_layout()", "_____no_output_____" ], [ "# Parameters given for Figure 3\n\nvo2_params={\n \"rho_m\":si(\"3e-4 ohm cm\"),\n \"rho_i\":si(\"30 ohm cm\"),\n \"J_MIT\":si(\"1e6 A/cm^2\"),\n \"J_IMT\":si(\".5e4 A/cm^2\"),\n \"v_met\": si(\".05 V/ (20nm)\")\n}\n\n#vo2=VO2(**vo2_params)\n\nVDD=.5", "_____no_output_____" ], [ "opts={\n 'figsize': (6,7),\n 'linidvgpos': [.3,.68,.2,.2],\n 'linidvgxticks': [0,.25,.5],\n 'linidvgxlim': [0,.5],\n 'linidvgyticks': [100,200,300],\n 'linidvdpos': [.62,.25,.25,.3],\n 'linidvdxticks': [0,.25,.5],\n 'linidvdyticks': [0,100,200,300],\n}", "_____no_output_____" ], [ "fet=None\n@interact(VT0=FloatSlider(value=.32,min=0,max=1,step=.05,continuous_update=False),\n W=FloatSlider(value=100,min=10,max=100,step=10,continuous_update=False),\n Cinv_vxo=FloatSlider(value=2500,min=1000,max=5000,step=400,continuous_update=False),\n SS=FloatSlider(value=.070,min=.05,max=.09,step=.005,continuous_update=False),\n alpha=FloatSlider(value=2.5,min=0,max=5,step=.5,continuous_update=False),\n beta=FloatSlider(value=1.8,min=0,max=4,step=.1,continuous_update=False),\n VDD=FloatSlider(value=.5,min=.3,max=1,step=.05,continuous_update=False),\n VDsats=FloatSlider(value=.1,min=.1,max=2,step=.1,continuous_update=False),\n delta=FloatSlider(value=.01,min=0,max=.5,step=.05,continuous_update=False),\n log10Gleak=FloatSlider(value=-12,min=-14,max=-5,step=1,continuous_update=False)\n )\ndef show_HEMT(VT0,W,Cinv_vxo,SS,alpha,beta,VDsats,VDD,delta,log10Gleak):\n global fet\n plt.figure(figsize=(12,6))\n fet=SCMOSFET(\n W=W*1e-9,Cinv_vxo=Cinv_vxo,\n VT0=VT0,alpha=alpha,SS=SS,delta=delta,\n VDsats=VDsats,beta=beta,Gleak=10**log10Gleak)\n \n plt.subplot(121)\n VD=np.array(VDD)\n VG=np.linspace(0,.5,500)\n VDgrid,VGgrid=np.meshgrid(VD,VG)\n I=fet.ID(VD=VDgrid,VG=VGgrid)\n plt.plot(VG,I/fet.W,label=r\"$V_D={:.2g}$\".format(VDD))\n plt.yscale('log')\n plt.xlabel(r\"$V_G\\;\\mathrm{[V]}$\")\n plt.ylabel(r\"$I_D\\;\\mathrm{[\\mu A/\\mu m]}$\")\n plt.legend(loc='lower right',fontsize=16)\n \n plt.subplot(122)\n VD=np.linspace(0,VDD,500)\n VG=np.linspace(0,VDD,10)\n VDgrid,VGgrid=np.meshgrid(VD,VG)\n I=fet.ID(VD=VDgrid,VG=VGgrid)\n plt.plot(VD,I.T/fet.W)\n plt.xlabel(r\"$V_D\\;\\mathrm{[V]}$\")\n plt.ylabel(r\"$I_D\\;\\mathrm{[\\mu A/\\mu m]}$\")\n #plt.legend([r\"$V_G={:.2g}$\".format(vg) for vg in VG],loc='lower right',fontsize=16)\n \n plt.tight_layout()", "_____no_output_____" ], [ "opts={\n 'figsize': (6,7),\n 'linidvgpos': [.25,.69,.2,.2],\n 'linidvgxticks': [0,.5],\n 'linidvgxlim': [0,.5],\n 'linidvgyticks': [100,200,300,400],\n 'linidvdpos': [.62,.25,.25,.3],\n 'linidvdxticks': [0,.5],\n 'linidvdyticks': 
[100,200,300,400],\n}\nshow_transistor(fet,VDD,data=None,**opts)\nplt.gcf().get_axes()[0].set_ylim(1e-3,1e3)\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"Transistor.eps\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"Transistor.png\"))", "_____no_output_____" ], [ "vo2=VO2(L=20e-9,W=15e-9,T=5e-9,**vo2_params)\n\n\nfig, (ax, ax2) = plt.subplots(2, 1, sharex=True,figsize=(6,7))\n\nI=np.logspace(-4,4,1000)*fet.W\nV=vo2.V(I,direc=Direction.FORWARD)\n\nplt.axes(ax)\nplt.plot(V,I*1e6)\nplt.ylabel(r\"$I\\;\\mathrm{[{\\bf\\mu} A]}$\",fontsize=20)\nplt.ylim(10,250)\n\nplt.axes(ax2)\nplt.plot(V,I*1e9)\nplt.ylim(0,10)\nplt.ylabel(r\"$I\\;\\mathrm{[{\\bf n}A]}$\",fontsize=20)\nplt.xlim(0,.35)\nplt.xlabel(\"$V\\;\\mathrm{[V]}$\")\nax2.xaxis.set_major_locator(plticker.MultipleLocator(base=.1))\n\nax.spines['bottom'].set_visible(False)\nax2.spines['top'].set_visible(False)\nax.xaxis.tick_top()\nax.tick_params(labeltop='off') # don't put tick labels at the top\nax2.xaxis.tick_bottom()\n\n\nd = .015 # how big to make the diagonal lines in axes coordinates\n# arguments to pass to plot, just so we don't keep repeating them\nkwargs = dict(transform=ax.transAxes, color='k', clip_on=False)\nax.plot((-d, +d), (-d, +d), **kwargs) # top-left diagonal\nax.plot((1 - d, 1 + d), (-d, +d), **kwargs) # top-right diagonal\n\nkwargs.update(transform=ax2.transAxes) # switch to the bottom axes\nax2.plot((-d, +d), (1 - d, 1 + d), **kwargs) # bottom-left diagonal\nax2.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) # bottom-right diagonal\n\n\nplt.axes([.6,.4,.25,.25])\nplt.plot(V,I)\nplt.yscale('log')\nplt.xticks([0,.3])\nplt.xlim(0,.35)\n#plt.ylabel('$I\\;\\mathrm{[A]}$')\nplt.title(\"$\\mathrm{Log\\;IV}$\")\nplt.yticks([1e-9,1e-7,1e-5])\n\n\nwith warnings.catch_warnings():\n warnings.filterwarnings('ignore')\n plt.tight_layout()\n#plt.yscale('log')\n\n\n#plt.yscale('log')\n\n\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"PCR.eps\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"PCR.png\"))", "_____no_output_____" ], [ "VD=np.array(VDD)\nVG=np.linspace(0,.5,500)\n\nplt.figure(figsize=(6,7))\nplt.plot(VG,fet.ID(VD,VG)/fet.W,label='Transistor')\n\nhf=HyperFET(fet,vo2)\n#hf=HyperFET(fet.shifted(appr.shift(HyperFET(fet,vo2),VDD)),vo2)\n\nIf,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\nl=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2,label='HyperFET')[0]\nplt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())\n \nplt.yscale('log')\nplt.xlabel(r\"$V_G\\;\\mathrm{[V]}$\")\nplt.ylabel(r\"$I_D\\;\\mathrm{[\\mu A/\\mu m]}$\")\n\nplt.legend(loc='lower right',fontsize=16)\n\nplt.axes([.25,.69,.2,.2])\nplt.gca().yaxis.tick_right()\nplt.plot(VG,fet.ID(VD,VG)/fet.W)\nl=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2)[0]\nplt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())\nplt.xticks([0,.5])\nplt.yticks([250,500])\nplt.title(\"$\\mathrm{Lin\\;IV}$\");\n\nplt.tight_layout()\n\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"HyperFET.svg\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"HyperFET.png\"))\n", "/home/sam/miniconda3/lib/python3.5/site-packages/matplotlib/figure.py:1744: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not \"\n" ], [ "VD=np.array(VDD)\nVG=np.linspace(0,.5,500)\n\nplt.figure(figsize=(8,4.5))\n\n\nfor LWT in [\"20nm x (9nm)^2\", \"20nm x (10nm)^2\", \"20nm x (11nm)^2\"]:\n L,WT=[si(x) for x in 
LWT.split(\"x\")]\n W=np.sqrt(WT);T=np.sqrt(WT)\n \n vo2=VO2(L=L,W=W,T=T,**vo2_params)\n \n #plt.subplot(121)\n plt.plot(VG,fet.ID(VD,VG)/fet.W,'k')\n hf=HyperFET(fet,vo2)\n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,label=LWT.replace(\"^2\",\"$^2$\"),linewidth=2)[0]\n plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())\n \nplt.yscale('log')\nplt.xlabel(r\"$V_G\\;\\mathrm{[V]}$\",fontsize=20,labelpad=-10)\nplt.ylabel(r\"$I_D\\;\\mathrm{[\\mu A/\\mu m]}$\")\nplt.legend(loc='upper left',fontsize=14)\n\n\nplt.axes([.17,.35,.3,.25])\nplt.plot(VG,fet.ID(VD,VG)/fet.W,'k')\nfor LWT in [\"20nm x 15nm x 5nm\", \"20nm x 20nm x 5nm\", \"20nm x 25nm x 5nm\"]:\n L,W,T=[si(x) for x in LWT.split(\"x\")]\n vo2=VO2(L=L,W=W,T=T,**vo2_params)\n \n hf=HyperFET(fet.shifted(appr.shift(HyperFET(fet,vo2),VDD)),vo2)\n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2)[0]\n plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())\n \nplt.yscale('log')\nplt.yticks([])\nplt.xticks([])\n \nplt.text(.3,.1,\"$\\mathrm{Shifted}$\",fontsize=20)\n\nplt.axes([.7,.28,.2,.25])\nplt.gca().yaxis.tick_right()\nIll_appr=[]\nIll_extr=[]\nWs=np.linspace(7,11,10)\nfor W in Ws:\n L,W,T=20e-9,W*1e-9,W*1e-9\n vo2=VO2(L=L,W=W,T=T,**vo2_params)\n hf=HyperFET(fet,vo2)\n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n \n Ill_appr+=[appr.Ill(hf,VDD)]\n Ill_extr+=[extr.left(VG,If,Ib)[1]]\n\nplt.plot(Ws,np.array(Ill_appr)*1e3/fet.W,'k')\nplt.plot(Ws,np.array(Ill_extr)*1e3/fet.W,'ko')\nplt.yticks([25,50])\nplt.xticks([7,11])\nplt.xlabel(r'$\\sqrt{wt}\\ \\mathrm{[nm]}$',labelpad=-10)\nplt.tick_params(labelsize=12)\nplt.title(r'$I_{ll}\\ \\mathrm{[mA/\\mu m]}$',fontsize=16)\n#plt.yscale('log')\n\n\ntighten()\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"HFvsA.svg\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"HFvsA.png\"))", "_____no_output_____" ], [ "VD=np.array(VDD)\nVG=np.linspace(0,.5,500)\n\nplt.figure(figsize=(8,4.5))\nplt.plot(VG,fet.ID(VD,VG)/fet.W,'k')\n\nfor LWT in [\"15nm x (10nm)^2\", \"20nm x (10nm)^2\", \"25nm x (10nm)^2\"]:\n L,WT=[si(x) for x in LWT.split(\"x\")]\n W=np.sqrt(WT);T=np.sqrt(WT);\n vo2=VO2(L=L,W=W,T=T,**vo2_params)\n hf=HyperFET(fet,vo2)\n \n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,label=LWT.replace(\"^2\",\"$^2$\"),linewidth=2)[0]\n plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())\n \nplt.yscale('log')\nplt.xlabel(r\"$V_G\\;\\mathrm{[V]}$\",fontsize=20,labelpad=-10)\nplt.ylabel(r\"$I_D\\;\\mathrm{[\\mu A/\\mu m]}$\")\nplt.legend(loc='lower right',fontsize=14) \n\nplt.axes([.17,.65,.3,.25])\nplt.plot(VG,fet.ID(VD,VG)/fet.W,'k')\nfor LWT in [\"15nm x 20nm x 5nm\", \"20nm x 20nm x 5nm\", \"25nm x 20nm x 5nm\"]:\n L,W,T=[si(x) for x in LWT.split(\"x\")]\n vo2=VO2(L=L,W=W,T=T,**vo2_params)\n \n hf=HyperFET(fet.shifted(appr.shift(HyperFET(fet,vo2),VDD)),vo2)\n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2)[0]\n plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())\n \nplt.yscale('log')\nplt.yticks([])\nplt.xticks([])\nplt.text(.3,.1,\"$\\mathrm{Shifted}$\",fontsize=20)\n\nplt.axes([.2,.26,.2,.25])\nplt.gca().yaxis.tick_right()\nVright_appr=[]\nVright_extr=[]\nLs=np.linspace(10,25,10)\nfor L in Ls:\n 
L,W,T=L*1e-9,20e-9,5e-9\n vo2=VO2(L=L,W=W,T=T,**vo2_params)\n hf=HyperFET(fet,vo2)\n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n \n r=appr.Vright(hf,VDD)\n Vright_appr+=[r if r>appr.Vleft(hf,VDD) else np.NaN]\n r=extr.right(VG,If,Ib)\n Vright_extr+=[r[0] if not np.isnan(r[1]) else np.NaN]\n\nplt.plot(Ls,np.array(Vright_appr),'k')\nplt.plot(Ls,np.array(Vright_extr),'ko')\nplt.yticks([.2,.5])\nplt.xticks([10,25])\nplt.xlabel(r'$l\\ \\mathrm{[nm]}$',labelpad=-10)\nplt.tick_params(labelsize=12)\nplt.title(r'$V_\\mathrm{r}\\ \\mathrm{[V]}$',fontsize=16)\n\nunst=\\\n max([l for l,v in zip(Ls,Vright_extr) if np.isnan(v)])\nplt.gca().add_patch(patches.Rectangle(\n (plt.xlim()[0],plt.ylim()[0]),\n unst-plt.xlim()[0],\n plt.ylim()[1]-plt.ylim()[0],\n hatch='/',edgecolor='red',fill=None))\n\n#plt.gca().yaxis.set_label_position(\"right\")\n#plt.yscale('log')\n\ntighten()\n\n\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"HFvsl.svg\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"HFvsl.png\"))", "_____no_output_____" ], [ "out=HTML()\nvo2=None\nfet2=None\nhf=None\nhf2=None\nVTm,VTp=[None]*2\n@interact(L=FloatSlider(value=20,min=1,max=45,step=1,continuous_update=False),\n W=FloatSlider(value=10,min=.5,max=30,step=.5,continuous_update=False),\n T=FloatSlider(value=10,min=.5,max=20,step=.5,continuous_update=False))\ndef show_hf(L,W,T):\n global vo2, fet2,VTm,VTp, hf, hf2\n plt.figure(figsize=(12,6))\n \n vo2=VO2(L=L*1e-9,W=W*1e-9,T=T*1e-9,**vo2_params)\n hf=HyperFET(fet,vo2)\n shift=appr.shift(hf,VDD)\n fet2=fet.shifted(shift)\n hf2=HyperFET(fet2,vo2)\n \n VD=np.array(VDD)\n VG=np.linspace(0,VDD,500)\n\n plt.subplot(131)\n I=np.ravel(fet.ID(VD=VD,VG=VG))\n plt.plot(VG,I/fet.W,'r')\n \n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n plt.plot(VG,If/fet.W,'b')\n plt.plot(VG,Ib/fet.W,'g')\n \n \n \n plt.ylim(1e-3,1e3)\n plt.xlabel(\"$V_{GS}\\;\\mathrm{[V]}$\")\n plt.ylabel(\"$I/W\\;\\mathrm{[mA/mm]}$\")\n ylog()\n \n plt.subplot(132) \n plt.plot(VG,I/fet2.W,'r')\n If2,Ib2=[np.ravel(i) for i in hf2.I_double(VD=VD,VG=VG)]\n plt.plot(VG,If2/fet2.W,'b')\n plt.plot(VG,Ib2/fet2.W,'g')\n \n \n \n plt.ylim(1e-3,1e3)\n plt.yticks([])\n ylog()\n \n out.value=\"Approx shift is {:.2g}mV, which equates the IOFF within {:.2g}%.\"\\\n \" This is expected to increase ION by {:.2g}% and actually increases it by {:.2g}%\"\\\n .format(shift*1e3,(If2[0]-I[0])/I[0]*100,appr.shiftedgain(hf,VDD)*100-100,(If2[-1]-I[-1])/I[-1]*100)\n \n _,_,VTm,VTp=appr.shorthands(hf,VDD,None,\"VTm\",\"VTp\",gridinput=False)\n \ndisplay(out)", "_____no_output_____" ], [ "appr.optsize(fet,VDD,Ml=1,Mr=0,**vo2_params)", "l1 32.471600299611616\nl2 52.3828922105\nl 32.471600299611616\nl2 68.0085151957\nl 32.471600299611616\nw 75.5424783373\n" ], [ "from itertools import product", "_____no_output_____" ], [ "ion0=fet.ID(VDD,VDD)\nioff0=fet.ID(VDD,0)\n\ndef sweep(L,WT):\n ION_extr=[]\n sg_appr=[]\n Ml=[]\n Mr=[]\n Mimt=[]\n for Li,WiTi in product(L,WT):\n Ti=Wi=np.sqrt(WiTi)\n vo2=VO2(L=Li*1e-9,W=Wi*1e-9,T=Ti*1e-9,**vo2_params)\n hf=HyperFET(fet,vo2)\n hf2=HyperFET(fet.shifted(appr.shift(hf,VD)),vo2)\n IONi=hf2.I(VD=VDD,VG=VDD,direc=Direction.FORWARD)\n #print(np.ravel(IONi))\n #print(IONu,IONl)\n #print(Li,extr.boundaries_nonhysteretic(hf2,VDD))\n if extr.boundaries_nonhysteretic(hf2,VDD) and (hf2.I(VD=VDD,VG=0,direc=Direction.FORWARD)-ioff0)/ioff0<.1:\n ION_extr+=[IONi]\n else:\n ION_extr+=[np.NaN]\n Ml+=[appr.Ill(hf2,VDD)/ioff0-1]\n Mr+=[(VDD-appr.Vright(hf2,VDD))/fet.Vth]\n Mimt+=[VDD-hf2.pcr.V_IMT-fet.Vth/2]\n if Ml[-1]>0 and 
Mr[-1]>0 and Mimt[-1]>0:\n sg_appr+=[appr.shiftedgain(hf,VDD)]\n else:\n sg_appr+=[np.NaN]\n\n ION_extr=np.array(ION_extr)\n ION_appr=np.array(sg_appr)*ion0\n\n return ION_extr,ION_appr,Ml,Mr,Mimt", "_____no_output_____" ], [ "plt.figure(figsize=(6,6))\nmain=plt.gca()\n#marg=plt.axes([.57,.17,.3,.26])\nmarg=main.twinx()\nL=np.linspace(0.1,40.0,15)\n\n#W=10\n#ION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2]) # sqrt(18*5)\n#plt.sca(main)\n#lp=plt.plot(L,ION_appr/ion0,linewidth=2,label=\"$\\sqrt{{wt}}={:g}\\mathrm{{nm}}$\".format(W))[0]\n#plt.plot(L,ION_extr/ion0,'o',color=lp.get_color())\n#plt.sca(marg)\n#plt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)\n##plt.plot(L,Mr,'--',color=lp.get_color())\n##plt.plot(L,Mimt,'-.',color=lp.get_color())\n\nW=9\nION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2]) # sqrt(15*5)\nplt.sca(main)\nlp=plt.plot(L,ION_appr/ion0,linewidth=2,label=\"$\\sqrt{{wt}}={:g}\\mathrm{{nm}}$\".format(W))[0]\nplt.plot(L,ION_extr/ion0,'o',color=lp.get_color())\nplt.sca(marg)\nplt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)\n#plt.plot(L,Mr,'--',color=lp.get_color())\n#plt.plot(L,Mimt,'-.',color=lp.get_color())\n\nW=8\nION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2])\nplt.sca(main)\nlp=plt.plot(L,ION_appr/ion0,linewidth=2,label=\"$\\sqrt{{wt}}={:g}\\mathrm{{nm}}$\".format(W))[0]\nplt.plot(L,ION_extr/ion0,'o',color=lp.get_color())\nplt.sca(marg)\nplt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)\n#plt.plot(L,Mr,'--',color=lp.get_color())\n#plt.plot(L,Mimt,'-.',color=lp.get_color())\n\nW=7\nION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2])\nplt.sca(main)\nlp=plt.plot(L,ION_appr/ion0,linewidth=2,label=\"$\\sqrt{{wt}}={:g}\\mathrm{{nm}}$\".format(W))[0]\nplt.plot(L,ION_extr/ion0,'o',color=lp.get_color())\nplt.sca(marg)\nplt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)\n#plt.plot(L,Mr,'--',color=lp.get_color())\n#plt.plot(L,Mimt,'-.',color=lp.get_color())\n\n\nplt.sca(main)\nplt.ylabel(r\"$I_\\mathrm{ON,hyper}/I_\\mathrm{ON,orig}$\",fontsize=22)\nplt.xlabel(r\"$l\\mathrm{\\ [nm]}$\",fontsize=20)\nplt.ylim(1)\nplt.xlim(0,40)\nhandles1, labels1 = main.get_legend_handles_labels()\nplt.sca(marg)\nplt.legend(handles1,labels1,loc=\"center right\",bbox_to_anchor=(1,.25),fontsize=18)\nplt.ylim(0,5)\nplt.tick_params(labelsize=14)\nplt.gca().xaxis.set_major_locator(plticker.MultipleLocator(10))\n\nmintb=(VDD-fet.Vth/2)/(vo2_params['J_IMT']*vo2_params['rho_i'])\nplt.gca().add_patch(patches.Rectangle(\n (mintb*1e9,plt.ylim()[0]),\n plt.xlim()[1]-mintb*1e9,\n plt.ylim()[1]-plt.ylim()[0],\n hatch='/',edgecolor='k',fill=None))\nplt.ylabel(\"$\\mathrm{Safety\\ Margin\\ } M_r$\",fontsize=22)\n#plt.title(\"$\\mathrm{Safety\\ Margin}$\",fontsize=18)\n\nplt.tight_layout()\n\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"GainvsL.eps\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"GainvsL.png\"))", "_____no_output_____" ], [ 
"plt.figure(figsize=(6,6))\nmain=plt.gca()\n#marg=plt.axes([.57,.17,.3,.26])\nmarg=main.twinx()\nW=np.sqrt(np.linspace(13**2,5**2,15))\n\nL=15\nION_extr,ION_appr,Ml,Mr,Mimt=sweep([L],W**2)\nplt.sca(main)\nlp=plt.plot(W,ION_appr/ion0,linewidth=2,label=\"$l={:g}\\mathrm{{nm}}$\".format(L))[0]\nplt.plot(W,ION_extr/ion0,'o',color=lp.get_color())\nplt.sca(marg)\nplt.plot(W,Ml,'--',color=lp.get_color(),linewidth=2)\n#plt.plot(L,Mr,'--',color=lp.get_color())\n#plt.plot(L,Mimt,'-.',color=lp.get_color())\n\nL=20\nION_extr,ION_appr,Ml,Mr,Mimt=sweep([L],W**2)\nplt.sca(main)\nlp=plt.plot(W,ION_appr/ion0,linewidth=2,label=\"$l={:g}\\mathrm{{nm}}$\".format(L))[0]\nplt.plot(W,ION_extr/ion0,'o',color=lp.get_color())\nplt.sca(marg)\nplt.plot(W,Ml,'--',color=lp.get_color(),linewidth=2)\n#plt.plot(L,Mr,'--',color=lp.get_color())\n#plt.plot(L,Mimt,'-.',color=lp.get_color())\n\nL=25\nION_extr,ION_appr,Ml,Mr,Mimt=sweep([L],W**2)\nplt.sca(main)\nlp=plt.plot(W,ION_appr/ion0,linewidth=2,label=\"$l={:g}\\mathrm{{nm}}$\".format(L))[0]\nplt.plot(W,ION_extr/ion0,'o',color=lp.get_color())\nplt.sca(marg)\nplt.plot(W,Ml,'--',color=lp.get_color(),linewidth=2)\n#plt.plot(L,Mr,'--',color=lp.get_color())\n#plt.plot(L,Mimt,'-.',color=lp.get_color())\n\nL=30\nION_extr,ION_appr,Ml,Mr,Mimt=sweep([L],W**2)\nplt.sca(main)\nlp=plt.plot(W,ION_appr/ion0,linewidth=2,label=\"$l={:g}\\mathrm{{nm}}$\".format(L))[0]\nplt.plot(W,ION_extr/ion0,'o',color=lp.get_color())\nplt.sca(marg)\nplt.plot(W,Ml,'--',color=lp.get_color(),linewidth=2)\n#plt.plot(L,Mr,'--',color=lp.get_color())\n#plt.plot(L,Mimt,'-.',color=lp.get_color())\n\n\nplt.sca(main)\nplt.ylabel(r\"$I_\\mathrm{ON,hyper}/I_\\mathrm{ON,orig}$\",fontsize=22)\nplt.xlabel(r\"$\\sqrt{wt}\\mathrm{\\ [nm]}$\",fontsize=20)\nplt.ylim(1)\nplt.xlim(6,13)\nhandles1, labels1 = main.get_legend_handles_labels()\nplt.sca(marg)\nplt.legend(handles1,labels1,loc=\"upper right\",fontsize=18)#,bbox_to_anchor=(1,.25)\nplt.ylim(0,5)\nplt.tick_params(labelsize=14)\n#plt.gca().xaxis.set_major_locator(plticker.MultipleLocator(10))\n\nplt.ylabel(\"$\\mathrm{Safety\\ Margin\\ } M_l$\",fontsize=22)\n#plt.title(\"$\\mathrm{Safety\\ Margin}$\",fontsize=18)\n\nplt.tight_layout()\n\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"GainvsW.eps\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"GainvsW.png\"))", "_____no_output_____" ], [ "out=HTML()\ndef show_hf(L,W,T):\n global vo2, fet2,VTm,VTp, hf, hf2\n plt.figure(figsize=(6,6))\n \n vo2=VO2(L=L,W=W,T=T,**vo2_params)\n hf=HyperFET(fet,vo2)\n shift=appr.shift(hf,VDD)\n fet2=fet.shifted(shift)\n hf2=HyperFET(fet2,vo2)\n \n VD=np.array(VDD)\n VG=np.linspace(0,VDD,500)\n\n #plt.subplot(131)\n I=np.ravel(fet.ID(VD=VD,VG=VG))\n plt.plot(VG,I/fet.W,'r',label='transistor')\n \n If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]\n plt.plot(VG,If/fet.W,'b',label='hyperfet',linewidth=2)\n plt.plot(VG,Ib/fet.W,'b',linewidth=2)\n \n \n \n plt.ylim(1e-3,1e3)\n plt.xlabel(\"$V_{GS}\\;\\mathrm{[V]}$\")\n plt.ylabel(\"$I/W\\;\\mathrm{[mA/mm]}$\")\n ylog()\n \n #plt.subplot(132) \n plt.plot(VG,I/fet2.W,'r')\n If2,Ib2=[np.ravel(i) for i in hf2.I_double(VD=VD,VG=VG)]\n plt.plot(VG,If2/fet2.W,'g',label='shifted hyperfet',linewidth=2)\n plt.plot(VG,Ib2/fet2.W,'g',linewidth=2)\n \n #ylog()\n #plt.ylim(1e-3,1e3)\n #plt.yticks([])\n \n plt.legend(loc='lower right',fontsize=14)\n \n Ill=extr.left(VG,If,Ib)[1]\n out.value=\"Approx shift is {:.2g}mV, which equates the IOFF within {:.2g}%.\"\\\n \" This is expected to increase ION by {:.2g}% and actually increases it by {:.2g}%.\"\\\n \" Ml 
effective is {:.3g}.\"\\\n .format(shift*1e3,(If2[0]-I[0])/I[0]*100,appr.shiftedgain(hf,VDD)*100-100,(If2[-1]-I[-1])/I[-1]*100,Ill/If2[0])\n \n _,_,VTm,VTp=appr.shorthands(hf,VDD,None,\"VTm\",\"VTp\",gridinput=False)\n\nshow_hf(*appr.optsize(fet,VDD,Ml=1.5,Mr=2,**vo2_params,verbose=False))\ndisplay(out)\n\nplt.tight_layout()\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"opthf.eps\"))\nplt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,\"opthf.png\"))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a49f08984213a2c340cfd18c11387be8ab7900c
64,862
ipynb
Jupyter Notebook
One code for all logistic models.ipynb
sudheer1710/Remaining-Files
f6d37a213858ffe998c5d7895dba586cd2302428
[ "Apache-2.0" ]
null
null
null
One code for all logistic models.ipynb
sudheer1710/Remaining-Files
f6d37a213858ffe998c5d7895dba586cd2302428
[ "Apache-2.0" ]
null
null
null
One code for all logistic models.ipynb
sudheer1710/Remaining-Files
f6d37a213858ffe998c5d7895dba586cd2302428
[ "Apache-2.0" ]
null
null
null
59.670653
14,052
0.669036
[ [ [ "\"\"\"Which Classifier is Should I Choose?\nThis is one of the most import questions to ask when approaching a machine learning\nproblem.I find it easier to just test them all at once. \"\"\"", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndef warn(*args, **kwargs): pass\nimport warnings\nwarnings.warn = warn\n\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\ntrain = pd.read_csv('train.csv')\ntest = pd.read_csv('test.csv')", "_____no_output_____" ], [ "import sys\n\n\nprint ('Total Number of arguments:', len(sys.argv), 'arguments.')\nprint ('Argument List:', str(sys.argv))\nprint (sys.argv[0])\n", "Total Number of arguments: 3 arguments.\nArgument List: ['C:\\\\Anacondanew\\\\lib\\\\site-packages\\\\ipykernel_launcher.py', '-f', 'C:\\\\Users\\\\V Sudheer Kumar\\\\AppData\\\\Roaming\\\\jupyter\\\\runtime\\\\kernel-47c43288-bd21-465b-bee1-2b8217231e5d.json']\nC:\\Anacondanew\\lib\\site-packages\\ipykernel_launcher.py\n" ], [ "#Data Preparation", "_____no_output_____" ], [ "# Swiss army knife function to organize the data\ndef encode(train, test):\n le = LabelEncoder().fit(train.species) \n labels = le.transform(train.species) # encode species strings\n classes = list(le.classes_) # save column names for submission\n test_ids = test.id # save test ids for submission\n \n train = train.drop(['species', 'id'], axis=1) \n test = test.drop(['id'], axis=1)\n \n return train, labels, test, test_ids, classes\n\ntrain, labels, test, test_ids, classes = encode(train, test)\ntrain.shape\n", "_____no_output_____" ], [ "train.head()", "_____no_output_____" ], [ "len(labels)", "_____no_output_____" ], [ "labels", "_____no_output_____" ], [ "test.head()", "_____no_output_____" ], [ "len(test_ids)", "_____no_output_____" ], [ "test_ids", "_____no_output_____" ], [ "classes", "_____no_output_____" ], [ "len(classes)", "_____no_output_____" ], [ "print(labels.shape)\nprint(test.shape)\nprint(test_ids.shape)\nprint(classes)\n", "(990,)\n(594, 192)\n(594,)\n['Acer_Capillipes', 'Acer_Circinatum', 'Acer_Mono', 'Acer_Opalus', 'Acer_Palmatum', 'Acer_Pictum', 'Acer_Platanoids', 'Acer_Rubrum', 'Acer_Rufinerve', 'Acer_Saccharinum', 'Alnus_Cordata', 'Alnus_Maximowiczii', 'Alnus_Rubra', 'Alnus_Sieboldiana', 'Alnus_Viridis', 'Arundinaria_Simonii', 'Betula_Austrosinensis', 'Betula_Pendula', 'Callicarpa_Bodinieri', 'Castanea_Sativa', 'Celtis_Koraiensis', 'Cercis_Siliquastrum', 'Cornus_Chinensis', 'Cornus_Controversa', 'Cornus_Macrophylla', 'Cotinus_Coggygria', 'Crataegus_Monogyna', 'Cytisus_Battandieri', 'Eucalyptus_Glaucescens', 'Eucalyptus_Neglecta', 'Eucalyptus_Urnigera', 'Fagus_Sylvatica', 'Ginkgo_Biloba', 'Ilex_Aquifolium', 'Ilex_Cornuta', 'Liquidambar_Styraciflua', 'Liriodendron_Tulipifera', 'Lithocarpus_Cleistocarpus', 'Lithocarpus_Edulis', 'Magnolia_Heptapeta', 'Magnolia_Salicifolia', 'Morus_Nigra', 'Olea_Europaea', 'Phildelphus', 'Populus_Adenopoda', 'Populus_Grandidentata', 'Populus_Nigra', 'Prunus_Avium', 'Prunus_X_Shmittii', 'Pterocarya_Stenoptera', 'Quercus_Afares', 'Quercus_Agrifolia', 'Quercus_Alnifolia', 'Quercus_Brantii', 'Quercus_Canariensis', 'Quercus_Castaneifolia', 'Quercus_Cerris', 'Quercus_Chrysolepis', 'Quercus_Coccifera', 'Quercus_Coccinea', 'Quercus_Crassifolia', 'Quercus_Crassipes', 'Quercus_Dolicholepis', 'Quercus_Ellipsoidalis', 'Quercus_Greggii', 'Quercus_Hartwissiana', 'Quercus_Ilex', 'Quercus_Imbricaria', 'Quercus_Infectoria_sub', 
'Quercus_Kewensis', 'Quercus_Nigra', 'Quercus_Palustris', 'Quercus_Phellos', 'Quercus_Phillyraeoides', 'Quercus_Pontica', 'Quercus_Pubescens', 'Quercus_Pyrenaica', 'Quercus_Rhysophylla', 'Quercus_Rubra', 'Quercus_Semecarpifolia', 'Quercus_Shumardii', 'Quercus_Suber', 'Quercus_Texana', 'Quercus_Trojana', 'Quercus_Variabilis', 'Quercus_Vulcanica', 'Quercus_x_Hispanica', 'Quercus_x_Turneri', 'Rhododendron_x_Russellianum', 'Salix_Fragilis', 'Salix_Intergra', 'Sorbus_Aria', 'Tilia_Oliveri', 'Tilia_Platyphyllos', 'Tilia_Tomentosa', 'Ulmus_Bergmanniana', 'Viburnum_Tinus', 'Viburnum_x_Rhytidophylloides', 'Zelkova_Serrata']\n" ] ], [ [ "\"\"\"Stratified Train/Test Split - Stratification is necessary for this dataset because\nthere is a relatively large number of classes (100 classes for 990 samples). This \nwill ensure we have all classes represented in both the train and test indices\"\"\"", "_____no_output_____" ] ], [ [ "sss = StratifiedShuffleSplit( n_splits=10, test_size=0.3, random_state=23)\nprint(sss.get_n_splits(train,labels))\nfor train_index, test_index in sss.split(train,labels):\n X_train, X_test = train.values[train_index], train.values[test_index]\n y_train, y_test = labels[train_index], labels[test_index]", "10\n" ] ], [ [ "\"\"\"Sklearn Classifiers\nSimply looping through 4 classifiers and printing the results. Obviously, these \nwill perform much better after tuning their hyperparameters, but this gives you\na decent ballpark idea.\"\"\"", "_____no_output_____" ] ], [ [ "from sklearn.metrics import accuracy_score, log_loss\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC, LinearSVC, NuSVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import GaussianNB", "_____no_output_____" ], [ "classifiers = [\n KNeighborsClassifier(3),\n SVC(kernel=\"rbf\", C=0.025, probability=True),\n NuSVC(probability=True),\n DecisionTreeClassifier(),\n RandomForestClassifier(n_estimators=50, random_state=11),\n GaussianNB()]", "_____no_output_____" ], [ "# Logging for Visual Comparison\nlog_cols=[\"Classifier\", \"Accuracy\", \"Log Loss\"]\nlog = pd.DataFrame(columns=log_cols)\n\nfor clf in classifiers:\n clf.fit(X_train, y_train)\n name = clf.__class__.__name__\n \n print(\"=\"*30)\n print(name)\n \n print('****Results****')\n train_predictions = clf.predict(X_test)\n acc = accuracy_score(y_test, train_predictions)\n print(\"Accuracy: {:.4%}\".format(acc))\n \n train_predictions = clf.predict_proba(X_test)\n ll = log_loss(y_test, train_predictions)\n print(\"Log Loss: {}\".format(ll))\n \n log_entry = pd.DataFrame([[name, acc*100, ll]], columns=log_cols)\n log = log.append(log_entry)\n \nprint(\"=\"*30)", "==============================\nKNeighborsClassifier\n****Results****\nAccuracy: 88.2155%\nLog Loss: 1.3838564185042657\n==============================\nSVC\n****Results****\nAccuracy: 84.8485%\nLog Loss: 4.626924675330018\n==============================\nNuSVC\n****Results****\nAccuracy: 92.5926%\nLog Loss: 2.53597224166933\n==============================\nDecisionTreeClassifier\n****Results****\nAccuracy: 63.2997%\nLog Loss: 12.675847229108733\n==============================\nRandomForestClassifier\n****Results****\nAccuracy: 98.6532%\nLog Loss: 0.8199742530443378\n==============================\nGaussianNB\n****Results****\nAccuracy: 51.1785%\nLog Loss: 16.826930438423148\n==============================\n" ], [ "sns.set_color_codes(\"muted\")\nsns.barplot(x='Accuracy', y='Classifier', 
data=log, color=\"b\")\n\nplt.xlabel('Accuracy %')\nplt.title('Classifier Accuracy')\nplt.show()\n\nsns.set_color_codes(\"muted\")\nsns.barplot(x='Log Loss', y='Classifier', data=log, color=\"g\")\n\nplt.xlabel('Log Loss')\nplt.title('Classifier Log Loss')\nplt.show()", "_____no_output_____" ], [ "#After this choose the classifier with the best accuracy for future predictions", "_____no_output_____" ], [ "import os\nos.getcwd()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
4a49fcf951200eabc41aa82d57f07f63995ea14f
1,925
ipynb
Jupyter Notebook
notebooks/sir/intro.ipynb
epirecipes/epiRecipes
56ce40238692724f2ce4d37fbb068195462fcda3
[ "MIT" ]
14
2018-07-17T19:23:54.000Z
2021-08-16T02:02:04.000Z
notebooks/sir/intro.ipynb
epirecipes/epiRecipes
56ce40238692724f2ce4d37fbb068195462fcda3
[ "MIT" ]
null
null
null
notebooks/sir/intro.ipynb
epirecipes/epiRecipes
56ce40238692724f2ce4d37fbb068195462fcda3
[ "MIT" ]
7
2018-08-02T14:07:51.000Z
2021-03-23T02:10:11.000Z
25.666667
419
0.562597
[ [ [ "## SIR model", "_____no_output_____" ], [ "*Author*: Simon Frost\n\n*Date*: 2018-07-12", "_____no_output_____" ], [ "### Description\n\nThe susceptible-infected-recovered (SIR) model in a closed population was proposed by Kermack and McKendrick as a special case of a more general model, and forms the framework of many compartmental models. Susceptible individuals, $S$, are infected by infected individuals, $I$, at a per-capita rate $\\beta I$, and infected individuals recover at a per-capita rate $\\gamma$ to become recovered individuals, $R$.", "_____no_output_____" ], [ "### Equations", "_____no_output_____" ], [ "$$\n\\frac{dS(t)}{dt} = -\\beta S(t) I(t)\\\\\n\\frac{dI(t)}{dt} = \\beta S(t) I(t)- \\gamma I(t)\\\\\n\\frac{dR(t)}{dt} = \\gamma I(t)\n$$", "_____no_output_____" ], [ "### References\n\n1. [Kermack WO, McKendrick AG (August 1, 1927). \"A Contribution to the Mathematical Theory of Epidemics\". Proceedings of the Royal Society A. 115 (772): 700–721](https://doi.org/10.1098/rspa.1927.0118)\n1. [https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology)", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a4a009338186916039224c26c456629ab7cd306
260,837
ipynb
Jupyter Notebook
Commodity_Indexes_On_a_Page.ipynb
TMQR/SG_Discovery
128212b0f07a8dbcb4bb7293f1636f5ecb8fd08d
[ "MIT" ]
null
null
null
Commodity_Indexes_On_a_Page.ipynb
TMQR/SG_Discovery
128212b0f07a8dbcb4bb7293f1636f5ecb8fd08d
[ "MIT" ]
1
2019-01-10T17:44:12.000Z
2019-01-10T17:44:12.000Z
Commodity_Indexes_On_a_Page.ipynb
TMQR/SG_Discovery
128212b0f07a8dbcb4bb7293f1636f5ecb8fd08d
[ "MIT" ]
null
null
null
995.561069
133,192
0.95335
[ [ [ "# Import key libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\n\nimport scipy\n\n\nimport bt\nimport ffn\nimport jhtalib as jhta\nimport datetime \n\n# import matplotlib as plt \nimport seaborn as sns\nsns.set()\n\n\nimport datetime\nimport matplotlib.pyplot as plt\n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Import the datareader with fix", "_____no_output_____" ] ], [ [ "start = datetime.datetime(2005, 1, 1)\nend = datetime.datetime(2019, 1, 27)\n\nfrom pandas_datareader import data as pdr\nimport fix_yahoo_finance as fyf\nfyf.pdr_override()\npd.core.common.is_list_like = pd.api.types.is_list_like\n", "_____no_output_____" ] ], [ [ "# Bring In some Commodity ETF data linked to the 3 main composition choices:\n1. DBC - Invesco DB Commodity Index Tracking Fund \n\nNet Assets: $2.49 billion\n\nDBC\n\nhttps://www.invesco.com/portal/site/us/investors/etfs/product-detail?productId=dbc\n\nDBC is the elephant in the commodities room – by far the largest ETF in terms of assets under management. It tracks an index of 14 commodities using futures contracts for exposure. It tackles the weighting problem creatively, capping energy at 60% to allow for more exposure to non-consumables such as gold and silver. The fund's large size also gives it excellent liquidity.\n\nsource :https://www.investopedia.com/investing/commodities-etfs/\n\n\n2. iPath Dow Jones-UBS Commodity ETN <<<<-------- this is the current incarnation of AIG Comm\n\nNet Assets: $810.0 M\n\nDJP\n\nhttp://www.ipathetn.com/US/16/en/details.app?instrumentId=1193\n\nThe Bloomberg Commodity Index (BCOM) is a broadly diversified commodity price index distributed by Bloomberg Indexes. The index was originally launched in 1998 as the Dow Jones-AIG Commodity Index (DJ-AIGCI) and renamed to Dow Jones-UBS Commodity Index (DJ-UBSCI) in 2009, when UBS acquired the index from AIG. On July 1, 2014, the index was rebranded under its current name.\n\nThe BCOM tracks prices of futures contracts on physical commodities on the commodity markets. The index is designed to minimize concentration in any one commodity or sector. It currently has 22 commodity futures in seven sectors. No one commodity can compose less than 2% or more than 15% of the index, and no sector can represent more than 33% of the index (as of the annual weightings of the components). The weightings for each commodity included in BCOM are calculated in accordance with rules that ensure that the relative proportion of each of the underlying individual commodities reflects its global economic significance and market liquidity. Annual rebalancing and reweighting ensure that diversity is maintained over time\n\nsource : https://en.wikipedia.org/wiki/Bloomberg_Commodity_Index\n\n3. iShares S&P GSCI Commodity-Indexed Trust\n\nNet Assets: $1.32 billion\n\nGSG\n\nThe S&P GSCI contains as many commodities as possible, with rules excluding certain commodities to maintain liquidity and investability in the underlying futures markets. The index currently comprises 24 commodities from all commodity sectors - energy products, industrial metals, agricultural products, livestock products and precious metals. The wide range of constituent commodities provides the S&P GSCI with a high level of diversification, across subsectors and within each subsector. 
This diversity mutes the impact of highly idiosyncratic events, which have large implications for the individual commodity markets, but are minimised when aggregated to the level of the S&P GSCI.\n\nThe diversity of the S&P GSCI's constituent commodities, along with their economic weighting, allows the index to respond in a stable way to world economic growth, even as the composition of global growth changes across time. When industrialised economies dominate world growth, the metals sector of the GSCI generally responds more than the agricultural components. Conversely, when emerging markets dominate world growth, petroleum-based commodities and agricultural commodities tend to be more responsive.\n\nThe S&P GSCI is a world-production weighted index that is based on the average quantity of production of each commodity in the index, over the last five years of available data. This allows the S&P GSCI to be a measure of investment performance as well as serve as an economic indicator.\n\nProduction weighting is a quintessential attribute for the index to be a measure of investment performance. This is achieved by assigning a weight to each asset based on the amount of capital dedicated to holding that asset, just as market capitalisation is used to assign weights to components of equity indices. Since the appropriate weight assigned to each commodity is in proportion to the amount of that commodity flowing through the economy, the index is also an economic indicator.\n\nsource: https://en.wikipedia.org/wiki/S%26P_GSCI\n\nFrom an investment point of view, the index designers are attempting to represent exposure to commodities, but commodities have not proven to have an inherent return, so concentration rules have been added to improve the return profile, without a great deal of success.\n\nTo capitalize on commodity markets, a strategy must be at liberty to go long as well as short, and to weight the exposure by metrics other than world production or some other \"economic\" metric.\n", "_____no_output_____" ] ], [ [ "DBC = pdr.get_data_yahoo('DBC',start= start)\nDJP = pdr.get_data_yahoo('DJP',start= start)\nGSG = pdr.get_data_yahoo('GSG',start= start)\n\n", "[*********************100%***********************]  1 of 1 downloaded\n[*********************100%***********************]  1 of 1 downloaded\n[*********************100%***********************]  1 of 1 downloaded\n" ], [ "ETFs = bt.merge(DBC['Adj Close'], DJP['Adj Close'], GSG['Adj Close'])\n# Flat list (not nested) so the columns are plain labels rather than a MultiIndex\nETFs.columns = ['Invesco DB Commodity Index Tracking Fund',\n                'iPath Dow Jones-UBS Commodity ETN',\n                'iShares S&P GSCI Commodity-Indexed Trust']\n", "_____no_output_____" ], [ "ETFs.plot(figsize=(15,10))", "_____no_output_____" ], [ "ETFs_re = ETFs.dropna()\nETFs_re = ffn.rebase(ETFs_re)\nETFs_re.plot(figsize=(15,10), fontsize=22, title='$100 Invested in different Commodity Indexes')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a4a0865f933649fb58df439a7adcb926f29ac17
5,695
ipynb
Jupyter Notebook
2 - connectors/bigquery_quickstart.ipynb
ydataai/Blog
7994727deb1e1ca35e06c6fe4680482920b95855
[ "MIT" ]
16
2020-05-06T14:34:43.000Z
2022-03-27T17:56:06.000Z
2 - connectors/bigquery_quickstart.ipynb
ydataai/Blog
7994727deb1e1ca35e06c6fe4680482920b95855
[ "MIT" ]
2
2022-02-11T00:24:55.000Z
2022-03-31T12:04:44.000Z
2 - connectors/bigquery_quickstart.ipynb
ydataai/Blog
7994727deb1e1ca35e06c6fe4680482920b95855
[ "MIT" ]
1
2020-07-14T20:10:20.000Z
2020-07-14T20:10:20.000Z
25.886364
473
0.587006
[ [ [ "# Big Query Connector - Quick Start\nThe BigQuery connector enables you to read/write data within BigQuery with ease and integrate it with YData's platform. \nReading a dataset from BigQuery directly into a YData's `Dataset` allows its usage for Data Quality, Data Synthetisation and Preprocessing blocks.\n\n## Storage and Performance Notes\nBigQuery is not intended to hold large volumes of data as a pure data storage service. Its main advantages are based on the ability to execute SQL-like queries on existing tables which can efficiently aggregate data into new views. As such, for storage purposes we advise the use of Google Cloud Storage and provide the method `write_query_to_gcs`, available from the `BigQueryConnector`, that allows the user to export a given query to a Google Cloud Storage object.", "_____no_output_____" ] ], [ [ "from ydata.connectors import BigQueryConnector\nfrom ydata.utils.formats import read_json", "_____no_output_____" ], [ "# Load your credentials from a file\\n\",\ntoken = read_json('{insert-path-to-credentials}')", "_____no_output_____" ], [ "# Instantiate the Connector\nconnector = BigQueryConnector(project_id='{insert-project-id}', keyfile_dict=token)", "_____no_output_____" ], [ "# Load a dataset\ndata = connector.query(\n \"SELECT * FROM {insert-dataset}.{insert-table}\"\n)", "_____no_output_____" ], [ "# Load a sample of a dataset\nsmall_data = connector.query(\n \"SELECT * FROM {insert-dataset}.{insert-table}\"\n n_sample=10_000\n)", "_____no_output_____" ], [ "# Check the available datasets\nconnector.datasets", "_____no_output_____" ], [ "# Check the available tables for a given dataset\nconnector.list_tables('{insert-dataset}')", "_____no_output_____" ], [ "connector.table_schema(dataset='{insert-dataset}', table='{insert-table}')", "_____no_output_____" ] ], [ [ "## Advanced\nWith `BigQueryConnector`, you can access useful properties and methods directly from the main class.", "_____no_output_____" ] ], [ [ "# List the datasets of a given project\nconnector.datasets", "_____no_output_____" ], [ "# Access the BigQuery Client\nconnector.client", "_____no_output_____" ], [ "# Create a new dataset\nconnector.get_or_create_dataset(dataset='{insert-dataset}')", "_____no_output_____" ], [ "# Delete a dataset. WARNING: POTENTIAL LOSS OF DATA\n# connector.delete_table_if_exists(dataset='{insert-dataset}', table='{insert-table}')", "_____no_output_____" ], [ "# Delete a dataset. WARNING: POTENTIAL LOSS OF DATA \n# connector.delete_dataset_if_exists(dataset='{insert-dataset}')", "_____no_output_____" ] ], [ [ "### Example #1 - Execute Pandas transformations and store to BigQuery", "_____no_output_____" ] ], [ [ "# export data to pandas\n# small_df = small_data.to_pandas()\n#\n# DO TRANSFORMATIONS\n# (...)\n# \n# Write results to BigQuery table\n# connector.write_table_from_data(data=small_df, dataset='{insert-dataset}', table='{insert-table}')", "_____no_output_____" ] ], [ [ "### Example #2 - Write a BigQuery results to Google Cloud Storage", "_____no_output_____" ] ], [ [ "# Run a query in BigQuery and store it in Google Cloud Storage\n# connector.write_query_to_gcs(query=\"{insert-query}\",\n# path=\"gs://{insert-bucket}/{insert-filepath}\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a4a08793a4fe6af9ad4ef7e1e5843ac7e3141c1
57,286
ipynb
Jupyter Notebook
tutorials/alexnet_tutorial.ipynb
lalitjg/ManufacturingNet
5a734e90c77ac0757499353fa3e82bcffe12d1f0
[ "MIT" ]
9
2020-12-07T21:04:44.000Z
2021-11-23T14:43:28.000Z
tutorials/alexnet_tutorial.ipynb
BaratiLab/ManufacturingNet
5a734e90c77ac0757499353fa3e82bcffe12d1f0
[ "MIT" ]
null
null
null
tutorials/alexnet_tutorial.ipynb
BaratiLab/ManufacturingNet
5a734e90c77ac0757499353fa3e82bcffe12d1f0
[ "MIT" ]
2
2021-01-15T18:38:21.000Z
2021-09-18T01:50:23.000Z
164.614943
28,628
0.884579
[ [ [ "# Developing a Pretrained Alexnet model using ManufacturingNet\n###### To know more about the manufacturingnet please visit: http://manufacturingnet.io/ ", "_____no_output_____" ] ], [ [ "import ManufacturingNet\nimport numpy as np", "_____no_output_____" ] ], [ [ "First we import manufacturingnet. Using manufacturingnet we can create deep learning models with greater ease.\n\n\n\nIt is important to note that all the dependencies of the package must also be installed in your environment ", "_____no_output_____" ], [ "##### Now we first need to download the data. You can use our dataset class where we have curated different types of datasets and you just need to run two lines of code to download the data :)", "_____no_output_____" ] ], [ [ "from ManufacturingNet import datasets", "_____no_output_____" ], [ "datasets.CastingData()", "_____no_output_____" ] ], [ [ "##### Alright! Now please check your working directory. The data should be present inside it. That was super easy !!", "_____no_output_____" ], [ "The Casting dataset is an image dataset with 2 classes. The task that we need to perform using Pretrained Alexnet is classification. ManufacturingNet has also provided different datasets in the package which the user can choose depending on type of application", "_____no_output_____" ], [ "Pretrained models use Imagefolder dataset from pytorch and image size is (224,224,channels). The pretrained model needs the root folder path of train and test images(Imagefolder format). Manufacturing pretrained models have image resizing feature. ", "_____no_output_____" ] ], [ [ "#paths of root folder\ntrain_data_address='casting_data/train/'\nval_data_address='casting_data/test/'", "_____no_output_____" ] ], [ [ "#### Now all we got to do is import the pretrained model class and answer a few simple questions and we will be all set. The manufacturingnet has been designed in a way to make things easy for user and provide them the tools to implement complex used", "_____no_output_____" ] ], [ [ "from ManufacturingNet.models import AlexNet", "_____no_output_____" ], [ "# from ManufacturingNet.models import ResNet\n# from ManufacturingNet.models import DenseNet\n# from ManufacturingNet.models import MobileNet\n# from ManufacturingNet.models import GoogleNet\n# from ManufacturingNet.models import VGG", "_____no_output_____" ] ], [ [ "###### We import the pretrained Alexnet model (AlexNet) from package and answer a few simple questions", "_____no_output_____" ] ], [ [ "model=AlexNet(train_data_address,val_data_address)", "Do you want default values for all the training parameters (y/n)? n\n \nNumber of classes: 2\nTotal number of training images: 6633\nTotal number of validation images: 715\n=========================\n1/8 - Image size\nAll the images must have same size.\nPlease enter the dimensions to which images need to be resized (heigth, width, channels): \nFor example - 228, 228, 1 (For gray scale conversion)\n If all images have same size, enter the actual image size (heigth, width, channels) :\n 224,224,1\n=========================\nQuestion [2/7]: Model Selection:\n\nDo you want the pretrained model (y/n)? 
y\n=========================\n=========================\n3/7 - Batch size input\nPlease enter the batch size: 32\n=========================\n4/7- Loss function\nLoss function: CrossEntropy()\n=========================\n5/7 - Optimizer\nPlease enter the optimizer index for the problem \n Optimizer_list - [1: Adam, 2: SGD] \n For default optimizer, please directly press enter without any input: 1\n \nPlease enter a required value float input for learning rate (learning rate > 0) \n For default learning rate, please directly press enter without any input: \nDefault value for learning rate selected : 0.001\n \n=========================\n6/7 - Scheduler\nPlease enter the scheduler index for the problem: Scheduler_list - [1: None, 2:StepLR, 3:MultiStepLR] \n For default option of no scheduler, please directly press enter without any input: \nBy default no scheduler selected\n \n=========================\n7/7 - Number of epochs\nPlease enter the number of epochs to train the model: 3\n=========================\nNeural network architecture: \n \nAlexNet(\n (features): Sequential(\n (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (1): ReLU(inplace=True)\n (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n (4): ReLU(inplace=True)\n (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (7): ReLU(inplace=True)\n (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (9): ReLU(inplace=True)\n (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (11): ReLU(inplace=True)\n (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n )\n (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))\n (classifier): Sequential(\n (0): Dropout(p=0.5, inplace=False)\n (1): Linear(in_features=9216, out_features=4096, bias=True)\n (2): ReLU(inplace=True)\n (3): Dropout(p=0.5, inplace=False)\n (4): Linear(in_features=4096, out_features=4096, bias=True)\n (5): ReLU(inplace=True)\n (6): Linear(in_features=4096, out_features=2, bias=True)\n )\n)\n=========================\nModel Summary:\n \nCriterion: CrossEntropyLoss()\nOptimizer: Adam (\nParameter Group 0\n amsgrad: False\n betas: (0.9, 0.999)\n eps: 1e-08\n lr: 0.001\n weight_decay: 0\n)\nScheduler: None\nBatch size: 32\nInitial learning rate: 0.001\nNumber of training epochs: 3\nDevice: cuda\n \n=========================\nTraining the model...\nEpoch_Number: 0\nTraining Loss: 0.692960987014512\nTraining Accuracy: 56.03799185888738 %\nValidation Loss: 0.6741172287323096\nValidation Accuracy: 63.35664335664336 %\nEpoch Time: 1921.2051339149475 s\n##################################################\nEpoch_Number: 1\nTraining Loss: 0.6851664346403937\nTraining Accuracy: 56.34705261570934 %\nValidation Loss: 0.6856002469311209\nValidation Accuracy: 63.35664335664336 %\nEpoch Time: 1918.337572813034 s\n##################################################\nEpoch_Number: 2\nTraining Loss: 0.6857230691407351\nTraining Accuracy: 56.450072867983316 %\nValidation Loss: 0.6732170486211453\nValidation Accuracy: 63.35664335664336 %\nEpoch Time: 1916.1428580284119 s\n##################################################\nConfusion Matix: \n[[453 262]\n [ 0 0]]\nDo you want to save the model weights? 
(y/n): n\n=========================\n Call get_prediction() to make predictions on new data\n \n=== End of training ===\n" ], [ "# model=ResNet(train_data_address,val_data_address)\n# model=DenseNet(train_data_address,val_data_address)\n# model=MobileNet(train_data_address,val_data_address)\n# model=GoogleNet(train_data_address,val_data_address)\n# model=VGG(train_data_address,val_data_address)", "_____no_output_____" ] ], [ [ "Alright! It's done: you have built your pretrained AlexNet using the ManufacturingNet package, just by answering a few simple questions. It is really easy.\n\nThe Casting dataset contains more than 7,000 images, including training and testing. The results produced above are just for introducing ManufacturingNet; hence, only 3 epochs were performed. Better results can be obtained by running more epochs.\n\nA few pointers about developing the pretrained models: these models require an input image size of (224, 224, channels). The number of classes for classification can be varied and is handled by the package. The user can also use only the architecture, without the pretrained weights.\n\nThe loss function, optimizer, number of epochs, and scheduler should be chosen by the user. The model summary, training accuracy, validation accuracy, confusion matrix, and loss-vs-epoch curve are also provided by the package.\n\nManufacturingNet provides many pretrained models with similar scripts. ManufacturingNet offers ResNet (different variants), AlexNet, GoogleNet, VGG (different variants), and DenseNet (different variants).\n\nUsers can follow a similar tutorial for pretrained ResNet (different variants).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a4a1d4165604c4db506fc5e8954db5d4fe72c9d
29,381
ipynb
Jupyter Notebook
d_regression/d_logisticRegression.ipynb
NicoleLund/machine-learning-exoplanets
e12084db21063433890501999a866336850d39b3
[ "MIT" ]
null
null
null
d_regression/d_logisticRegression.ipynb
NicoleLund/machine-learning-exoplanets
e12084db21063433890501999a866336850d39b3
[ "MIT" ]
null
null
null
d_regression/d_logisticRegression.ipynb
NicoleLund/machine-learning-exoplanets
e12084db21063433890501999a866336850d39b3
[ "MIT" ]
null
null
null
35.144737
300
0.469147
[ [ [ "# d_logisticRegression\r\n----\r\n\r\nWritten in the Python 3.7.9 Environment with the following package versions\r\n\r\n * joblib 1.0.1\r\n * numpy 1.19.5\r\n * pandas 1.3.1\r\n * scikit-learn 0.24.2\r\n * tensorflow 2.5.0\r\n\r\nBy Nicole Lund \r\n\r\nThis Jupyter Notebook tunes a Logistic Regression model for Exoplanet classification from Kepler Exoplanet study data.\r\n\r\nColumn descriptions can be found at https://exoplanetarchive.ipac.caltech.edu/docs/API_kepcandidate_columns.html \r\n\r\n**Source Data**\r\n\r\nThe source data used was provided by University of Arizona's Data Analytics homework assignment. Their data was derived from https://www.kaggle.com/nasa/kepler-exoplanet-search-results?select=cumulative.csv\r\n\r\nThe full data set was released by NASA at\r\nhttps://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=koi", "_____no_output_____" ] ], [ [ "# Import Dependencies\r\n\r\n# Plotting\r\n%matplotlib inline\r\nimport matplotlib.pyplot as plt\r\n\r\n# Data manipulation\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom statistics import mean\r\nfrom operator import itemgetter\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.preprocessing import LabelEncoder, MinMaxScaler\r\nfrom tensorflow.keras.utils import to_categorical\r\n\r\n# Parameter Selection\r\nfrom sklearn import tree\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom sklearn.model_selection import GridSearchCV\r\n\r\n# Model Development\r\nfrom sklearn.linear_model import LinearRegression\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.svm import SVC \r\nfrom tensorflow import keras\r\nfrom tensorflow.keras.models import Sequential\r\nfrom tensorflow.keras.layers import Dense\r\nfrom tensorflow.keras.layers import Dropout\r\nfrom tensorflow.keras.wrappers.scikit_learn import KerasClassifier\r\n\r\n# Model Metrics\r\nfrom sklearn.metrics import classification_report\r\n\r\n# Save/load files\r\nfrom tensorflow.keras.models import load_model\r\nimport joblib\r\n\r\n# # Ignore deprecation warnings\r\n# import warnings\r\n# warnings.simplefilter('ignore', FutureWarning)", "_____no_output_____" ], [ "# Set the seed value for the notebook, so the results are reproducible\r\nfrom numpy.random import seed\r\nseed(1)", "_____no_output_____" ] ], [ [ "# Read the CSV and Perform Basic Data Cleaning", "_____no_output_____" ] ], [ [ "# Import data\r\ndf = pd.read_csv(\"../b_source_data/exoplanet_data.csv\")\r\n# print(df.info())\r\n\r\n# Drop columns where all values are null\r\ndf = df.dropna(axis='columns', how='all')\r\n\r\n# Drop rows containing null values\r\ndf = df.dropna()\r\n\r\n# Display data info\r\nprint(df.info())\r\nprint(df.head())\r\nprint(df.koi_disposition.unique())", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 6991 entries, 0 to 6990\nData columns (total 41 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 koi_disposition 6991 non-null object \n 1 koi_fpflag_nt 6991 non-null int64 \n 2 koi_fpflag_ss 6991 non-null int64 \n 3 koi_fpflag_co 6991 non-null int64 \n 4 koi_fpflag_ec 6991 non-null int64 \n 5 koi_period 6991 non-null float64\n 6 koi_period_err1 6991 non-null float64\n 7 koi_period_err2 6991 non-null float64\n 8 koi_time0bk 6991 non-null float64\n 9 koi_time0bk_err1 6991 non-null float64\n 10 koi_time0bk_err2 6991 non-null float64\n 11 koi_impact 6991 non-null float64\n 12 koi_impact_err1 6991 non-null float64\n 13 koi_impact_err2 6991 non-null float64\n 14 
koi_duration 6991 non-null float64\n 15 koi_duration_err1 6991 non-null float64\n 16 koi_duration_err2 6991 non-null float64\n 17 koi_depth 6991 non-null float64\n 18 koi_depth_err1 6991 non-null float64\n 19 koi_depth_err2 6991 non-null float64\n 20 koi_prad 6991 non-null float64\n 21 koi_prad_err1 6991 non-null float64\n 22 koi_prad_err2 6991 non-null float64\n 23 koi_teq 6991 non-null int64 \n 24 koi_insol 6991 non-null float64\n 25 koi_insol_err1 6991 non-null float64\n 26 koi_insol_err2 6991 non-null float64\n 27 koi_model_snr 6991 non-null float64\n 28 koi_tce_plnt_num 6991 non-null int64 \n 29 koi_steff 6991 non-null int64 \n 30 koi_steff_err1 6991 non-null int64 \n 31 koi_steff_err2 6991 non-null int64 \n 32 koi_slogg 6991 non-null float64\n 33 koi_slogg_err1 6991 non-null float64\n 34 koi_slogg_err2 6991 non-null float64\n 35 koi_srad 6991 non-null float64\n 36 koi_srad_err1 6991 non-null float64\n 37 koi_srad_err2 6991 non-null float64\n 38 ra 6991 non-null float64\n 39 dec 6991 non-null float64\n 40 koi_kepmag 6991 non-null float64\ndtypes: float64(31), int64(9), object(1)\nmemory usage: 2.2+ MB\nNone\n koi_disposition koi_fpflag_nt koi_fpflag_ss koi_fpflag_co koi_fpflag_ec \\\n0 CONFIRMED 0 0 0 0 \n1 FALSE POSITIVE 0 1 0 0 \n2 FALSE POSITIVE 0 1 0 0 \n3 CONFIRMED 0 0 0 0 \n4 CONFIRMED 0 0 0 0 \n\n koi_period koi_period_err1 koi_period_err2 koi_time0bk \\\n0 54.418383 2.479000e-04 -2.479000e-04 162.513840 \n1 19.899140 1.490000e-05 -1.490000e-05 175.850252 \n2 1.736952 2.630000e-07 -2.630000e-07 170.307565 \n3 2.525592 3.760000e-06 -3.760000e-06 171.595550 \n4 4.134435 1.050000e-05 -1.050000e-05 172.979370 \n\n koi_time0bk_err1 ... koi_steff_err2 koi_slogg koi_slogg_err1 \\\n0 0.003520 ... -81 4.467 0.064 \n1 0.000581 ... -176 4.544 0.044 \n2 0.000115 ... -174 4.564 0.053 \n3 0.001130 ... -211 4.438 0.070 \n4 0.001900 ... 
-232 4.486 0.054 \n\n koi_slogg_err2 koi_srad koi_srad_err1 koi_srad_err2 ra \\\n0 -0.096 0.927 0.105 -0.061 291.93423 \n1 -0.176 0.868 0.233 -0.078 297.00482 \n2 -0.168 0.791 0.201 -0.067 285.53461 \n3 -0.210 1.046 0.334 -0.133 288.75488 \n4 -0.229 0.972 0.315 -0.105 296.28613 \n\n dec koi_kepmag \n0 48.141651 15.347 \n1 48.134129 15.436 \n2 48.285210 15.597 \n3 48.226200 15.509 \n4 48.224670 15.714 \n\n[5 rows x 41 columns]\n['CONFIRMED' 'FALSE POSITIVE' 'CANDIDATE']\n" ], [ "# Rename \"FALSE POSITIVE\" disposition values\r\ndf.koi_disposition = df.koi_disposition.str.replace(' ','_')\r\nprint(df.koi_disposition.unique())", "['CONFIRMED' 'FALSE_POSITIVE' 'CANDIDATE']\n" ] ], [ [ "# Select features", "_____no_output_____" ] ], [ [ "# Split dataframe into X and y\r\n\r\n# Select features to analyze in X\r\nselect_option = 1\r\n\r\nif select_option == 1:\r\n # Option 1: Choose all features\r\n X = df.drop(\"koi_disposition\", axis=1)\r\n\r\nelif select_option == 2:\r\n # Option 2: Choose all features that are not associated with error measurements\r\n X = df[['koi_fpflag_nt', 'koi_fpflag_ss', 'koi_fpflag_co', 'koi_fpflag_ec', 'koi_period', 'koi_time0bk', 'koi_impact', 'koi_duration','koi_depth', 'koi_prad', 'koi_teq', 'koi_insol', 'koi_model_snr', 'koi_tce_plnt_num', 'koi_steff', 'koi_slogg', 'koi_srad', 'ra', 'dec', 'koi_kepmag']]\r\n\r\nelif select_option == 3:\r\n # Option 3: Choose features from Decision Tree and Random Forest assessment.\r\n tree_features = ['koi_fpflag_nt', 'koi_fpflag_co', 'koi_fpflag_ss', 'koi_model_snr']\r\n forest_features = ['koi_fpflag_co', 'koi_fpflag_nt', 'koi_fpflag_ss', 'koi_model_snr', 'koi_prad']\r\n X = df[set(tree_features + forest_features)]\r\n\r\n# Define y\r\ny = df[\"koi_disposition\"]\r\n\r\nprint(X.shape, y.shape)", "(6991, 40) (6991,)\n" ] ], [ [ "# Create a Train Test Split\n\nUse `koi_disposition` for the y values", "_____no_output_____" ] ], [ [ "# Split X and y into training and testing groups\r\nX_train, X_test, y_train, y_test = train_test_split(\r\n X, y, test_size=0.3, random_state=42)", "_____no_output_____" ], [ "# Display training data\r\nX_train.head()", "_____no_output_____" ] ], [ [ "# Pre-processing", "_____no_output_____" ] ], [ [ "# Scale the data with MinMaxScaler\r\nX_scaler = MinMaxScaler().fit(X_train)\r\nX_train_scaled = X_scaler.transform(X_train)\r\nX_test_scaled = X_scaler.transform(X_test)", "_____no_output_____" ], [ "# One-Hot-Encode the y data\r\n\r\n# Step 1: Label-encode data set\r\nlabel_encoder = LabelEncoder()\r\nlabel_encoder.fit(y_train)\r\nencoded_y_train = label_encoder.transform(y_train)\r\nencoded_y_test = label_encoder.transform(y_test)\r\n\r\n# Step 2: Convert encoded labels to one-hot-encoding\r\ny_train_categorical = to_categorical(encoded_y_train)\r\ny_test_categorical = to_categorical(encoded_y_test)", "_____no_output_____" ], [ "print('Unique KOI Disposition Values')\r\nprint(y.unique())\r\nprint('-----------')\r\nprint('Sample KOI Disposition Values and Encoding')\r\nprint(y_test[:5])\r\nprint(y_test_categorical[:5])", "Unique KOI Disposition Values\n['CONFIRMED' 'FALSE_POSITIVE' 'CANDIDATE']\n-----------\nSample KOI Disposition Values and Encoding\n4982 FALSE_POSITIVE\n4866 CANDIDATE\n2934 FALSE_POSITIVE\n5007 FALSE_POSITIVE\n3869 FALSE_POSITIVE\nName: koi_disposition, dtype: object\n[[0. 0. 1.]\n [1. 0. 0.]\n [0. 0. 1.]\n [0. 0. 1.]\n [0. 0. 
1.]]\n" ] ], [ [ "# Create and Train the Model - LogisticRegression\r\n\r\n", "_____no_output_____" ] ], [ [ "# Create model newton-cg\r\nmodel = LogisticRegression(solver='newton-cg', max_iter=1000)\r\n# model = LogisticRegression(solver='sag', max_iter=1000)\r\n\r\n# Train model\r\nmodel.fit(X_train_scaled, y_train)", "_____no_output_____" ], [ "print(f\"Training Data Score: {model.score(X_train_scaled, y_train)}\")\r\nprint(f\"Testing Data Score: {model.score(X_test_scaled, y_test)}\")", "Training Data Score: 0.8600040874718986\nTesting Data Score: 0.847950428979981\n" ] ], [ [ "# Hyperparameter Tuning\r\n\r\nUse `GridSearchCV` to tune the model's parameters", "_____no_output_____" ] ], [ [ "# Create the GridSearchCV model\r\nparam_grid = {'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']}\r\ngrid = GridSearchCV(model, param_grid, verbose=3)", "_____no_output_____" ], [ "# Fit the model using the grid search estimator. \r\ngrid.fit(X_train_scaled, y_train)", "Fitting 5 folds for each of 5 candidates, totalling 25 fits\n[CV 1/5] END ..................solver=newton-cg;, score=0.874 total time= 0.1s\n[CV 2/5] END ..................solver=newton-cg;, score=0.838 total time= 0.0s\n[CV 3/5] END ..................solver=newton-cg;, score=0.868 total time= 0.0s\n[CV 4/5] END ..................solver=newton-cg;, score=0.848 total time= 0.1s\n[CV 5/5] END ..................solver=newton-cg;, score=0.849 total time= 0.1s\n[CV 1/5] END ......................solver=lbfgs;, score=0.874 total time= 0.3s\n[CV 2/5] END ......................solver=lbfgs;, score=0.838 total time= 0.2s\n[CV 3/5] END ......................solver=lbfgs;, score=0.868 total time= 0.3s\n[CV 4/5] END ......................solver=lbfgs;, score=0.848 total time= 0.3s\n[CV 5/5] END ......................solver=lbfgs;, score=0.849 total time= 0.3s\n[CV 1/5] END ..................solver=liblinear;, score=0.869 total time= 0.0s\n[CV 2/5] END ..................solver=liblinear;, score=0.836 total time= 0.0s\n[CV 3/5] END ..................solver=liblinear;, score=0.860 total time= 0.0s\n[CV 4/5] END ..................solver=liblinear;, score=0.827 total time= 0.0s\n[CV 5/5] END ..................solver=liblinear;, score=0.832 total time= 0.0s\n[CV 1/5] END ........................solver=sag;, score=0.874 total time= 0.2s\n[CV 2/5] END ........................solver=sag;, score=0.838 total time= 0.2s\n[CV 3/5] END ........................solver=sag;, score=0.868 total time= 0.2s\n[CV 4/5] END ........................solver=sag;, score=0.848 total time= 0.2s\n[CV 5/5] END ........................solver=sag;, score=0.849 total time= 0.2s\n[CV 1/5] END .......................solver=saga;, score=0.874 total time= 0.7s\n[CV 2/5] END .......................solver=saga;, score=0.838 total time= 0.7s\n[CV 3/5] END .......................solver=saga;, score=0.868 total time= 0.7s\n[CV 4/5] END .......................solver=saga;, score=0.848 total time= 0.7s\n[CV 5/5] END .......................solver=saga;, score=0.849 total time= 0.7s\n" ], [ "print(grid.best_params_)\r\nprint(grid.best_score_)", "{'solver': 'newton-cg'}\n0.8553005758975291\n" ] ], [ [ "# Option 1: Model Results when using all features\r\n* solver: 'newton-cg'\r\n* score: 0.8553005758975291\r\n* Training Data Score: 0.8600040874718986\r\n* Testing Data Score: 0.847950428979981\r\n\r\n# Option 2: Model Results when using all features not associated with error measurements\r\n* solver: 'sag'\r\n* score: 0.8148348446204654\r\n* Training Data Score: 
0.8150391347123959\r\n* Testing Data Score: 0.8021925643469972\r\n\r\n# Option 3: Model Results when using selected features from Decision Tree and Random Forest Classifiers\r\n* solver: 'sag'\r\n* score: 0.762941192444191\r\n* Training Data Score: 0.7392192928673615\r\n* Testing Data Score: 0.7397521448999047", "_____no_output_____" ], [ "# Save the Model\r\n\r\nOption 1 was chosen as the model to save because it yielded the best score of all 3 input options.", "_____no_output_____" ] ], [ [ "# Save the model\r\njoblib.dump(model, './d_logisticRegression_model.sav')\r\njoblib.dump(grid, './d_logisticRegression_grid.sav')", "_____no_output_____" ] ], [ [ "# Model Discussion\r\n\r\nThe option 1 model score using the logistic regression method is reasonable for predicting exoplanet observations. However, the SVM and Neural Network models perform better.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
4a4a3091793b2fb0250bae16b9e4d184a675c717
38,084
ipynb
Jupyter Notebook
Chapter04/4.2 Estimating Value of Pi using Monte Carlo.ipynb
thelastsinger/RL
27405a68b35d8d09192a83d300f890b9025b0a41
[ "MIT" ]
635
2018-06-29T15:13:04.000Z
2022-03-26T05:41:52.000Z
Chapter04/.ipynb_checkpoints/4.2 Estimating Value of Pi using Monte Carlo-checkpoint.ipynb
alialahyari/Hands-On-Reinforcement-Learning-with-Python
a324041a991794013fc89f4623da2160de73edaf
[ "MIT" ]
9
2019-04-12T00:52:11.000Z
2019-04-12T03:06:33.000Z
Chapter04/.ipynb_checkpoints/4.2 Estimating Value of Pi using Monte Carlo-checkpoint.ipynb
alialahyari/Hands-On-Reinforcement-Learning-with-Python
a324041a991794013fc89f4623da2160de73edaf
[ "MIT" ]
309
2018-07-10T05:53:02.000Z
2022-03-21T18:09:02.000Z
198.354167
33,760
0.911984
[ [ [ "# Estimating Value of Pi using Monte Carlo", "_____no_output_____" ], [ " First, we import necessary libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport math\nimport random\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "\nNow we initialize square size and no of points inside the circle and no of points inside the square and we also\ninitialize sample size which denotes the no of random points to be generated. We define arc\nwhich is basically the circle quadrant. ", "_____no_output_____" ] ], [ [ "square_size = 1\npoints_inside_circle = 0\npoints_inside_square = 0\nsample_size = 1000\narc = np.linspace(0, np.pi/2, 100)", "_____no_output_____" ] ], [ [ "Then we define a function called generate_points which generates random points\ninside the square.", "_____no_output_____" ] ], [ [ "def generate_points(size):\n x = random.random()*size\n y = random.random()*size\n return (x, y)", "_____no_output_____" ] ], [ [ "Followed by we define a function called is_in_circle which will check if the point we\ngenerated falls within the circle. ", "_____no_output_____" ] ], [ [ "def is_in_circle(point, size):\n return math.sqrt(point[0]**2 + point[1]**2) <= size", "_____no_output_____" ] ], [ [ "Then we define a function for calculating pi value as,", "_____no_output_____" ] ], [ [ "def compute_pi(points_inside_circle, points_inside_square):\n return 4 * (points_inside_circle / points_inside_square) ", "_____no_output_____" ] ], [ [ "\nAccording to the sample size, we generate some random points inside the square and\nincrement our points_inside_square variable and then we will check if the points we\ngenerated lies inside the circle, if yes then we\nincrement points_inside_circle variable.", "_____no_output_____" ] ], [ [ "plt.axes().set_aspect('equal')\nplt.plot(1*np.cos(arc), 1*np.sin(arc))\n\n\nfor i in range(sample_size):\n point = generate_points(square_size)\n plt.plot(point[0], point[1], 'c.')\n points_inside_square += 1\n \n if is_in_circle(point, square_size):\n points_inside_circle += 1\n\nprint(\"Approximate value of pi is {}\"\n.format(compute_pi(points_inside_circle, points_inside_square)))", "Approximate value of pi is 3.196\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a4a3185f2d056fc92abf8ff42303a4d644deddf
231,026
ipynb
Jupyter Notebook
notebooks/dataset-projections/moons/vae-moons-embedding.ipynb
timsainb/ParametricUMAP_paper
00b4d676647e45619552aec8f2663c0903a83e3f
[ "MIT" ]
124
2020-09-27T23:59:01.000Z
2022-03-22T06:27:35.000Z
notebooks/dataset-projections/moons/vae-moons-embedding.ipynb
kiminh/ParametricUMAP_paper
00b4d676647e45619552aec8f2663c0903a83e3f
[ "MIT" ]
2
2021-02-05T18:13:13.000Z
2021-11-01T14:55:08.000Z
notebooks/dataset-projections/moons/vae-moons-embedding.ipynb
kiminh/ParametricUMAP_paper
00b4d676647e45619552aec8f2663c0903a83e3f
[ "MIT" ]
16
2020-09-28T07:43:21.000Z
2022-03-21T00:31:34.000Z
134.239396
46,212
0.766702
[ [ [ "# reload packages\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "### Choose GPU (this may not be needed on your computer)", "_____no_output_____" ] ], [ [ "%env CUDA_DEVICE_ORDER=PCI_BUS_ID\n%env CUDA_VISIBLE_DEVICES=''", "env: CUDA_DEVICE_ORDER=PCI_BUS_ID\nenv: CUDA_VISIBLE_DEVICES=''\n" ] ], [ [ "### load packages", "_____no_output_____" ] ], [ [ "from tfumap.umap import tfUMAP", "/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n \" (e.g. in jupyter console)\", TqdmExperimentalWarning)\n" ], [ "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom tqdm.autonotebook import tqdm\nimport umap\nimport pandas as pd", "_____no_output_____" ] ], [ [ "### Load dataset", "_____no_output_____" ] ], [ [ "from sklearn.datasets import make_moons\n\nX_train, Y_train = make_moons(1000, random_state=0, noise=0.1)\n\nX_test, Y_test = make_moons(1000, random_state=1, noise=0.1)\n\nX_valid, Y_valid = make_moons(1000, random_state=2, noise=0.1)\n\ndef norm(x):\n return (x - np.min(x)) / (np.max(x) - np.min(x))\n\nX_train = norm(X_train)\nX_valid = norm(X_valid)\nX_test = norm(X_test)\nX_train_flat = X_train\n\nX_test_flat = X_test\n\n\nplt.scatter(X_test[:,0], X_test[:,1], c=Y_test)", "_____no_output_____" ] ], [ [ "### Create model and train", "_____no_output_____" ], [ "### define networks", "_____no_output_____" ] ], [ [ "dims = (2)\nn_components = 2", "_____no_output_____" ], [ "from tfumap.vae import VAE, Sampling", "_____no_output_____" ], [ "encoder_inputs = tf.keras.Input(shape=dims)\nx = tf.keras.layers.Flatten()(encoder_inputs)\nx = tf.keras.layers.Dense(units=100, activation=\"relu\")(x)\nx = tf.keras.layers.Dense(units=100, activation=\"relu\")(x)\nx = tf.keras.layers.Dense(units=100, activation=\"relu\")(x)\nz_mean = tf.keras.layers.Dense(n_components, name=\"z_mean\")(x)\nz_log_var = tf.keras.layers.Dense(n_components, name=\"z_log_var\")(x)\nz = Sampling()([z_mean, z_log_var])\nencoder = tf.keras.Model(encoder_inputs, [z_mean, z_log_var, z], name=\"encoder\")\nencoder.summary()", "Model: \"encoder\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_3 (InputLayer) [(None, 2)] 0 \n__________________________________________________________________________________________________\nflatten_1 (Flatten) (None, 2) 0 input_3[0][0] \n__________________________________________________________________________________________________\ndense_7 (Dense) (None, 100) 300 flatten_1[0][0] \n__________________________________________________________________________________________________\ndense_8 (Dense) (None, 100) 10100 dense_7[0][0] \n__________________________________________________________________________________________________\ndense_9 (Dense) (None, 100) 10100 dense_8[0][0] \n__________________________________________________________________________________________________\nz_mean (Dense) (None, 2) 202 dense_9[0][0] \n__________________________________________________________________________________________________\nz_log_var (Dense) (None, 2) 202 dense_9[0][0] 
\n__________________________________________________________________________________________________\nsampling_1 (Sampling) (None, 2) 0 z_mean[0][0] \n z_log_var[0][0] \n==================================================================================================\nTotal params: 20,904\nTrainable params: 20,904\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ], [ "latent_inputs = tf.keras.Input(shape=(n_components,))\nx = tf.keras.layers.Dense(units=100, activation=\"relu\")(latent_inputs)\nx = tf.keras.layers.Dense(units=100, activation=\"relu\")(x)\nx = tf.keras.layers.Dense(units=100, activation=\"relu\")(x)\ndecoder_outputs = tf.keras.layers.Dense(units=2, activation=\"sigmoid\")(x)\n\ndecoder = tf.keras.Model(latent_inputs, decoder_outputs, name=\"decoder\")\ndecoder.summary()\n", "Model: \"decoder\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_4 (InputLayer) [(None, 2)] 0 \n_________________________________________________________________\ndense_10 (Dense) (None, 100) 300 \n_________________________________________________________________\ndense_11 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_12 (Dense) (None, 100) 10100 \n_________________________________________________________________\ndense_13 (Dense) (None, 2) 202 \n=================================================================\nTotal params: 20,702\nTrainable params: 20,702\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "### Create model and train", "_____no_output_____" ] ], [ [ "X_train.shape", "_____no_output_____" ], [ "vae = VAE(encoder, decoder)\nvae.compile(optimizer=tf.keras.optimizers.Adam())", "_____no_output_____" ], [ "vae.fit(X_train, epochs=500, batch_size=128)", "Epoch 1/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.1936 - reconstruction_loss: 459.8925 - kl_loss: 3.3012\nEpoch 2/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.6160 - reconstruction_loss: 460.3813 - kl_loss: 3.2347\nEpoch 3/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.4168 - reconstruction_loss: 461.1894 - kl_loss: 3.2274\nEpoch 4/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.8926 - reconstruction_loss: 460.6926 - kl_loss: 3.2000\nEpoch 5/500\n8/8 [==============================] - 0s 2ms/step - loss: 462.9273 - reconstruction_loss: 459.7479 - kl_loss: 3.1794\nEpoch 6/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.4334 - reconstruction_loss: 460.2505 - kl_loss: 3.1829\nEpoch 7/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.1268 - reconstruction_loss: 459.9898 - kl_loss: 3.1370\nEpoch 8/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.8119 - reconstruction_loss: 460.7013 - kl_loss: 3.1107\nEpoch 9/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.8814 - reconstruction_loss: 460.7907 - kl_loss: 3.0907\nEpoch 10/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.5622 - reconstruction_loss: 460.4869 - kl_loss: 3.0753\nEpoch 11/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.8726 - reconstruction_loss: 460.8403 - kl_loss: 3.0323\nEpoch 12/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.7785 - 
reconstruction_loss: 461.7397 - kl_loss: 3.0389\nEpoch 13/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.6907 - reconstruction_loss: 460.6614 - kl_loss: 3.0293\nEpoch 14/500\n8/8 [==============================] - 0s 3ms/step - loss: 462.4967 - reconstruction_loss: 459.4764 - kl_loss: 3.0203\nEpoch 15/500\n8/8 [==============================] - 0s 3ms/step - loss: 461.9814 - reconstruction_loss: 458.9635 - kl_loss: 3.0178\nEpoch 16/500\n8/8 [==============================] - 0s 2ms/step - loss: 464.6444 - reconstruction_loss: 461.6560 - kl_loss: 2.9884\nEpoch 17/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.8637 - reconstruction_loss: 460.8776 - kl_loss: 2.9861\nEpoch 18/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.9150 - reconstruction_loss: 460.9389 - kl_loss: 2.9762\nEpoch 19/500\n8/8 [==============================] - 0s 2ms/step - loss: 462.5800 - reconstruction_loss: 459.6362 - kl_loss: 2.9438\nEpoch 20/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.3539 - reconstruction_loss: 460.4290 - kl_loss: 2.9249\nEpoch 21/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.7428 - reconstruction_loss: 461.8307 - kl_loss: 2.9121\nEpoch 22/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.0385 - reconstruction_loss: 461.1098 - kl_loss: 2.9287\nEpoch 23/500\n8/8 [==============================] - 0s 3ms/step - loss: 461.1324 - reconstruction_loss: 458.1972 - kl_loss: 2.9352\nEpoch 24/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.2111 - reconstruction_loss: 460.2965 - kl_loss: 2.9146\nEpoch 25/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.9311 - reconstruction_loss: 461.0225 - kl_loss: 2.9086\nEpoch 26/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.1517 - reconstruction_loss: 460.2548 - kl_loss: 2.8969\nEpoch 27/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.9196 - reconstruction_loss: 462.0494 - kl_loss: 2.8703\nEpoch 28/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.2080 - reconstruction_loss: 461.3461 - kl_loss: 2.8619\nEpoch 29/500\n8/8 [==============================] - 0s 2ms/step - loss: 462.8559 - reconstruction_loss: 459.9877 - kl_loss: 2.8682\nEpoch 30/500\n8/8 [==============================] - 0s 3ms/step - loss: 462.6345 - reconstruction_loss: 459.7727 - kl_loss: 2.8619\nEpoch 31/500\n8/8 [==============================] - 0s 2ms/step - loss: 464.3834 - reconstruction_loss: 461.5300 - kl_loss: 2.8534\nEpoch 32/500\n8/8 [==============================] - 0s 3ms/step - loss: 461.7911 - reconstruction_loss: 458.9271 - kl_loss: 2.8640\nEpoch 33/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.8448 - reconstruction_loss: 460.9984 - kl_loss: 2.8464\nEpoch 34/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.6073 - reconstruction_loss: 460.7587 - kl_loss: 2.8485\nEpoch 35/500\n8/8 [==============================] - 0s 2ms/step - loss: 464.4299 - reconstruction_loss: 461.5880 - kl_loss: 2.8419\nEpoch 36/500\n8/8 [==============================] - 0s 2ms/step - loss: 462.8540 - reconstruction_loss: 460.0024 - kl_loss: 2.8516\nEpoch 37/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.5467 - reconstruction_loss: 460.7095 - kl_loss: 2.8372\nEpoch 38/500\n8/8 [==============================] - 0s 2ms/step - loss: 462.8613 - reconstruction_loss: 460.0344 - kl_loss: 2.8268\nEpoch 39/500\n8/8 
[==============================] - 0s 2ms/step - loss: 462.9440 - reconstruction_loss: 460.1001 - kl_loss: 2.8439\nEpoch 40/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.4650 - reconstruction_loss: 460.6143 - kl_loss: 2.8507\nEpoch 41/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.2296 - reconstruction_loss: 460.3850 - kl_loss: 2.8446\nEpoch 42/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.4805 - reconstruction_loss: 460.6542 - kl_loss: 2.8263\nEpoch 43/500\n8/8 [==============================] - 0s 2ms/step - loss: 462.4790 - reconstruction_loss: 459.6594 - kl_loss: 2.8196\nEpoch 44/500\n8/8 [==============================] - 0s 3ms/step - loss: 462.3991 - reconstruction_loss: 459.5799 - kl_loss: 2.8192\nEpoch 45/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.0999 - reconstruction_loss: 461.2827 - kl_loss: 2.8172\nEpoch 46/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.3764 - reconstruction_loss: 460.5754 - kl_loss: 2.8010\nEpoch 47/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.9018 - reconstruction_loss: 461.1078 - kl_loss: 2.7940\nEpoch 48/500\n8/8 [==============================] - 0s 3ms/step - loss: 462.2311 - reconstruction_loss: 459.4523 - kl_loss: 2.7788\nEpoch 49/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.3189 - reconstruction_loss: 460.5464 - kl_loss: 2.7725\nEpoch 50/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.9374 - reconstruction_loss: 461.1484 - kl_loss: 2.7890\nEpoch 51/500\n8/8 [==============================] - 0s 3ms/step - loss: 461.9922 - reconstruction_loss: 459.2186 - kl_loss: 2.7736\nEpoch 52/500\n8/8 [==============================] - 0s 3ms/step - loss: 462.2472 - reconstruction_loss: 459.4499 - kl_loss: 2.7974\nEpoch 53/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.3834 - reconstruction_loss: 460.6067 - kl_loss: 2.7766\nEpoch 54/500\n8/8 [==============================] - 0s 2ms/step - loss: 462.8486 - reconstruction_loss: 460.0693 - kl_loss: 2.7793\nEpoch 55/500\n8/8 [==============================] - 0s 2ms/step - loss: 464.4867 - reconstruction_loss: 461.7101 - kl_loss: 2.7765\nEpoch 56/500\n8/8 [==============================] - 0s 3ms/step - loss: 462.7393 - reconstruction_loss: 459.9529 - kl_loss: 2.7864\nEpoch 57/500\n8/8 [==============================] - 0s 2ms/step - loss: 464.4623 - reconstruction_loss: 461.6994 - kl_loss: 2.7629\nEpoch 58/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.9419 - reconstruction_loss: 461.1779 - kl_loss: 2.7640\nEpoch 59/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.6575 - reconstruction_loss: 461.8954 - kl_loss: 2.7620\nEpoch 60/500\n8/8 [==============================] - 0s 2ms/step - loss: 463.1304 - reconstruction_loss: 460.3876 - kl_loss: 2.7428\nEpoch 61/500\n8/8 [==============================] - 0s 3ms/step - loss: 464.1183 - reconstruction_loss: 461.3852 - kl_loss: 2.7331\nEpoch 62/500\n8/8 [==============================] - 0s 3ms/step - loss: 463.5042 - reconstruction_loss: 460.7839 - kl_loss: 2.7202\nEpoch 63/500\n" ], [ "z = vae.encoder.predict(X_train)[0]", "_____no_output_____" ] ], [ [ "### Plot model output", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots( figsize=(8, 8))\nsc = ax.scatter(\n z[:, 0],\n z[:, 1],\n c=Y_train.astype(int)[:len(z)].flatten(),\n cmap=\"tab10\",\n s=0.1,\n alpha=0.5,\n rasterized=True,\n)\nax.axis('equal')\nax.set_title(\"UMAP in 
Tensorflow embedding\", fontsize=20)\nplt.colorbar(sc, ax=ax);", "_____no_output_____" ], [ "z_recon = decoder.predict(z)", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.scatter(z_recon[:,0], z_recon[:,1], s = 1, c = z_recon[:,0], alpha = 1)\nax.axis('equal')", "_____no_output_____" ] ], [ [ "### Save output", "_____no_output_____" ] ], [ [ "from tfumap.paths import ensure_dir, MODEL_DIR", "_____no_output_____" ], [ "dataset = 'moons'", "_____no_output_____" ], [ "output_dir = MODEL_DIR/'projections'/ dataset / 'vae'\nensure_dir(output_dir)", "_____no_output_____" ], [ "encoder.save(output_dir / 'encoder')", "WARNING: Logging before flag parsing goes to stderr.\nW0822 11:37:27.278963 140345843763008 deprecation.py:323] From /mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\nW0822 11:37:27.290274 140345843763008 deprecation.py:323] From /mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\nI0822 11:37:27.626836 140345843763008 builder_impl.py:775] Assets written to: /mnt/cube/tsainbur/Projects/github_repos/umap_tf_networks/models/projections/moons/vae/encoder/assets\n" ], [ "# fix: save the decoder under its own directory (this cell previously overwrote the saved encoder)\ndecoder.save(output_dir / 'decoder')", "I0822 11:37:28.028743 140345843763008 builder_impl.py:775] Assets written to: /mnt/cube/tsainbur/Projects/github_repos/umap_tf_networks/models/projections/moons/vae/encoder/assets\n" ], [ "#loss_df.to_pickle(output_dir / 'loss_df.pickle')", "_____no_output_____" ], [ "np.save(output_dir / 'z.npy', z)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a4a5777efe3cbdeaac06d20f857987b24902633
22,176
ipynb
Jupyter Notebook
Mask Detection.ipynb
anushkarjain/Face-Mask-Detector
28d47320dc743aec2ee154a011b4669204c67034
[ "MIT" ]
null
null
null
Mask Detection.ipynb
anushkarjain/Face-Mask-Detector
28d47320dc743aec2ee154a011b4669204c67034
[ "MIT" ]
null
null
null
Mask Detection.ipynb
anushkarjain/Face-Mask-Detector
28d47320dc743aec2ee154a011b4669204c67034
[ "MIT" ]
null
null
null
34.977918
364
0.477498
[ [ [ "import tensorflow as tf", "_____no_output_____" ], [ "label_dict={\"with_mask\":0, \"without_mask\":1} #dictionary", "_____no_output_____" ], [ "categories=[\"with_mask\",\"without_mask\"] #list", "_____no_output_____" ], [ "label=[0,1]", "_____no_output_____" ], [ "data_path=\"C:\\\\Users\\\\anush\\\\Documents\\\\dataset\" ", "_____no_output_____" ], [ "import cv2,os", "_____no_output_____" ], [ "data=[] \ntarget=[] #empty lists", "_____no_output_____" ], [ "for category in categories:\n folder_path=os.path.join(data_path,category)\n img_names=os.listdir(folder_path)\n for img_name in img_names:\n img_path=os.path.join(folder_path,img_name)\n img=cv2.imread(img_path)\n try:\n gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n resized=cv2.resize(gray,(100,100))\n data.append(resized)\n target.append(label_dict[category])\n except Exception as e:\n pass", "_____no_output_____" ], [ "import numpy as np\ndata=np.array(data)\ndata=data/255.0", "_____no_output_____" ], [ "data", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ], [ "data=np.reshape(data,(data.shape[0],100,100,1))", "_____no_output_____" ], [ "data.shape", "_____no_output_____" ], [ "target=np.array(target)", "_____no_output_____" ], [ "target.shape", "_____no_output_____" ], [ "from keras.utils import np_utils", "_____no_output_____" ], [ "new_target=np_utils.to_categorical(target)", "_____no_output_____" ], [ "new_target.shape", "_____no_output_____" ], [ "from keras.models import Sequential\nfrom keras.layers import Dense, Activation, Flatten, Dropout\nfrom keras.layers import Conv2D,MaxPooling2D", "_____no_output_____" ], [ "model = Sequential()\nmodel.add(Conv2D(200,(3,3),input_shape=data.shape[1:], activation = \"relu\"))\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Conv2D(100,(3,3), activation = \"relu\"))\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Flatten())\nmodel.add(Dropout(0.5))\nmodel.add(Dense(50, activation='relu'))\nmodel.add(Dense(2, activation='softmax'))", "_____no_output_____" ], [ "model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"] )", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "train_data,test_data,train_target,test_target =train_test_split(data,new_target,test_size=0.1)", "_____no_output_____" ], [ "from keras.callbacks import ModelCheckpoint", "_____no_output_____" ], [ "checkpoint=ModelCheckpoint(\"model-{epoch:03d}.model\", save_best_only=True,mode=\"auto\")\nhistory=model.fit(train_data,train_target,epochs=30,validation_split=0.2,callbacks=[checkpoint])", "Epoch 1/30\n31/31 [==============================] - ETA: 0s - loss: 0.7402 - accuracy: 0.5475WARNING:tensorflow:From C:\\Users\\anush\\AppData\\Roaming\\Python\\Python38\\site-packages\\tensorflow\\python\\training\\tracking\\tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\nWARNING:tensorflow:From C:\\Users\\anush\\AppData\\Roaming\\Python\\Python38\\site-packages\\tensorflow\\python\\training\\tracking\\tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\nINFO:tensorflow:Assets written to: 
model-001.model\\assets\n31/31 [==============================] - 81s 3s/step - loss: 0.7402 - accuracy: 0.5475 - val_loss: 0.7040 - val_accuracy: 0.4879\nEpoch 2/30\n31/31 [==============================] - ETA: 0s - loss: 0.5647 - accuracy: 0.7040INFO:tensorflow:Assets written to: model-002.model\\assets\n31/31 [==============================] - 92s 3s/step - loss: 0.5647 - accuracy: 0.7040 - val_loss: 0.5475 - val_accuracy: 0.7339\nEpoch 3/30\n31/31 [==============================] - ETA: 0s - loss: 0.4113 - accuracy: 0.8020INFO:tensorflow:Assets written to: model-003.model\\assets\n31/31 [==============================] - 74s 2s/step - loss: 0.4113 - accuracy: 0.8020 - val_loss: 0.4020 - val_accuracy: 0.8387\nEpoch 4/30\n31/31 [==============================] - ETA: 0s - loss: 0.2730 - accuracy: 0.8990INFO:tensorflow:Assets written to: model-004.model\\assets\n31/31 [==============================] - 75s 2s/step - loss: 0.2730 - accuracy: 0.8990 - val_loss: 0.3022 - val_accuracy: 0.8790\nEpoch 5/30\n31/31 [==============================] - ETA: 0s - loss: 0.1901 - accuracy: 0.9313INFO:tensorflow:Assets written to: model-005.model\\assets\n31/31 [==============================] - 77s 2s/step - loss: 0.1901 - accuracy: 0.9313 - val_loss: 0.2445 - val_accuracy: 0.8952\nEpoch 6/30\n31/31 [==============================] - ETA: 0s - loss: 0.1668 - accuracy: 0.9313INFO:tensorflow:Assets written to: model-006.model\\assets\n31/31 [==============================] - 76s 2s/step - loss: 0.1668 - accuracy: 0.9313 - val_loss: 0.1959 - val_accuracy: 0.9194\nEpoch 7/30\n31/31 [==============================] - ETA: 0s - loss: 0.1292 - accuracy: 0.9515INFO:tensorflow:Assets written to: model-007.model\\assets\n31/31 [==============================] - 76s 2s/step - loss: 0.1292 - accuracy: 0.9515 - val_loss: 0.1787 - val_accuracy: 0.9435\nEpoch 8/30\n31/31 [==============================] - 74s 2s/step - loss: 0.0918 - accuracy: 0.9657 - val_loss: 0.1874 - val_accuracy: 0.9073\nEpoch 9/30\n31/31 [==============================] - 72s 2s/step - loss: 0.1017 - accuracy: 0.9616 - val_loss: 0.3933 - val_accuracy: 0.8669\nEpoch 10/30\n31/31 [==============================] - ETA: 0s - loss: 0.0885 - accuracy: 0.9747INFO:tensorflow:Assets written to: model-010.model\\assets\n31/31 [==============================] - 75s 2s/step - loss: 0.0885 - accuracy: 0.9747 - val_loss: 0.1662 - val_accuracy: 0.9435\nEpoch 11/30\n31/31 [==============================] - ETA: 0s - loss: 0.0539 - accuracy: 0.9828INFO:tensorflow:Assets written to: model-011.model\\assets\n31/31 [==============================] - 77s 2s/step - loss: 0.0539 - accuracy: 0.9828 - val_loss: 0.1286 - val_accuracy: 0.9556\nEpoch 12/30\n31/31 [==============================] - 72s 2s/step - loss: 0.0449 - accuracy: 0.9818 - val_loss: 0.2067 - val_accuracy: 0.9315\nEpoch 13/30\n31/31 [==============================] - 71s 2s/step - loss: 0.0467 - accuracy: 0.9828 - val_loss: 0.3228 - val_accuracy: 0.9113\nEpoch 14/30\n31/31 [==============================] - 73s 2s/step - loss: 0.0970 - accuracy: 0.9596 - val_loss: 0.1665 - val_accuracy: 0.9435\nEpoch 15/30\n31/31 [==============================] - 72s 2s/step - loss: 0.0577 - accuracy: 0.9808 - val_loss: 0.1477 - val_accuracy: 0.9395\nEpoch 16/30\n31/31 [==============================] - 73s 2s/step - loss: 0.0574 - accuracy: 0.9818 - val_loss: 0.1441 - val_accuracy: 0.9516\nEpoch 17/30\n31/31 [==============================] - 73s 2s/step - loss: 0.0530 - accuracy: 0.9778 - val_loss: 0.1877 - 
val_accuracy: 0.9315\nEpoch 18/30\n31/31 [==============================] - ETA: 0s - loss: 0.0292 - accuracy: 0.9899INFO:tensorflow:Assets written to: model-018.model\\assets\n31/31 [==============================] - 76s 2s/step - loss: 0.0292 - accuracy: 0.9899 - val_loss: 0.1220 - val_accuracy: 0.9677\nEpoch 19/30\n31/31 [==============================] - 75s 2s/step - loss: 0.0213 - accuracy: 0.9949 - val_loss: 0.1950 - val_accuracy: 0.9395\nEpoch 20/30\n31/31 [==============================] - 76s 2s/step - loss: 0.0222 - accuracy: 0.9939 - val_loss: 0.1358 - val_accuracy: 0.9597\nEpoch 21/30\n31/31 [==============================] - 71s 2s/step - loss: 0.0176 - accuracy: 0.9939 - val_loss: 0.1586 - val_accuracy: 0.9556\nEpoch 22/30\n31/31 [==============================] - 73s 2s/step - loss: 0.0210 - accuracy: 0.9919 - val_loss: 0.1463 - val_accuracy: 0.9637\nEpoch 23/30\n31/31 [==============================] - 72s 2s/step - loss: 0.0293 - accuracy: 0.9879 - val_loss: 0.1368 - val_accuracy: 0.9637\nEpoch 24/30\n31/31 [==============================] - ETA: 0s - loss: 0.0237 - accuracy: 0.9929INFO:tensorflow:Assets written to: model-024.model\\assets\n31/31 [==============================] - 81s 3s/step - loss: 0.0237 - accuracy: 0.9929 - val_loss: 0.1038 - val_accuracy: 0.9677\nEpoch 25/30\n31/31 [==============================] - 69s 2s/step - loss: 0.0139 - accuracy: 0.9949 - val_loss: 0.1269 - val_accuracy: 0.9677\nEpoch 26/30\n31/31 [==============================] - 74s 2s/step - loss: 0.0136 - accuracy: 0.9960 - val_loss: 0.1397 - val_accuracy: 0.9476\nEpoch 27/30\n31/31 [==============================] - 64s 2s/step - loss: 0.0195 - accuracy: 0.9929 - val_loss: 0.1300 - val_accuracy: 0.9677\nEpoch 28/30\n31/31 [==============================] - 65s 2s/step - loss: 0.0151 - accuracy: 0.9939 - val_loss: 0.1276 - val_accuracy: 0.9718\nEpoch 29/30\n31/31 [==============================] - 61s 2s/step - loss: 0.0164 - accuracy: 0.9960 - val_loss: 0.1372 - val_accuracy: 0.9677\nEpoch 30/30\n31/31 [==============================] - 64s 2s/step - loss: 0.0119 - accuracy: 0.9980 - val_loss: 0.1403 - val_accuracy: 0.9516\n" ], [ "face_cascader=cv2.CascadeClassifier(cv2.data.haarcascades + \"haarcascade_frontalface_default.xml\")", "_____no_output_____" ], [ "img=cv2.imread(\"C:\\\\Users\\\\anush\\\\Desktop\\\\Anushka.jpeg\")", "_____no_output_____" ], [ "gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\nfaces=face_cascader.detectMultiScale(img,1.3,5) \n", "_____no_output_____" ], [ "faces", "_____no_output_____" ], [ "labels_dict={0:'MASK',1:'NO MASK'}\ncolor_dict={0:(0,255,0),1:(0,0,255)}", "_____no_output_____" ], [ "source=cv2.VideoCapture(0)\nwhile(True):\n\n    ret,img=source.read()\n    #gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n    faces=face_cascader.detectMultiScale(img,1.3,5) \n\n    for (x,y,w,h) in faces:\n        \n        face_img=img[y:y+h,x:x+w]  # fix: rows are indexed by height, columns by width\n        gray_face=cv2.cvtColor(face_img,cv2.COLOR_BGR2GRAY)  # fix: the model was trained on grayscale images\n        resized=cv2.resize(gray_face,(100,100))\n        #normalized=resized/255.0\n        \n        #result=model.predict(normalized)\n        normimage=resized/255\n        reshapeimage=np.reshape(normimage,(1,100,100,1))  # fix: one grayscale crop -> batch shape (1,100,100,1)\n        modelop=model.predict(reshapeimage)\n        \n        label=np.argmax(modelop,axis=1)[0]  # fix: take the prediction for the single face crop\n        \n        cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[label],2)\n        cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[label],1)\n        \n        cv2.putText(img, labels_dict[label], (x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.8,(255,255,255),2)\n        \n       # cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)\n       # cv2.rectangle(img,(x,y-40),(x+w,y),(0,0,255),1)\n        \n        #cv2.putText(img, \"face\", (x, 
y-10),cv2.FONT_HERSHEY_SIMPLEX,0.8,(255,255,255),2)\n \n \n cv2.imshow(\"checking...\",img)\n key=cv2.waitKey(2)\n \n if(key==27):\n break\n \ncv2.destroyAllWindows()\nsource.release()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4a5becbe959921ddd95e9ebdf2f414329cb132
95,633
ipynb
Jupyter Notebook
examples/tutorial.ipynb
Zsailer/pyasr
454215168ff5cb95850bb61a3e5b366b983a1a66
[ "BSD-3-Clause" ]
5
2019-04-11T22:15:00.000Z
2022-03-22T06:21:13.000Z
examples/tutorial.ipynb
Zsailer/pyasr
454215168ff5cb95850bb61a3e5b366b983a1a66
[ "BSD-3-Clause" ]
1
2019-05-08T23:30:01.000Z
2019-05-09T20:36:31.000Z
examples/tutorial.ipynb
Zsailer/pyasr
454215168ff5cb95850bb61a3e5b366b983a1a66
[ "BSD-3-Clause" ]
5
2018-09-08T08:53:53.000Z
2021-03-09T15:26:03.000Z
127.340879
50,041
0.628664
[ [ [ "# Ancestral sequence reconstruction in Python", "_____no_output_____" ], [ "Imports for tutorial.", "_____no_output_____" ] ], [ [ "import phylopandas as pd\nimport dendropy as d\nimport pyasr\nimport toytree", "_____no_output_____" ] ], [ [ "Read sequences and tree data into a **single** dataframe (thanks, Phylopandas). ", "_____no_output_____" ] ], [ [ "# Use phylopandas to read a set of ancestor.\ndf = pd.read_fasta('test.fasta')\n\n# Read the tree data that relates these sequences \ndf = df.phylo.read_newick('tree.newick', combine_on=\"id\")", "_____no_output_____" ] ], [ [ "Reconstruct a tree in a single line of Python code.", "_____no_output_____" ] ], [ [ "# Reconstruct nodes in tree.\ndf = pyasr.reconstruct(df, working_dir='test', alpha=1.235)", "newick\nnewick\n" ] ], [ [ "Write out ancester dataframe.", "_____no_output_____" ] ], [ [ "# Slice out ancestors\nancestors = df[df.type == 'node']\n\n# Write out Ancestors CSV\nancestors.to_csv('ancestors.csv')\n\n# Preview some ancestors\nancestors.head()", "_____no_output_____" ] ], [ [ "You can visualize your tree using the `toytree` package.", "_____no_output_____" ] ], [ [ "# Get a newick string to feed into ToyTree\nnewick = df.phylo.to_newick(taxon_col='id', node_col='reconstruct_label')\n\n# Draw tree.\ntree_to_draw = toytree.tree(newick)\ntree_to_draw.draw(width=400, height=400,\n tip_labels_align=True,\n use_edge_lengths=True,\n node_labels=True)", "newick\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a4a7358bcfea0ecdcf77b983623b6fcae3085e3
316,083
ipynb
Jupyter Notebook
power-production.ipynb
radekv23/power-production
ab2e5ae2f7cfcd30f96d0bb694287db48b6b5435
[ "MIT" ]
1
2020-11-11T17:47:28.000Z
2020-11-11T17:47:28.000Z
power-production.ipynb
radekv23/power-production
ab2e5ae2f7cfcd30f96d0bb694287db48b6b5435
[ "MIT" ]
null
null
null
power-production.ipynb
radekv23/power-production
ab2e5ae2f7cfcd30f96d0bb694287db48b6b5435
[ "MIT" ]
null
null
null
132.52956
24,400
0.867256
[ [ [ "\n\n# Power Production Project for *Fundamentals of Data Analysis* at GMIT\nby Radek Wojtczak G00352936<br>\n\n\n**Instructions:**\n\n>In this project you must perform and explain simple linear regression using Python\non the powerproduction dataset. The goal is to accurately predict wind turbine power output from wind speed values using the data set as a basis.\nYour submission must be in the form of a git repository containing, at a minimum, the\nfollowing items:\n>1. Jupyter notebook that performs simple linear regression on the data set.\n>2. In that notebook, an explanation of your regression and an analysis of its accuracy.\n>3. Standard items in a git repository such as a README.\n\n>To enhance your submission, you might consider comparing simple linear regression to\nother types of regression on this data set.\n", "_____no_output_____" ], [ "# Wind power\n\n\n\n**How does a wind turbine work?**\n\nWind turbines can turn the power of wind into the electricity we all use to power our homes and businesses. They can be stand-alone, supplying just one or a very small number of homes or businesses, or they can be clustered to form part of a wind farm. \n\nThe visible parts of the wind farm that we’re all used to seeing – those towering white or pale grey turbines. Each of these turbines consists of a set of blades, a box beside them called a nacelle and a shaft. The wind – and this can be just a gentle breeze – makes the blades spin, creating kinetic energy. The blades rotating in this way then also make the shaft in the nacelle turn and a generator in the nacelle converts this kinetic energy into electrical energy.\n\n![How it works](img/works.jpg) \n\n**What happens to the wind-turbine generated electricity next?**\n\nTo connect to the national grid, the electrical energy is then passed through a transformer on the site that increases the voltage to that used by the national electricity system. It’s at this stage that the electricity usually moves onto the National Grid transmission network, ready to then be passed on so that, eventually, it can be used in homes and businesses. Alternatively, a wind farm or a single wind turbine can generate electricity that is used privately by an individual or small set of homes or businesses.\n \n\n**How strong does the wind need to be for a wind turbine to work?**\n\nWind turbines can operate in anything from very light to very strong wind speeds. They generate around 80% of the time, but not always at full capacity. In really high winds they shut down to prevent damage.\n\n![Frequency](img/freq2.png)\n\n**Where are wind farms located?**\n\nWind farms tend to be located in the windiest places possible, to maximise the energy they can create – this is why you’ll be more likely to see them on hillsides or at the coast. Wind farms that are in the sea are called offshore wind farms, whereas those on dry land are termed onshore wind farms.", "_____no_output_____" ], [ "**Wind energy in Ireland**\n\nWind energy is currently the largest contributing resource of renewable energy in Ireland. It is both Ireland’s largest and cheapest renewable electricity resource. In 2018 Wind provided 85% of Ireland’s renewable electricity and 30% of our total electricity demand. It is the second greatest source of electricity generation in Ireland after natural gas. 
\n**Where are wind farms located?**\n\nWind farms tend to be located in the windiest places possible, to maximise the energy they can create – this is why you’ll be more likely to see them on hillsides or at the coast. Wind farms that are in the sea are called offshore wind farms, whereas those on dry land are termed onshore wind farms.", "_____no_output_____" ], [ "**Wind energy in Ireland**\n\nWind energy is currently the largest contributing resource of renewable energy in Ireland. It is both Ireland’s largest and cheapest renewable electricity resource. In 2018 wind provided 85% of Ireland’s renewable electricity and 30% of our total electricity demand. It is the second greatest source of electricity generation in Ireland after natural gas. Ireland is one of the leading countries in its use of wind energy and was in 3rd place worldwide in 2018, after Denmark and Uruguay.\n\n![Windfarms in Ireland](img/map.jpg)", "_____no_output_____" ], [ "### Exploring the dataset:", "_____no_output_____" ] ], [ [ "# importing all necessary packages\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model as lm\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nimport seaborn as sns \nfrom sklearn import metrics \nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import r2_score\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom matplotlib import pyplot", "_____no_output_____" ], [ "# loading our dataset, setting column names and changing the index to start from 1 instead of 0\ndf = pd.read_csv('dataset/powerproduction.txt', sep=\",\", header=None)\ndf.columns = [\"speed\", \"power\"]\ndf = df[1:]\ndf", "_____no_output_____" ], [ "# checking for nan values\ncount_nan = len(df) - df.count()\ncount_nan", "_____no_output_____" ], [ "# converting strings to floats\ndf = df.astype(float)", "_____no_output_____" ], [ "# showing the first 20 results\ndf.head(20)", "_____no_output_____" ], [ "# basic statistics of the speed column\ndf['speed'].describe()", "_____no_output_____" ], [ "# basic statistics of the power column\ndf['power'].describe()", "_____no_output_____" ], [ "# histogram of the 'speed' data\nsns.set_style('darkgrid')\nsns.distplot(df['speed'])\nplt.show()", "_____no_output_____" ] ], [ [ "We can clearly see a normal distribution in the 'speed' data above.", "_____no_output_____" ] ], [ [ "# histogram of the 'power' data\nsns.set_style('darkgrid')\nsns.distplot(df['power'])\nplt.show()", "_____no_output_____" ] ], [ [ "As we can see above, this distribution looks like a bimodal distribution.", "_____no_output_____" ] ], [ [ "# scatter plot of our dataset\nplt.xlabel('wind speed',fontsize = 16)\nplt.ylabel('power',fontsize = 16)\nplt.scatter(df['speed'],df['power'])\nplt.show()", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "## Regression\n\nRegression analysis is a set of statistical methods used for the estimation of relationships between a dependent variable and one or more independent variables. It can be utilized to assess the strength of the relationship between variables and for modeling the future relationship between them.\n\n\nThe term regression is used when you try to find the relationship between variables.\n\nIn Machine Learning, and in statistical modeling, that relationship is used to predict the outcome of future events.", "_____no_output_____" ], [ "## Linear Regression\n\nThe term “linearity” in algebra refers to a linear relationship between two or more variables. If we draw this relationship in a two-dimensional space (between two variables), we get a straight line.\n\nSimple linear regression is useful for finding the relationship between two continuous variables. One is the predictor or independent variable and the other is the response or dependent variable. It looks for a statistical relationship, not a deterministic relationship. The relationship between two variables is said to be deterministic if one variable can be accurately expressed by the other. For example, using temperature in degrees Celsius it is possible to accurately predict Fahrenheit. A statistical relationship is not exact in determining the relationship between two variables. For example, the relationship between height and weight.\nThe core idea is to obtain a line that best fits the data. The best fit line is the one for which the total prediction error (over all data points) is as small as possible. Error is the distance from a point to the regression line.", "_____no_output_____" ] ],
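 [ [ "For quick reference (an added note), the fitted simple linear regression line is\n\n$$\hat{y} = \beta_0 + \beta_1 x$$\n\nand the least-squares estimates have the closed form\n\n$$\beta_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad \beta_0 = \bar{y} - \beta_1 \bar{x}$$\n\nThe short sketch below computes these estimates directly with numpy; the scikit-learn model fitted afterwards should reproduce the same slope and intercept.", "_____no_output_____" ] ], [ [ "# Added sketch: closed-form least-squares slope and intercept computed with numpy.\n# These should match model.coef_ and model.intercept_ from the scikit-learn fit below.\nx_arr = df['speed'].values\ny_arr = df['power'].values\nslope = np.sum((x_arr - x_arr.mean()) * (y_arr - y_arr.mean())) / np.sum((x_arr - x_arr.mean()) ** 2)\nintercept = y_arr.mean() - slope * x_arr.mean()\nprint('slope:', slope)\nprint('intercept:', intercept)", "_____no_output_____" ] ],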
 [ [ "# divide the data into x = speed and y = power\nx = df['speed']\ny = df['power']\n\n# model of Linear regression\nmodel = LinearRegression(fit_intercept=True)\n\n# fitting the model (.values gives a plain array that reshapes cleanly)\nmodel.fit(x.values[:, np.newaxis], y)\n\n# making predictions\nxfit = np.linspace(0, 25, 100)\nyfit = model.predict(xfit[:, np.newaxis])\n\n# creating plot\nplt.xlabel('wind speed',fontsize = 16)\nplt.ylabel('power',fontsize = 16)\nplt.scatter(x, y)\nplt.plot(xfit, yfit, color=\"red\");", "_____no_output_____" ], [ "# slope and intercept parameters\nprint(\"Parameters:\", model.coef_, model.intercept_)\nprint(\"Model slope: \", model.coef_[0])\nprint(\"Model intercept:\", model.intercept_)", "Parameters: [4.91759567] -13.899902630519634\nModel slope:  4.9175956654046695\nModel intercept: -13.899902630519634\n" ] ], [ [ "**Different approach: Simple linear regression model**\n\nThe fitted line helps to determine whether our model is predicting well on the test dataset.\nWith the help of a line we can calculate the error of each data point on the basis of how far it is from the line.\nThe error could be positive or negative, and on that basis we can calculate the cost function.\nI have used a Fitted Line Plot to display the relationship between one continuous predictor and a response. A fitted line plot shows a scatterplot of the data with a regression line representing the regression equation.", "_____no_output_____" ], [ "A best fitted line can be roughly determined using an eyeball method by drawing a straight line on a scatter plot so that the number of points above the line and below the line is about equal (and the line passes through as many points as possible). As we can see below, our data are a little bit sinusoidal, and in this case the best fitted line is trying to cover most of the points that lie on the diagonal, but it also has to cover the other data points, so it is a little bit tweaked due to overestimation and underestimation.", "_____no_output_____" ], [ "I divided the data into training and testing samples at a ratio of 70-30%. After that I will apply different models to compare the accuracy scores of all models.", "_____no_output_____" ] ], [ [ "# training our main model\nx_train,x_test,y_train,y_test = train_test_split(df[['speed']],df.power,test_size = 0.3)", "_____no_output_____" ] ], [ [ "Simple linear regression model", "_____no_output_____" ] ], [ [ "reg_simple = lm.LinearRegression()\nreg_simple.fit(x_train,y_train)", "_____no_output_____" ] ], [ [ "Best fit line on the test dataset with simple linear regression", "_____no_output_____" ] ], [ [ "plt.xlabel('wind speed',fontsize = 16)\nplt.ylabel('power',fontsize = 16)\nplt.scatter(x_test,y_test, color='blue')\nplt.plot(x_test,reg_simple.predict(x_test),color = 'r')\nplt.show()", "_____no_output_____" ] ], [ [ "Slope, y-intercept and score of our predictions.", "_____no_output_____" ] ], [ [ "reg_simple.coef_ #slope", "_____no_output_____" ], [ "reg_simple.intercept_ #y-intercept", "_____no_output_____" ], [ "reg_simple.score(x_test,y_test)", "_____no_output_____" ] ],
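 [ [ "The score method above returns the $R^2$ coefficient of determination,\n\n$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$\n\nAs an added cross-check (using the metrics module and r2_score already imported above), we can also report the mean absolute error and root mean squared error of the simple model on the test set.", "_____no_output_____" ] ], [ [ "# Added sketch: extra error metrics for the simple linear model on the test set\ny_pred_simple = reg_simple.predict(x_test)\nprint('MAE: ', metrics.mean_absolute_error(y_test, y_pred_simple))\nprint('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred_simple)))\nprint('R^2: ', r2_score(y_test, y_pred_simple))", "_____no_output_____" ] ],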
 [ [ "## Ridge regression\n\nRidge regression is an extension of linear regression where the loss function is modified to minimize the complexity of the model. This modification is done by adding a penalty parameter that is equivalent to the square of the magnitude of the coefficients.\n\nRidge regression is a technique for analyzing multiple regression data that suffer from multicollinearity. When\nmulticollinearity occurs, least squares estimates are unbiased, but their variances are large, so they may be far from\nthe true value. By adding a degree of bias to the regression estimates, ridge regression reduces the standard errors.\nIt is hoped that the net effect will be to give estimates that are more reliable.", "_____no_output_____" ] ],
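 [ [ "In terms of the objective function (added for reference), ridge regression minimizes the penalized least-squares loss\n\n$$\min_{\beta} \; \lVert y - X\beta \rVert_2^2 + \alpha \lVert \beta \rVert_2^2$$\n\nwhere $\alpha \ge 0$ is the alpha argument passed to lm.Ridge below; larger values shrink the coefficients more strongly. The Lasso further down replaces the squared penalty with the $\ell_1$ norm $\alpha \lVert \beta \rVert_1$, which can drive some coefficients exactly to zero.", "_____no_output_____" ] ],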
Then the LARS algorithm provides a means of producing an estimate of which variables to include, as well as their coefficients.\n\nInstead of giving a vector result, the LARS solution consists of a curve denoting the solution for each value of the L1 norm of the parameter vector. The algorithm is similar to forward stepwise regression, but instead of including variables at each step, the estimated parameters are increased in a direction equiangular to each one's correlations with the residual.", "_____no_output_____" ] ], [ [ "reg_lars = lm.Lars(n_nonzero_coefs=1)\nreg_lars.fit(x_train,y_train)", "_____no_output_____" ], [ "plt.xlabel('wind speed',fontsize = 16)\nplt.ylabel('power',fontsize = 16)\nplt.scatter(x_test,y_test, color='blue')\nplt.plot(x_test,reg_lars.predict(x_test),color = 'r')\nplt.show()", "_____no_output_____" ] ], [ [ "Slope, y-intercept and score of our predictions.", "_____no_output_____" ] ], [ [ "reg_lars.coef_ #slope", "_____no_output_____" ], [ "reg_lars.intercept_ #y-intercept", "_____no_output_____" ], [ "reg_lars.score(x_test,y_test)", "_____no_output_____" ] ], [ [ "The **accuracy** of all models is almost 78%, and models with accuracy between 70% and 80% are considered good models.<br>\nIf the score is between 80% and 90%, the model is considered an excellent model. If the score is between 90% and 100%, it is probably an overfitting case.\n\n<img src=\"img/img2.png\">\n\n\nThe image above explains over- and under-**estimation** of the data. In the image below we can see how the data points are overestimated and underestimated at some points.\n\n\n\n<img src=\"img/img_exp.png\">\n\n", "_____no_output_____" ], [ "## Logistic Regression\n\nLogistic regression is a statistical method for predicting binary classes. The outcome or target variable is dichotomous in nature. Dichotomous means there are only two possible classes. For example, it can be used for cancer detection problems. It computes the probability of an event occurrence.\n\nIt is a special case of the generalized linear model where the target variable is categorical in nature. It uses the log of odds as the dependent variable. Logistic Regression predicts the probability of occurrence of a binary event utilizing a logit function.\n\n**Linear Regression Vs. Logistic Regression**\n\nLinear regression gives you a continuous output, but logistic regression provides a discrete output. Examples of continuous output are house prices and stock prices. Examples of discrete output are predicting whether a patient has cancer or not, or predicting whether a customer will churn. 
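As a compact sketch in standard notation (not taken from this notebook), logistic regression models $P(y=1 \\mid x) = \\frac{1}{1 + e^{-(\\beta_0 + \\beta_1 x)}}$, so the log-odds $\\log\\frac{p}{1-p} = \\beta_0 + \\beta_1 x$ is linear in the input. 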
Linear regression is estimated using Ordinary Least Squares (OLS) while logistic regression is estimated using the Maximum Likelihood Estimation (MLE) approach.\n\n<img src=\"img/linlog.png\">\n", "_____no_output_____" ] ], [ [ "# Logistic regression model\nlogistic_regression = LogisticRegression(max_iter=5000)", "_____no_output_____" ], [ "# importing necessary packages\nfrom sklearn import preprocessing\nfrom sklearn import utils\n\n# encoding data to be able to proceed with Logistic regression\nlab_enc = preprocessing.LabelEncoder()\ny_train_encoded = lab_enc.fit_transform(y_train)\nprint(y_train_encoded)\nprint(utils.multiclass.type_of_target(y_train))\nprint(utils.multiclass.type_of_target(y_train.astype('int')))\nprint(utils.multiclass.type_of_target(y_train_encoded))", "[ 34 214 180 266 37 13 154 0 129 122 135 159 170 29 182 166 307 30\n 279 234 173 230 0 272 296 247 54 112 134 0 15 0 205 158 186 67\n 28 227 194 165 261 216 66 250 190 181 153 1 12 78 0 209 83 31\n 235 300 264 79 285 0 55 218 178 0 174 199 302 0 210 151 281 231\n 308 7 4 19 232 176 211 24 220 304 5 85 267 115 223 228 44 240\n 280 188 0 282 95 219 63 126 161 257 70 207 123 0 179 147 293 236\n 198 11 0 59 145 49 277 294 130 0 283 33 16 56 224 73 0 144\n 62 102 148 133 118 61 150 0 0 260 90 71 0 155 88 0 288 259\n 121 192 20 312 201 193 0 106 284 204 9 242 105 99 46 0 77 3\n 39 45 163 213 306 185 75 117 76 286 42 255 0 8 68 295 140 36\n 275 226 197 35 0 196 132 10 100 120 175 65 305 111 229 141 69 289\n 263 172 0 271 0 17 97 103 220 0 258 160 303 127 222 274 91 290\n 238 200 23 14 2 93 0 128 72 84 27 6 162 276 212 183 278 244\n 268 237 125 136 156 142 221 0 38 0 177 298 187 291 18 113 110 309\n 51 251 243 146 25 119 0 109 292 249 168 149 87 107 195 41 40 101\n 26 52 241 270 310 253 167 116 138 299 81 245 114 0 32 0 206 143\n 169 252 254 0 0 94 139 86 82 256 57 208 58 0 0 96 0 0\n 184 60 269 137 21 50 108 217 301 191 47 0 124 215 262 74 171 265\n 22 64 189 152 287 202 89 92 203 80 297 157 43 164 239 53 98 311\n 233 225 104 246 273 48 248 131]\ncontinuous\nmulticlass\nmulticlass\n" ], [ "# training model\nlogistic_regression.fit(x_train, y_train_encoded)", "_____no_output_____" ], [ "logistic_regression.fit(x_train, y_train_encoded)", "_____no_output_____" ], [ "# predicting \"y\"\ny_pred = logistic_regression.predict(x_test)", "_____no_output_____" ], [ "# creating plot\nplt.xlabel('wind speed',fontsize = 16)\nplt.ylabel('power',fontsize = 16)\nplt.scatter(x_test,y_test, color='blue')\nplt.plot(x_test,logistic_regression.predict_proba(x_test)[:,1],color = 'r')\nplt.show()\n", "_____no_output_____" ] ], [ [ "Slope, y-intercept and score of our predictions.", "_____no_output_____" ] ], [ [ "logistic_regression.coef_.mean() #slope", "_____no_output_____" ], [ "logistic_regression.intercept_.mean() #y-intercept", "_____no_output_____" ], [ "test_enc = preprocessing.LabelEncoder()\ny_test_encoded = test_enc.fit_transform(y_test)\nlogistic_regression.score(x_test,y_test_encoded)", "_____no_output_____" ], [ "# trying to get rid of outliers\nfilter = df[\"power\"]==0.0\nfilter", "_____no_output_____" ], [ "# using enumerate() + list comprehension \n# to return true indices. 
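\n# (an equivalent one-liner, assuming numpy is imported as np: np.flatnonzero(filter).tolist())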
\nres = [i for i, val in enumerate(filter) if val] \n \n# printing result \nprint (\"The list indices having True values are : \" + str(res))", "The list indices having True values are : [0, 1, 2, 3, 4, 15, 16, 24, 26, 31, 35, 37, 39, 42, 43, 44, 47, 60, 65, 67, 70, 73, 74, 75, 83, 89, 105, 110, 111, 114, 133, 135, 136, 140, 149, 208, 340, 404, 456, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499]\n" ], [ "# updating the data by dropping \"0\"-power rows, not including the first few data points\nupdate = df.drop(df.index[[15, 16, 24, 26, 31, 35, 37, 39, 42, 43, 44, 47, 60, 65, 67, 70, 73, 74, 75, 83, 89, 105, 110, 111, 114, 133, 135, 136, 140, 149, 208, 340, 404, 456, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499]])\nupdate", "_____no_output_____" ], [ "# training updated data\nx_train,x_test,y_train,y_test = train_test_split(update[['speed']],update.power,test_size = 0.3)", "_____no_output_____" ], [ "# updated model\nlog = LogisticRegression(max_iter=5000)", "_____no_output_____" ], [ "# encoding data again\nlab_enc = preprocessing.LabelEncoder()\ny_train_encoded = lab_enc.fit_transform(y_train)\nprint(y_train_encoded)\nprint(utils.multiclass.type_of_target(y_train))\nprint(utils.multiclass.type_of_target(y_train.astype('int')))\nprint(utils.multiclass.type_of_target(y_train_encoded))", "[ 21 109 0 15 242 273 270 14 159 182 222 300 101 239 35 90 44 59\n 279 205 148 120 291 57 31 100 232 89 93 196 54 255 151 253 283 281\n 301 206 207 65 267 40 251 167 268 229 18 37 91 231 302 312 104 307\n 256 1 209 23 257 39 82 25 203 246 284 41 71 85 311 88 228 7\n 95 252 69 122 58 72 265 140 240 10 19 52 63 230 169 80 16 123\n 136 224 247 126 124 142 68 262 11 288 287 106 147 12 105 204 29 94\n 49 133 315 194 20 263 166 30 294 111 113 245 295 138 61 187 211 195\n 218 115 75 103 223 158 234 42 241 198 76 97 172 236 248 0 160 185\n 53 98 83 276 77 292 130 299 153 178 48 173 293 137 296 314 298 141\n 144 43 60 112 38 174 162 210 217 175 32 129 235 86 221 51 108 26\n 45 47 70 303 155 250 28 259 277 308 24 186 170 96 192 254 107 208\n 0 128 55 216 146 73 225 9 3 99 226 102 180 22 285 87 305 164\n 290 27 119 36 227 199 154 67 238 56 289 280 92 304 50 157 168 78\n 156 260 8 243 297 116 46 202 212 2 237 125 150 188 127 118 131 220\n 258 266 313 278 264 215 189 282 244 81 249 165 274 306 132 4 183 145\n 134 310 261 177 200 275 143 216 149 121 201 62 66 84 117 181 213 184\n 269 110 190 286 152 74 191 171 79 5 17 233 219 13 114 197 33 34\n 161 179 214 271 6 135 176 139 64 193 309 163 272]\ncontinuous\nmulticlass\nmulticlass\n" ], [ "# fitting data\nlog.fit(x_train, y_train_encoded)", "_____no_output_____" ], [ "# predicting \"y\"\ny_pred = log.predict_proba(x_test)[:,1]", "_____no_output_____" ], [ "# creating plot: the red line is the predicted probability of one encoded power class (class 300)\nplt.xlabel('wind speed',fontsize = 16)\nplt.ylabel('power',fontsize = 16)\nplt.scatter(x_test,y_test, color='blue')\nplt.plot(x_test,log.predict_proba(x_test)[:,300],color = 'r')\nplt.show()", "_____no_output_____" ] ], [ [ "**Logistic regression** is not able to handle a large number of categorical features/variables. It is vulnerable to overfitting. Also, it can't solve non-linear problems, which is why it requires a transformation of non-linear features. 
Logistic regression will not perform well with independent variables that are not correlated to the target variable and are very similar or correlated to each other.\n\nIt was very bad on our data, with a score below 0.05, even when I tried to cut outliers. The underlying problem is that power is a continuous target, so label-encoding it turns the task into a classification problem with hundreds of classes, which logistic regression cannot handle well here.", "_____no_output_____" ], [ "## Polynomial regression \n\nis a special case of linear regression where we fit a polynomial equation on data with a curvilinear relationship between the target variable and the independent variables.\n\nIn a curvilinear relationship, the value of the target variable changes in a non-uniform manner with respect to the predictor(s).\n\nThe number of higher-order terms increases with the increasing value of n, and hence the equation becomes more complicated.\n\nWhile there might be a temptation to fit a higher degree polynomial to get lower error, this can result in over-fitting. Always plot the relationships to see the fit and focus on making sure that the curve fits the nature of the problem. Here is an example of how plotting can help:\n\n<img src=\"img/fitting.png\">\n\nEspecially look out for the curve towards the ends and see whether those shapes and trends make sense. Higher polynomials can end up producing weird results on extrapolation.", "_____no_output_____" ] ], [ [ "# Training Polynomial Regression Model\npoly_reg = PolynomialFeatures(degree = 4)\nx_poly = poly_reg.fit_transform(x_train)\npoly_reg.fit(x_poly, y_train)\nlin_reg = LinearRegression()\nlin_reg.fit(x_poly, y_train)", "_____no_output_____" ], [ "# Predict Result with Polynomial Regression\npoly = lin_reg.predict(poly_reg.fit_transform(x_test))\npoly", "_____no_output_____" ], [ "# Change into array\nx = np.array(df['speed'])\ny = np.array(df['power'])", "_____no_output_____" ], [ "# Changing the shape of array\nx = x.reshape(-1,1)\ny = y.reshape(-1,1)", "_____no_output_____" ], [ "# Visualise the Results of Polynomial Regression\nplt.scatter(x_train, y_train, color = 'blue')\nplt.plot(x, lin_reg.predict(poly_reg.fit_transform(x)), color = 'red')\nplt.title('Polynomial Regression')\nplt.xlabel('Wind speed')\nplt.ylabel('Power')\nplt.show()", "_____no_output_____" ] ], [ [ "Slope, y-intercept and score of our predictions.", "_____no_output_____" ] ], [ [ "lin_reg.coef_.mean() #slope", "_____no_output_____" ], [ "lin_reg.intercept_ #y-intercept", "_____no_output_____" ], [ "# R^2 of the polynomial predictions on the test set\nfrom sklearn.metrics import r2_score\nr2_score(y_test, poly) #score", "_____no_output_____" ] ], [ [ "## Spearman’s Rank Correlation\n\nThis statistical method quantifies the degree to which ranked variables are associated by a monotonic function, meaning an increasing or decreasing relationship. As a statistical hypothesis test, the method assumes that the samples are uncorrelated (fail to reject H0).\n\n>The Spearman rank-order correlation is a statistical procedure that is designed to measure the relationship between two variables on an ordinal scale of measurement.\n\n>— Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach, 2009.\n\nThe intuition for the Spearman’s rank correlation is that it calculates a Pearson’s correlation (e.g. a parametric measure of correlation) using the rank values instead of the real values. 
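For samples with no tied ranks, a standard closed form (textbook notation, not taken from this notebook) is $\\rho = 1 - \\frac{6 \\sum d_i^2}{n(n^2 - 1)}$, where $d_i$ is the difference between the two ranks of observation $i$ and $n$ is the number of observations. 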
Pearson’s correlation itself is the covariance (the expected co-deviation of observations from their means) between the two variables, normalized by the spread (standard deviation) of both variables.\n\nSpearman’s rank correlation can be calculated in Python using the spearmanr() SciPy function.\n\nThe function takes two real-valued samples as arguments and returns both the correlation coefficient in the range between -1 and 1 and the p-value for interpreting the significance of the coefficient.", "_____no_output_____" ] ], [ [ "# importing spearman correlation\nfrom scipy.stats import spearmanr\n\n# prepare data\nx = df['speed']\ny = df['power']\n# calculate spearman's correlation\ncoef, p = spearmanr(x, y)\nprint('Spearmans correlation coefficient: %.3f' % coef)\n# interpret the significance\nalpha = 0.05\nif p > alpha:\n    print('Samples are uncorrelated (fail to reject H0) p=%.3f' % p)\nelse:\n    print('Samples are correlated (reject H0) p=%.3f' % p)", "Spearmans correlation coefficient: 0.819\nSamples are correlated (reject H0) p=0.000\n" ] ], [ [ "The statistical test reports a strong positive correlation with a value of 0.819. The p-value is close to zero, which means that observing the data under the assumption that the samples are uncorrelated is very unlikely (e.g. at 95% confidence) and that we can reject the null hypothesis that the samples are uncorrelated.", "_____no_output_____" ], [ "## Kendall’s Rank Correlation\n\nThe intuition for the test is that it calculates a normalized score for the number of matching or concordant rankings between the two samples. As such, the test is also referred to as Kendall’s concordance test.\n\nThe Kendall’s rank correlation coefficient can be calculated in Python using the kendalltau() SciPy function. The test takes the two data samples as arguments and returns the correlation coefficient and the p-value. As a statistical hypothesis test, the method assumes (H0) that there is no association between the two samples.\n", "_____no_output_____" ] ], [ [ "# importing kendall correlation\nfrom scipy.stats import kendalltau\n\n# calculate kendall's correlation\ncoef, p = kendalltau(x, y)\nprint('Kendall correlation coefficient: %.3f' % coef)\n# interpret the significance\nalpha = 0.05\nif p > alpha:\n    print('Samples are uncorrelated (fail to reject H0) p=%.3f' % p)\nelse:\n    print('Samples are correlated (reject H0) p=%.3f' % p)", "Kendall correlation coefficient: 0.728\nSamples are correlated (reject H0) p=0.000\n" ] ], [ [ "Running the example calculates the Kendall’s correlation coefficient as 0.728, which indicates a strong correlation.\n\nThe p-value is close to zero (and printed as zero), as with the Spearman’s test, meaning that we can confidently reject the null hypothesis that the samples are uncorrelated.", "_____no_output_____" ], [ "## Conclusion \n\nSpearman’s & Kendall’s Rank Correlation show us that our data are strongly correlated. After trying Linear, Ridge, Lasso and LARS Lasso regressions, all of them are equally effective, so the best choice would be to stick with Linear Regression to simplify.\n\nAs I wanted to find a better way, I tried Logistic regression and found out it is pretty useless for our dataset, even when I got rid of outliers.\n\nNext in line was Polynomial regression and it was a great success with nearly 90% score. 
Seeing these results, the best approach for our dataset would be Polynomial regression, with Linear regression as our second choice if we would like to keep it simple.\n", "_____no_output_____" ], [ "**References:**\n\n- https://www.goodenergy.co.uk/media/1775/howawindturbineworks.jpg?width=640&height=&center=0.5,0.5&mode=crop\n\n- https://www.nationalgrid.com/stories/energy-explained/how-does-wind-turbine-work\n\n- https://www.pluralsight.com/guides/linear-lasso-ridge-regression-scikit-learn\n\n- https://www.seai.ie/technologies/wind-energy/\n\n- https://towardsdatascience.com/ridge-regression-python-example-f015345d936b\n\n- https://towardsdatascience.com/ridge-and-lasso-regression-a-complete-guide-with-python-scikit-learn-e20e34bcbf0b\n\n- https://realpython.com/linear-regression-in-python/\n\n- https://en.wikipedia.org/wiki/Least-angle_regression\n\n- https://towardsdatascience.com/simple-and-multiple-linear-regression-in-python-c928425168f9\n\n- https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html\n\n- https://www.statisticshowto.com/lasso-regression/\n\n- https://saskeli.github.io/data-analysis-with-python-summer-2019/linear_regression.html\n\n- https://www.w3schools.com/python/python_ml_linear_regression.asp\n\n- https://www.geeksforgeeks.org/linear-regression-python-implementation/\n\n- https://www.kdnuggets.com/2019/03/beginners-guide-linear-regression-python-scikit-learn.html\n\n- https://towardsdatascience.com/an-introduction-to-linear-regression-for-data-science-9056bbcdf675\n\n- https://www.kaggle.com/ankitjha/comparing-regression-models\n\n- https://machinelearningmastery.com/compare-machine-learning-algorithms-python-scikit-learn/\n\n- https://www.datacamp.com/community/tutorials/understanding-logistic-regression-python\n\n- https://www.researchgate.net/post/Is_there_a_test_which_can_compare_which_of_two_regression_models_is_best_explains_more_variance\n\n- https://heartbeat.fritz.ai/logistic-regression-in-python-using-scikit-learn-d34e882eebb1\n\n- https://www.analyticsvidhya.com/blog/2015/08/comprehensive-guide-regression/\n\n- https://towardsdatascience.com/machine-learning-polynomial-regression-with-python-5328e4e8a386\n\n- https://www.w3schools.com/python/python_ml_polynomial_regression.asp\n\n- https://www.dailysmarty.com/posts/polynomial-regression\n\n- https://machinelearningmastery.com/how-to-calculate-nonparametric-rank-correlation-in-python/", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
4a4a887506f2ad3a413525f4cb2299ca05e6f616
48,578
ipynb
Jupyter Notebook
_notebooks/2020-11-06-SMS_Spam_Classifier_Demo.ipynb
tbass134/ml-portfolio
cfed4ff1fb908f6ef18e548844b1b9f3391330db
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-11-06-SMS_Spam_Classifier_Demo.ipynb
tbass134/ml-portfolio
cfed4ff1fb908f6ef18e548844b1b9f3391330db
[ "Apache-2.0" ]
4
2020-11-06T16:41:48.000Z
2021-09-28T05:41:39.000Z
_notebooks/2020-11-06-SMS_Spam_Classifier_Demo.ipynb
tbass134/ml-portfolio
cfed4ff1fb908f6ef18e548844b1b9f3391330db
[ "Apache-2.0" ]
null
null
null
34.137737
336
0.458438
[ [ [ "# Building a Machine Learning model to detect spam in SMS\n> Building a machine learning model to predict whether an SMS message is spam or not\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [jupyter]\n", "_____no_output_____" ], [ "In this notebook, we'll show how to build a simple machine learning model to predict whether an SMS is spam or not. \n\nThe notebook was built to go along with my talk in May 2020 for [Vonage Developer Day](https://www.vonage.com/about-us/vonage-stories/vonage-developer-day/)\n\nyoutube: https://www.youtube.com/watch?v=5d4_HpMLXf4&t=1s\n\nWe'll be using the scikit-learn library to train a model on a set of messages which are labeled as spam and non-spam (aka ham) messages. \n\nAfter our model is trained, we'll deploy it to an AWS Lambda whose input will be a message, and whose output will be the prediction (spam or ham).\n\nBefore we build a model, we'll need some data. So we'll use the [SMS Spam Collection DataSet](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection).\n\nThis dataset contains over 5k messages which are labeled spam or ham.\nIn the following cell, we'll download the dataset", "_____no_output_____" ] ], [ [ "!wget --no-check-certificate https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip\n!unzip /content/smsspamcollection.zip", "--2020-07-22 00:49:18-- https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip\nResolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.252\nConnecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.252|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 203415 (199K) [application/x-httpd-php]\nSaving to: ‘smsspamcollection.zip’\n\nsmsspamcollection.z 100%[===================>] 198.65K 509KB/s in 0.4s \n\n2020-07-22 00:49:19 (509 KB/s) - ‘smsspamcollection.zip’ saved [203415/203415]\n\nArchive: /content/smsspamcollection.zip\n inflating: SMSSpamCollection \n inflating: readme \n" ] ], [ [ "Once we have downloaded the dataset, we'll load it into a Pandas DataFrame and view the first few rows of the dataset.", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf = pd.read_csv(\"/content/SMSSpamCollection\", sep='\\t', header=None, names=['label', 'message'])\ndf.head()", "_____no_output_____" ] ], [ [ "Next, we need to first understand the data before building a model.\n\nWe'll first need to see how many messages are considered spam or ham", "_____no_output_____" ] ], [ [ "df.label.value_counts()", "_____no_output_____" ] ], [ [ "From the cell above, we see that 4825 messages are valid messages, and only 747 messages are labeled as spam.\n\nLet's now view some messages that are ham and some that are spam", "_____no_output_____" ] ], [ [ "spam = df[df[\"label\"] == \"spam\"]\nspam.head()", "_____no_output_____" ], [ "ham = df[df[\"label\"] == \"ham\"]\nham.head()", "_____no_output_____" ] ], [ [ "After looking at some messages that are spam and some that are ham, we can see the spam messages do look spammy.", "_____no_output_____" ], [ "# Preprocessing", "_____no_output_____" ], [ "The next step is to get the dataset ready to build a model. A machine learning model can only deal with numbers, so we'll have to convert our text into numbers using `TfidfVectorizer`.\n\nTfidfVectorizer converts a collection of raw documents to a matrix of [term frequency-inverse document frequency](http://www.tfidf.com/) features. Also known as TF-IDF.\n\nIn our case, a document is each message. 
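In symbols, the usual definition (a sketch; scikit-learn's implementation uses a smoothed variant of the idf term) is $\\mathrm{tfidf}(t,d) = \\mathrm{tf}(t,d) \\times \\log\\frac{N}{\\mathrm{df}(t)}$, where $\\mathrm{tf}(t,d)$ counts term $t$ in document $d$, $N$ is the total number of documents, and $\\mathrm{df}(t)$ is the number of documents containing $t$. 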
For each message, we'll compute the number of times a term appears in the document divided by the total number of terms in the document, multiplied by the logarithm of the total number of documents divided by the number of documents that contain the specific term.\n\n![](https://miro.medium.com/max/1066/1*eIDZG3Ot5DP8SKXAvBVALQ.png)\n[source](https://towardsdatascience.com/spam-or-ham-introduction-to-natural-language-processing-part-2-a0093185aebd)\n\nThe output will be a matrix in which the rows will be all the terms, and the columns will be all the documents\n![](https://miro.medium.com/max/1400/1*n4s0LZS1Qi46pF3aaYzE0A.png)\n\n[This notebook by Mike Bernico](https://github.com/mbernico/CS570/blob/master/module_1/TFIDF.ipynb) goes into more detail on TF-IDF and how to calculate it without using sklearn. ", "_____no_output_____" ], [ "First, we'll split the dataset into a train and test set. For the training set, we'll take 80% of the data from the dataset, and use that for training the model. The rest of the dataset (20%) will be used for testing the model.\n", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(df['message'], df['label'], test_size = 0.2, random_state = 1)", "_____no_output_____" ] ], [ [ "Once we split our data, we can use the TfidfVectorizer. This will return a sparse matrix (a matrix with mostly 0's)", "_____no_output_____" ] ], [ [ "from sklearn.feature_extraction.text import TfidfVectorizer\nvectorizer = TfidfVectorizer()\nX_train = vectorizer.fit_transform(X_train)\nX_test = vectorizer.transform(X_test)", "_____no_output_____" ] ], [ [ "After we fit the TfidfVectorizer to the sentences, let's plot the matrix as a pandas dataframe to understand what TfidfVectorizer is doing", "_____no_output_____" ] ], [ [ "feature_names = vectorizer.get_feature_names()\ntfid_df = pd.DataFrame(X_train.T.todense(), index=feature_names)\nprint(tfid_df[1200:1205])", " 0 1 2 3 4 ... 4452 4453 4454 4455 4456\nbackdoor 0.0 0.0 0.0 0.000000 0.0 ... 0.0 0.0 0.0 0.0 0.0\nbackwards 0.0 0.0 0.0 0.000000 0.0 ... 0.0 0.0 0.0 0.0 0.0\nbad 0.0 0.0 0.0 0.193352 0.0 ... 0.0 0.0 0.0 0.0 0.0\nbadass 0.0 0.0 0.0 0.000000 0.0 ... 0.0 0.0 0.0 0.0 0.0\nbadly 0.0 0.0 0.0 0.000000 0.0 ... 0.0 0.0 0.0 0.0 0.0\n\n[5 rows x 4457 columns]\n" ] ], [ [ "From the table above, the words in our dataset are the rows and the sentence indices are the columns. We've only plotted a few rows in the middle of the dataframe for a better understanding of the data. \n", "_____no_output_____" ], [ "Next, we'll train a model using Gaussian Naive Bayes in scikit-learn. It's a good starting algorithm for text classification. 
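As one concrete number for the priors used in the formula below: from the label counts above, $P(s) = 747/5572 \\approx 0.134$ and $P(h) = 4825/5572 \\approx 0.866$. 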
We'll then print out the accuracy of the model using the test set, along with our confusion_matrix", "_____no_output_____" ], [ "## Model Training", "_____no_output_____" ], [ "To train our model, we'll use a Naive Bayes algorithm\n\nThe formula for Naive Bayes is:\n\\\\[ P(S|W) = \\frac{P(W|S) \\times P(S)}{P(W|S) \\times P(S) + P(W|H) \\times P(H)} \\\\]\n\n**P(s|w)** - The probability(**P**) that a message is spam(**s**) given(**|**) a word(**w**)\n\n**=**\n\n**P(w|s)** - probability(**P**) that the word(**w**) appears in spam(**s**) messages\n\n*\n\n**P(s)** - Overall probability(**P**) that ANY message is spam(**s**)\n\n**/**\n\n**P(w|s)** - probability(**P**) that the word(**w**) appears in spam(**s**) messages\n\n*\n\n**P(s)** - Overall probability(**P**) that ANY message is spam(**s**)\n\n**+**\n\n**P(w|h)** - probability(**P**) that the word(**w**) appears in non-spam(**h**) messages\n\n*\n\n**P(h)** - Overall probability(**P**) that any message is not spam(**h**)\n\n\n\n", "_____no_output_____" ] ], [ [ "from sklearn.naive_bayes import GaussianNB\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\n\nclf = GaussianNB()\nclf.fit(X_train.toarray(),y_train)", "_____no_output_____" ], [ "y_true, y_pred = y_test, clf.predict(X_test.toarray())\naccuracy_score(y_true, y_pred)", "_____no_output_____" ], [ "print(classification_report(y_true, y_pred))", " precision recall f1-score support\n\n ham 0.99 0.89 0.94 968\n spam 0.57 0.93 0.71 147\n\n accuracy 0.90 1115\n macro avg 0.78 0.91 0.82 1115\nweighted avg 0.93 0.90 0.91 1115\n\n" ], [ "cmtx = pd.DataFrame(\n confusion_matrix(y_true, y_pred, labels=['ham', 'spam']), \n index=['ham', 'spam'], \n columns=['ham', 'spam']\n)\nprint(cmtx)", " ham spam\nham 866 102\nspam 11 136\n" ] ], [ [ "## Grid Search", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import GridSearchCV\nparameters = {\"var_smoothing\":[1e-9, 1e-5, 1e-1]}\ngs_clf = GridSearchCV(\n GaussianNB(), parameters)\ngs_clf.fit(X_train.toarray(),y_train)", "_____no_output_____" ], [ "gs_clf.best_params_", "_____no_output_____" ], [ "y_true, y_pred = y_test, gs_clf.predict(X_test.toarray())\naccuracy_score(y_true, y_pred)", "_____no_output_____" ], [ "cmtx = pd.DataFrame(\n confusion_matrix(y_true, y_pred, labels=['ham', 'spam']), \n index=['ham', 'spam'], \n columns=['ham', 'spam']\n)\nprint(cmtx)", " ham spam\nham 932 36\nspam 3 144\n" ], [ "print(classification_report(y_true, y_pred))", " precision recall f1-score support\n\n ham 1.00 0.96 0.98 968\n spam 0.80 0.98 0.88 147\n\n accuracy 0.97 1115\n macro avg 0.90 0.97 0.93 1115\nweighted avg 0.97 0.97 0.97 1115\n\n" ] ], [ [ "From our trained model, we get about 97% accuracy, which is pretty good. \n\nWe also print out the confusion_matrix. This shows how many messages were classified correctly. 
In the first row and first column of the first confusion matrix, for example, we see that 866 messages that were actually ham were classified as ham, and 136 messages that were predicted as spam were in fact spam.\n\nNext, let's test our model with some example messages", "_____no_output_____" ], [ "## Inference", "_____no_output_____" ] ], [ [ "message = vectorizer.transform([\"i'm on my way home\"])\nmessage = message.toarray()\ngs_clf.predict(message)", "_____no_output_____" ], [ "message = vectorizer.transform([\"this offer is to good to be true\"])\nmessage = message.toarray()\ngs_clf.predict(message)", "_____no_output_____" ] ], [ [ "The final step is to save the model and the tf-idf vectorizer. We will use these when classifying incoming messages in our lambda function ", "_____no_output_____" ] ], [ [ "import joblib\njoblib.dump(gs_clf, \"model.pkl\")\njoblib.dump(vectorizer, \"vectorizer.pkl\")", "_____no_output_____" ] ], [ [ "# Lambda", "_____no_output_____" ], [ "Once our model is trained, we'll put it in a production environment.\n\nFor this example, we'll create a lambda function to host our model.\n\nThe lambda function will be attached to an API gateway so that we'll have an endpoint to make our predictions\n\nDeploying a scikit-learn model to lambda isn't as easy as you might think. You can't just import your libraries, especially scikit-learn, and expect them to work.\n\nHere's what we'll need to do in order to deploy our model\n* Spin up an EC2 instance\n* SSH into the instance and install our dependencies\n* copy the lambda function code from this [repo](https://github.com/tbass134/SMS-Spam-Classifier-lambda)\n* Run a bash script that does the following:\n* zip the code, including the packages\n* upload the zip to S3\n* point the lambda function to the S3 file\n\n## Create an EC2 instance\nIf you have an aws account:\n* Go to EC2 on the console and click `Launch Instance`.\n* Select the first available AMI (Amazon Linux 2 AMI). \n* Select the t2.micro instance, then click `Review and Launch`\n* Click the Next button\n* Under IAM Role, Click Create New Role\n* Create a new role with the following policies:\n AmazonS3FullAccess\n AWSLambdaFullAccess\n Name your role and click create role\n* Under permissions, create a new role that has access to the following:\n* lambda full access\n* S3 full access\n\nThese will be needed when uploading our code to your S3 bucket and pointing the lambda function to the zip file that we will be creating later.\n\n* Create a new private key pair and click `Launch Instance`\n* Note, in order to use the key, you have to run `chmod 400` on the key when downloaded to your local machine.\n\n\nAfter the instance spins up, you'll need to connect to it via ssh\n* Find the newly created instance on EC2 and click `Connect`\n* On your local machine, navigate to the terminal and run the command from the Example. It will look something like:\n```bash\nssh -i \"{PATH TO KEY}\" {user_name}@ec2-{instance_ip}.compute-1.amazonaws.com\n```\n\n## Install packages\nBefore installing packages, you will need to install python and pip. 
You can follow the steps [here](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-linux.html)\nThese will most likely be:\n```bash \nsudo yum install python37\ncurl -O https://bootstrap.pypa.io/get-pip.py\npython3 get-pip.py --user\n```\nVerify pip is installed using\n```bash\npip --version\n```\nYou will also need to install git\n```bash\nsudo yum install git -y\n```\nWhen connected to the instance, clone the repo\n```bash\ngit clone https://github.com/tbass134/SMS-Spam-Classifier-lambda\n```\nThis repo contains everything we need to make predictions. This includes the pickle files for the model and vectorizer, as well as the lambda function that makes predictions and returns the response.\ncd into the SMS-Spam-Classifier-lambda/lambda folder\n* Next, you will need to install the `sklearn` library.\n* On your instance, type:\n`pip install -t . sklearn`\nThis will install the library into the current folder\n\n\nNext, if you want to use your trained model, it will need to be uploaded to your ec2 instance. \nIf you're using Google Colab, navigate to the files tab, right click on `model.pkl` and `vectorizer.pkl` and click download.\nNote, the sample repo already contains a trained model so this is optional.\n\nTo upload your trained model, you can use a few ways:\n* Fork the repo, add your models, and check the fork out on the ec2 instance\n* You can use `scp` to copy the files from your local machine to the instance. To upload the vectorizer file we saved\n```bash\nscp -i {PATH_TO_KEY} vectorizer.pkl ec2-user@{INSTANCE_NAME}:\n```\nand we'll do the same for the model\n```bash\nscp -i {PATH_TO_KEY} model.pkl ec2-user@{INSTANCE_NAME}:\n```\n\n* The other method is to upload the files to s3 and have your lambda function load the files from there using Boto\n```Python\ndef load_s3_file(key):\n    obj = s3.Object(MODEL_BUCKET, key)\n    body = obj.get()['Body'].read()\n    return joblib.load(BytesIO(body))\n\nmodel = load_s3_file({PATH_TO_S3_MODEL})\nvectorizer = load_s3_file({PATH_TO_S3_VECTORIZER})\n```\n\n\n## Create lambda function\n* On the AWS console, navigate to https://console.aws.amazon.com/lambda\n* Click on the Create function button\n* Make sure `Author from scratch` is selected\n* Name your function\n* Set the runtime to Python 3.7\n* Under Execution Role, create a new role with basic permissions\n* Click `Create Function`\n\n## Create S3 bucket\nIn order to push our code to a lambda function, we first need to zip up the code and libraries and copy them to an S3 bucket. \nFrom here, our lambda function will load the zip file from this bucket.\n* On the AWS console under `Services`, Search for `S3`\n* Click `Create Bucket`\n* Name your bucket, and click Create Bucket at the bottom of the page.\n\n\n## Upload to lambda\nNext, we'll run the `publish.sh` script inside the root of the repo, which does the following:\n* zip up the packages, including our Python code, model and transformer.\n* upload the zip to an S3 bucket\n* point our lambda function to this bucket\n\nWhen calling this script, we need to pass in 3 arguments:\n* The name of the zip file. We can call it `zip.zip` for now\n* The name of the S3 bucket that we will upload the zip to\n* The name of the lambda function \n```bash\nbash publish.sh {ZIP_FILE_NAME} {S3_BUCKET} {LAMBDA_FUNCTION_NAME}\n```\n\nIf everything is successful, your lambda function will be deployed. 
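\n\nFor reference, a minimal handler along these lines might look like the sketch below. This is an illustration only, not the exact code in the repo; it assumes the pickle files are bundled in the deployment zip and that API Gateway passes the `message` query parameter:\n```python\nimport json\nimport joblib\n\n# load once per container (outside the handler) so warm invocations reuse the objects\nmodel = joblib.load(\"model.pkl\")\nvectorizer = joblib.load(\"vectorizer.pkl\")\n\ndef lambda_handler(event, context):\n    # with an API Gateway proxy integration, query parameters arrive here\n    message = event[\"queryStringParameters\"][\"message\"]\n    features = vectorizer.transform([message]).toarray()\n    prediction = model.predict(features)[0]\n    return {\"statusCode\": 200, \"body\": json.dumps({\"prediction\": prediction})}\n```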
\nIf you see errors, make sure your EC2 instance has an IAM role with S3 and Lambda permissions.\nSee this [guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) for more info.\n", "_____no_output_____" ], [ "## Add HTTP endpoint\nThe final piece will be to add an API gateway.\nOn the configuration tab of the lambda function\n* click `Add Trigger`\n* Click on the select a trigger box and select `API Gateway`\n* Click on `Create an API`\n* Set API Type to `REST API`\n* Set Security to `OPEN` (make sure to secure this when deploying to production)\n* At the bottom, click `Add`\n\nFor details, see this [documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-lambda.html#api-as-lambda-proxy-create-api-resources)\n\nWe can now test the endpoint by using curl and making a call to our endpoint.\nUnder the `API Gateway` section in the lambda console, click on the API to reveal the endpoint URL.\n\nIn the lambda function, we are looking for the `message` GET parameter. When we make our request, we'll pass a query parameter called `message`. This will contain the string we want to make a prediction on.", "_____no_output_____" ] ], [ [ "ham_message = \"im on my way home\".replace(\" \", \"%20\")\nham_message", "_____no_output_____" ], [ "%%bash -s \"$ham_message\"\ncurl --location --request GET \"https://e18fmcospk.execute-api.us-east-1.amazonaws.com/default/spam-detection?message=$1\"", "{\"prediction\": \"ham\"}" ], [ "spam_message = \"this offer is to good to be true\".replace(\" \", \"%20\")\nspam_message", "_____no_output_____" ], [ "%%bash -s \"$spam_message\"\ncurl --location --request GET \"https://e18fmcospk.execute-api.us-east-1.amazonaws.com/default/spam-detection?message=$1\"", "{\"prediction\": \"spam\"}" ] ], [ [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "# Google Cloud Functions\n\nFor non-amazon users, we can use Google Cloud Functions to deploy our model for use in our Vonage SMS API app\n![](https://pbs.twimg.com/media/EdYKMFDUwAAwj0T?format=png&name=large)\nCode is [here](https://gist.github.com/tbass134/7985c0adf44c938d6e683c18dabac8f9)", "_____no_output_____" ], [ "# Create Vonage SMS Application", "_____no_output_____" ], [ "The final step is to build a Vonage SMS Application.\nHave a look at this blog post on how to build it yourself\nOur application will receive an SMS\nhttps://developer.nexmo.com/messaging/sms/code-snippets/receiving-an-sms\n\nand will send an SMS back to the user with its prediction\nhttps://developer.nexmo.com/messaging/sms/code-snippets/send-an-sms\n\n![](https://i.ibb.co/8mxfBKW/IMG-9-BA66209-F969-1.png)", "_____no_output_____" ], [ "To work through this example, you will need the following\n* Login / Signup to [Vonage SMS API](https://dashboard.nexmo.com/sign-up)\n* Rent a phone number\n* Assign a publicly accessible URL via [ngrok](https://www.nexmo.com/blog/2017/07/04/local-development-nexmo-ngrok-tunnel-dr) to that phone number\n\nWe'll also build a simple Flask app that will make a request to our API Gateway\n```bash\ngit clone https://github.com/tbass134/SMS-Spam-Classifier-lambda.git\ncd app\n```\n\nNext we'll create a virtual environment and install the requirements using pip\n```bash\nvirtualenv venv --python=python3\nsource venv/bin/activate\npip install -r requirments.txt\n```\n\nNext, create a `.env` file with the 
following:\n```bash\nNEXMO_API_KEY={YOUR_NEXMO_API_KEY}\nNEXMO_API_SECRET={YOUR_NEXMO_API_SECRET}\nNEXMO_NUMBER={YOUR_NEXMO_NUMBER}\nAPI_GATEWAY_URL={FULL_API_GATEWAY}\n```\n\nFinally, you can run the application:\n```bash\npython app.py\n```\nThis will spin up a web server listening on port 3000\n", "_____no_output_____" ], [ "# Fin", "_____no_output_____" ] ] ], [ [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
4a4a8bdfb8be0a68aa624bd179ecab65eb567c15
324,008
ipynb
Jupyter Notebook
graphplot.ipynb
kototoibashi/20210213-powergrid-freq
66e74b00fae45bc6451693d6557004f2c5f2956f
[ "CC0-1.0" ]
3
2021-02-27T17:00:28.000Z
2022-02-22T10:36:17.000Z
graphplot.ipynb
kototoibashi/20210213-powergrid-freq
66e74b00fae45bc6451693d6557004f2c5f2956f
[ "CC0-1.0" ]
null
null
null
graphplot.ipynb
kototoibashi/20210213-powergrid-freq
66e74b00fae45bc6451693d6557004f2c5f2956f
[ "CC0-1.0" ]
null
null
null
2,745.830508
321,080
0.961094
[ [ [ "import glob\nimport io\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\n\ndef readcsvs():\n    files = sorted(glob.glob('*.csv'))\n    for filename in files:\n        with open(filename,'r',encoding='utf8') as f:\n            f.readline()  # skip the header line\n            for line in f.readlines():\n                data = line.strip().strip('\\ufeff').split('\\t')\n                yield data", "_____no_output_____" ], [ "# Time range to plot\nstart_time = datetime(2021,2,13,23,0)\nend_time = datetime(2021,2,13,23,59)\n\nplotdata = {\n    'time': [],\n    'volt': [],\n    'freq': []\n}\n\nfor line in readcsvs():\n    time = datetime.strptime(line[0],'%Y-%m-%d %H:%M:%S.%f')\n    volin = float(line[1])\n    volout = float(line[2])\n    freqin = float(line[3])\n    freqout = float(line[4]) \n    \n    if time < start_time or time > end_time:\n        continue\n    \n    plotdata['time'].append(time)\n    plotdata['freq'].append((freqin+freqout)/2)  # average of the two frequency readings\n    plotdata['volt'].append(volin)\n\n\n\nfig, ax1 = plt.subplots(figsize=(20,10),dpi=200)\nax2 = ax1.twinx()\nax1.plot(plotdata['time'],plotdata['freq'], label=\"Freq\",color='#888800')\nax2.plot(plotdata['time'],plotdata['volt'], label=\"Voltage\",color='#000088')\nhandler1, label1 = ax1.get_legend_handles_labels()\nhandler2, label2 = ax2.get_legend_handles_labels()\nax1.legend(handler1 + handler2, label1 + label2, loc=2, borderaxespad=0.)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
4a4a96dfaf379bb53b8add5f4ca6029c283d2e05
269,657
ipynb
Jupyter Notebook
06-TL-GramNegative.ipynb
reymond-group/MLpeptide
675b4777bcaaf09eca21173c15c12450ff515470
[ "MIT" ]
14
2021-03-19T11:52:56.000Z
2022-02-03T13:11:02.000Z
06-TL-GramNegative.ipynb
reymond-group/MLpeptide
675b4777bcaaf09eca21173c15c12450ff515470
[ "MIT" ]
null
null
null
06-TL-GramNegative.ipynb
reymond-group/MLpeptide
675b4777bcaaf09eca21173c15c12450ff515470
[ "MIT" ]
6
2021-03-21T01:28:24.000Z
2021-12-15T09:41:42.000Z
38.270934
982
0.565541
[ [ [ "import random\nimport torch.nn as nn\nimport torch\nimport pickle\nimport pandas as pd\nfrom pandas import Series, DataFrame\nfrom pandarallel import pandarallel\npandarallel.initialize(progress_bar=False)\nfrom sklearn.metrics import roc_auc_score, roc_curve, accuracy_score, matthews_corrcoef, f1_score, precision_score, recall_score\nimport numpy as np\nimport torch.optim as optim\nfolder = \"/data/AIpep-clean/\"\nimport matplotlib.pyplot as plt\nfrom vocabulary import Vocabulary\nfrom datasethem import Dataset\nfrom datasethem import collate_fn_no_activity as collate_fn\nfrom models import Generator\nfrom tqdm.autonotebook import trange, tqdm\nimport os\nfrom collections import defaultdict", "INFO: Pandarallel will run on 8 workers.\nINFO: Pandarallel will use Memory file system to transfer data between the main process and workers.\n" ] ], [ [ "# Load data", "_____no_output_____" ] ], [ [ "df = pd.read_pickle(folder + \"pickles/DAASP_RNN_dataset_with_hemolysis.plk\")\n\ndf_training = df.query(\"Set == 'training' and (baumannii == True or aeruginosa == True) and isNotHemolytic==1\")\ndf_test = df.query(\"Set == 'test' and (baumannii == True or aeruginosa == True) and isNotHemolytic==1\")\n\nif torch.cuda.is_available():\n device = \"cuda\" \nelse:\n device = \"cpu\" ", "_____no_output_____" ], [ "print(\"Against A. baumannii or P. aeruginosa:\\nactive training \"+ str(len(df_training[df_training[\"activity\"]==1])) \\\n + \"\\nactive test \" + str(len(df_test[df_test[\"activity\"]==1])) \\\n + \"\\ninactive training \"+ str(len(df_training[df_training[\"activity\"]==0])) \\\n + \"\\ninactive test \" + str(len(df_test[df_test[\"activity\"]==0])))", "Against A. baumannii or P. aeruginosa:\nactive training 242\nactive test 77\ninactive training 97\ninactive test 25\n" ] ], [ [ "# Define helper functions", "_____no_output_____" ] ], [ [ "def randomChoice(l):\n return l[random.randint(0, len(l) - 1)]\n\ndef categoryFromOutput(output):\n top_n, top_i = output.topk(1)\n category_i = top_i[0].item()\n return category_i\n\ndef nan_equal(a,b):\n try:\n np.testing.assert_equal(a,b)\n except AssertionError:\n return False\n return True\n\ndef models_are_equal(model1, model2):\n model1.vocabulary == model2.vocabulary\n model1.hidden_size == model2.hidden_size\n for a,b in zip(model1.model.parameters(), model2.model.parameters()):\n if nan_equal(a.detach().numpy(), b.detach().numpy()) == True:\n print(\"true\")", "_____no_output_____" ] ], [ [ "# Define hyper parameters", "_____no_output_____" ] ], [ [ "n_embedding = 100\nn_hidden = 400\nn_layers = 2\nn_epoch = 200\nlearning_rate = 0.00001\nmomentum = 0.9\nbatch_size = 10\nepoch = 22", "_____no_output_____" ] ], [ [ "# Loading and Training", "_____no_output_____" ] ], [ [ "if not os.path.exists(folder+\"pickles/generator_TL_gramneg_results_hem.pkl\"):\n \n model = Generator.load_from_file(folder+\"models/RNN-generator/ep{}.pkl\".format(epoch))\n model.to(device)\n vocabulary = model.vocabulary\n\n df_training_active = df_training.query(\"activity == 1\")\n df_test_active = df_test.query(\"activity == 1\")\n df_training_inactive = df_training.query(\"activity == 0\")\n df_test_inactive = df_test.query(\"activity == 0\")\n\n training_dataset_active = Dataset(df_training_active, vocabulary, with_activity=False)\n test_dataset_active = Dataset(df_test_active, vocabulary, with_activity=False)\n training_dataset_inactive = Dataset(df_training_inactive, vocabulary, with_activity=False)\n test_dataset_inactive = Dataset(df_test_inactive, vocabulary, 
with_activity=False)\n\n \n optimizer = optim.SGD(model.model.parameters(), lr = learning_rate, momentum=momentum)\n\n # the only one used for training\n training_dataloader_active = torch.utils.data.DataLoader(training_dataset_active, batch_size=batch_size, shuffle=True, collate_fn = collate_fn, drop_last=True, pin_memory=True, num_workers=4)\n\n # used for evaluation\n test_dataloader_active = torch.utils.data.DataLoader(test_dataset_active, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)\n training_dataloader_inactive = torch.utils.data.DataLoader(training_dataset_inactive, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)\n test_dataloader_inactive = torch.utils.data.DataLoader(test_dataset_inactive, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)\n training_dataloader_active_eval = torch.utils.data.DataLoader(training_dataset_active, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)\n\n training_dictionary = {}\n\n for e in trange(1, n_epoch + 1):\n print(\"Epoch {}\".format(e))\n for i_batch, sample_batched in tqdm(enumerate(training_dataloader_active), total=len(training_dataloader_active) ):\n\n seq_batched = sample_batched[0].to(model.device, non_blocking=True) \n seq_lengths = sample_batched[1].to(model.device, non_blocking=True)\n\n nll = model.likelihood(seq_batched, seq_lengths)\n\n loss = nll.mean()\n\n optimizer.zero_grad()\n loss.backward() \n torch.nn.utils.clip_grad_value_(model.model.parameters(), 2)\n optimizer.step()\n\n model.save(folder+\"models/RNN-generator-TL-hem/gramneg_ep{}.pkl\".format(e))\n\n\n print(\"\\tExample Sequences\")\n sampled_seq = model.sample(5)\n for s in sampled_seq:\n print(\"\\t\\t{}\".format(model.vocabulary.tensor_to_seq(s, debug=True)))\n\n nll_training = []\n with torch.no_grad():\n for i_batch, sample_batched in enumerate(training_dataloader_active_eval): \n seq_batched = sample_batched[0].to(model.device, non_blocking=True) \n seq_lengths = sample_batched[1].to(model.device, non_blocking=True) \n\n nll_training += model.likelihood(seq_batched, seq_lengths)\n\n nll_training_active_mean = torch.stack(nll_training).mean().item()\n print(\"\\tNLL Train Active: {}\".format(nll_training_active_mean))\n del nll_training\n\n nll_test = []\n with torch.no_grad():\n for i_batch, sample_batched in enumerate(test_dataloader_active): \n seq_batched = sample_batched[0].to(model.device, non_blocking=True) \n seq_lengths = sample_batched[1].to(model.device, non_blocking=True) \n\n nll_test += model.likelihood(seq_batched, seq_lengths)\n\n nll_test_active_mean = torch.stack(nll_test).mean().item()\n print(\"\\tNLL Test Active: {}\".format(nll_test_active_mean))\n del nll_test\n\n nll_training = []\n with torch.no_grad():\n for i_batch, sample_batched in enumerate(training_dataloader_inactive): \n seq_batched = sample_batched[0].to(model.device, non_blocking=True) \n seq_lengths = sample_batched[1].to(model.device, non_blocking=True) \n\n nll_training += model.likelihood(seq_batched, seq_lengths)\n\n nll_training_inactive_mean = torch.stack(nll_training).mean().item()\n print(\"\\tNLL Train Inactive: {}\".format(nll_training_inactive_mean))\n del nll_training\n\n nll_test = []\n with torch.no_grad():\n for i_batch, sample_batched in enumerate(test_dataloader_inactive): \n seq_batched = 
sample_batched[0].to(model.device, non_blocking=True) \n seq_lengths = sample_batched[1].to(model.device, non_blocking=True) \n\n nll_test += model.likelihood(seq_batched, seq_lengths)\n\n nll_test_inactive_mean = torch.stack(nll_test).mean().item()\n print(\"\\tNLL Test Inactive: {}\".format(nll_test_inactive_mean))\n del nll_test\n print()\n\n training_dictionary[e]=[nll_training_active_mean, nll_test_active_mean, nll_training_inactive_mean, nll_test_inactive_mean]\n \n with open(folder+\"pickles/generator_TL_gramneg_results_hem.pkl\",'wb') as fd:\n pickle.dump(training_dictionary, fd)\n \nelse:\n with open(folder+\"pickles/generator_TL_gramneg_results_hem.pkl\",'rb') as fd:\n training_dictionary = pickle.load(fd)\n\nmin_nll_test_active = float(\"inf\")\nfor epoch, training_values in training_dictionary.items():\n nll_test_active = training_values[1]\n\n if nll_test_active < min_nll_test_active:\n best_epoch = epoch\n min_nll_test_active = nll_test_active", "_____no_output_____" ] ], [ [ "# Sampling evaluation", "_____no_output_____" ] ], [ [ "print(best_epoch)\nmodel = Generator.load_from_file(folder+\"models/RNN-generator-TL-hem/gramneg_ep{}.pkl\".format(best_epoch))", "200\n" ] ], [ [ "199", "_____no_output_____" ] ], [ [ "training_seq = df_training.Sequence.values.tolist()\n\ndef _sample(model, n):\n sampled_seq = model.sample(n)\n sequences = []\n for s in sampled_seq:\n sequences.append(model.vocabulary.tensor_to_seq(s))\n return sequences\n\ndef novelty(seqs, list_):\n novel_seq = []\n for s in seqs:\n if s not in list_:\n novel_seq.append(s)\n return novel_seq, (len(novel_seq)/len(seqs))*100\n\ndef is_in_training(seq, list_ = training_seq):\n if seq not in list_:\n return False\n else:\n return True\n\ndef uniqueness(seqs):\n unique_seqs = defaultdict(int)\n for s in seqs:\n unique_seqs[s] += 1\n return unique_seqs, (len(unique_seqs)/len(seqs))*100", "_____no_output_____" ], [ "# sample\nseqs = _sample(model, 50000)\nunique_seqs, perc_uniqueness = uniqueness(seqs)\nnotintraining_seqs, perc_novelty = novelty(unique_seqs, training_seq)\n\n\n# create dataframe\ndf_generated = pd.DataFrame(list(unique_seqs.keys()), columns =['Sequence']) \ndf_generated[\"Repetition\"] = df_generated[\"Sequence\"].map(lambda x: unique_seqs[x])\ndf_generated[\"inTraining\"] = df_generated[\"Sequence\"].map(is_in_training)\ndf_generated[\"Set\"] = \"generated-TL-GN-hem\"\n\n# save\ndf_generated.to_pickle(folder+\"pickles/Generated-TL-gramneg-hem.pkl\")", "_____no_output_____" ], [ "print(perc_uniqueness, perc_novelty)", "82.89999999999999 99.61158021712907\n" ] ], [ [ "82.89999999999999 99.61158021712907", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
4a4aa1e3065cd96aa857b9723f43fd0b96577738
8,047
ipynb
Jupyter Notebook
Examples/SEIRHVD_CFR.ipynb
DLab/covid19geomodeller
a3a9eedf064078b21be0928ee41b41c902938eff
[ "MIT" ]
null
null
null
Examples/SEIRHVD_CFR.ipynb
DLab/covid19geomodeller
a3a9eedf064078b21be0928ee41b41c902938eff
[ "MIT" ]
null
null
null
Examples/SEIRHVD_CFR.ipynb
DLab/covid19geomodeller
a3a9eedf064078b21be0928ee41b41c902938eff
[ "MIT" ]
null
null
null
25.225705
186
0.525289
[ [ [ "# SEIRHVD model example\n## Work in progress (equations not ready)\n\\begin{align}\n\\dot{S} & = S_f - \\alpha\\beta\\frac{SI}{N+k_I I+k_R R} + r_{R\\_S} R\\\\\n\\dot{E} & = E_f + \\alpha\\beta\\frac{SI}{N+k_I I+k_R R} - E\\frac{1}{t_{E\\_I}} \\\\\n\\dot{I} & = I_f + E\\frac{1}{t_{E\\_I}} - I\\frac{1}{t_{I\\_R}} \\\\\n\\dot{R} & = R_f + I\\frac{1}{t_{I\\_R}} - r_{R\\_S} R\\\\\n\\end{align}\nWhere: \n* $S:$ Susceptible\n* $E:$ Exposed\n* $I:$ Infectious\n* $R:$ Removed\n* $\\alpha:$ Mobility\n* $\\beta:$ Infection rate\n* $N:$ Total population\n* $t_{E\\_I}:$ Transition time between exposed and infectious\n* $t_{I\\_R}:$ Transition time between infectious and recovered\n* $r_{R\\_S}:$ Immunity loss rate ($\\frac{1}{t_{R\\_S}}$) \n* $S_f,E_f,I_f,R_f:$ External flux\n* $k_I:$ Infected saturation \n* $k_R:$ Immunity shield \n", "_____no_output_____" ] ], [ [ "# Util libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# Adding lib paths\n\n# cv19 libraries\nfrom cv19gm.models.seirhvd import SEIRHVD \nfrom cv19gm.utils import cv19functions", "_____no_output_____" ], [ "# For pop-up plots execute this code (optional)\nimport platform\nOS = platform.system()\n\nif OS == 'Linux': \n    %matplotlib tk\n    print('Linux')\nelif OS == 'Windows':\n    %matplotlib qt\n    print('Windows')\nelif OS == 'Darwin':\n    %matplotlib tk\n    print('Mac (does this work?)')", "_____no_output_____" ] ], [ [ "# Variable CFR ", "_____no_output_____" ], [ "## Discrete change\n* pH_R from 0.7 to 0.5\n* pH_D from 0.3 to 0.5", "_____no_output_____" ] ], [ [ "pH_R = cv19functions.events(values=[0.7,0.5],days=[[0,30],[30,500]])\npH_D = cv19functions.events(values=[0.3,0.5],days=[[0,30],[30,500]])", "_____no_output_____" ], [ "%%capture\n# Input configuration file\nconfig = 'cfg/SEIRHVD.toml'\n# Build simulation object\nmodel1 = SEIRHVD(config = config, H_cap=4000, pH_R = pH_R,pH_D = pH_D)\n# Simulate (solve ODE)\nmodel1.solve()", "_____no_output_____" ], [ "t = np.linspace(0,50,1000)\nplt.plot(t,100*pH_R(t),label='pH_R')\nplt.plot(t,100*pH_D(t),label='pH_D')\nplt.plot(t,100*pH_D(t)*model1.pE_Icr(t),label='CFR')\nplt.legend(loc=0)\nplt.title('CFR change (%)')\nplt.show()", "_____no_output_____" ], [ "t = model1.t\nplt.plot(t,100*model1.pH_R(t),label='pH_R')\nplt.plot(t,100*model1.pH_D(t),label='pH_D')\nplt.plot(t,100*model1.CFR,label='CFR')\nplt.xlim(0,50)\nplt.legend(loc=0)\nplt.title('CFR change (%)')\nplt.show()", "_____no_output_____" ] ], [ [ "* Note: the transition appears to take about 2 days, but this is due to the resolution at which the data are returned; the transition used by the integrator is \"instantaneous\".", "_____no_output_____" ] ], [ [ "# Plot matplotlib\nfig, axs = plt.subplots(figsize=(13,9),linewidth=5,edgecolor='black',facecolor=\"white\")\naxs2 = axs.twinx()\n\naxs.plot(model1.t,model1.D_d,color='tab:red',label='Daily deaths')\naxs.set_ylabel('Deaths',color='tab:red')\naxs.tick_params(axis='y', labelcolor='tab:red')\n\nt = model1.t\naxs2.plot(t,100*pH_D(t)*model1.pE_Icr(t),color='tab:blue',label='CFR')\naxs2.set_ylabel('CFR',color='tab:blue')\naxs2.tick_params(axis='y',labelcolor='tab:blue')\n\naxs.set_xlim(0,200)\naxs2.set_xlim(0,200)\nfig.legend(loc=8)\nfig.suptitle('CFR vs Deaths')\nfig.show()", "_____no_output_____" ] ], [ [ "## Continuous change\n* pH_R from 0.7 to 0.5\n* pH_D from 0.3 to 0.5", "_____no_output_____" ] ], [ [ "pH_R = cv19functions.sigmoidal_transition(t_init=20,t_end=40,initvalue = 0.7, endvalue = 0.5)\npH_D = 
cv19functions.sigmoidal_transition(t_init=20,t_end=40,initvalue = 0.3, endvalue = 0.5)", "_____no_output_____" ], [ "%%capture\n# Input configuration file\nconfig = 'cfg/SEIRHVD.toml'\n# Build simulation object\nmodel2 = SEIRHVD(config = config, H_cap=4000, pH_R = pH_R,pH_D = pH_D)\n# Simulate (solve ODE)\nmodel2.solve()", "_____no_output_____" ], [ "t = model2.t\nplt.plot(t,100*model2.pH_R(t),label='pH_R')\nplt.plot(t,100*model2.pH_D(t),label='pH_D')\nplt.plot(t,100*model2.CFR,label='CFR')\nplt.xlim(0,50)\nplt.legend(loc=0)\nplt.title('CFR change (%)')\nplt.show()", "_____no_output_____" ], [ "# Plot matplotlib\nfig, axs = plt.subplots(figsize=(13,9),linewidth=5,edgecolor='black',facecolor=\"white\")\naxs2 = axs.twinx()\n\naxs.plot(model2.t,model2.D_d,color='tab:red',label='Daily deaths')\naxs.set_ylabel('Deaths',color='tab:red')\naxs.tick_params(axis='y', labelcolor='tab:red')\n\nt = model2.t\naxs2.plot(t,100*pH_D(t)*model2.pE_Icr(t),color='tab:blue',label='CFR')\naxs2.set_ylabel('CFR',color='tab:blue')\naxs2.tick_params(axis='y',labelcolor='tab:blue')\n\naxs.set_xlim(0,200)\naxs2.set_xlim(0,200)\nfig.legend(loc=8)\nfig.suptitle('CFR vs Deaths')\nfig.show()", "_____no_output_____" ] ], [ [ "# Access CFR value", "_____no_output_____" ], [ "## As a variable", "_____no_output_____" ] ], [ [ "model2.CFR ", "_____no_output_____" ] ], [ [ "## As part of the pandas array", "_____no_output_____" ] ], [ [ "model2.results['CFR']", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a4aad45eac6a26ec87c9f6d68a26623943a7f21
41,337
ipynb
Jupyter Notebook
intro_to_pytorch/Part 2 - Neural Networks in PyTorch (Exercises).ipynb
Hammania689/pytorch_udacity_challenge
0c4b0b87260ebf9de275a10ab1bbea5d262ecd82
[ "MIT" ]
null
null
null
intro_to_pytorch/Part 2 - Neural Networks in PyTorch (Exercises).ipynb
Hammania689/pytorch_udacity_challenge
0c4b0b87260ebf9de275a10ab1bbea5d262ecd82
[ "MIT" ]
null
null
null
intro_to_pytorch/Part 2 - Neural Networks in PyTorch (Exercises).ipynb
Hammania689/pytorch_udacity_challenge
0c4b0b87260ebf9de275a10ab1bbea5d262ecd82
[ "MIT" ]
null
null
null
41,337
41,337
0.759828
[ [ [ "# Neural networks with PyTorch\n\nDeep learning networks tend to be massive with dozens or hundreds of layers, that's where the term \"deep\" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module `nn` that provides a nice way to efficiently build large neural networks.", "_____no_output_____" ] ], [ [ "# http://pytorch.org/\nfrom os.path import exists\nfrom wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\nplatform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\ncuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\\.\\([0-9]*\\)\\.\\([0-9]*\\)$/cu\\1\\2/'\naccelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'\n\n!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision\nimport torch", "_____no_output_____" ], [ "# Import necessary packages\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport torch\n\nimport helper\n\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "\nNow we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below\n\n<img src='assets/mnist.png'>\n\nOur goal is to build a neural network that can take one of these images and predict the digit in the image.\n\nFirst up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.", "_____no_output_____" ] ], [ [ "### Run this cell\n\nfrom torchvision import datasets, transforms\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,)),\n ])\n\n# Download and load the training data\ntrainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)", "_____no_output_____" ] ], [ [ "We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like\n\n```python\nfor image, label in trainloader:\n ## do things with images and labels\n```\n\nYou'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.", "_____no_output_____" ] ], [ [ "dataiter = iter(trainloader)\nimages, labels = dataiter.next()\nprint(type(images))\nprint(images.shape)\nprint(labels.shape)", "<class 'torch.Tensor'>\ntorch.Size([64, 1, 28, 28])\ntorch.Size([64])\n" ] ], [ [ "This is what one of the images looks like. 
", "_____no_output_____" ] ], [ [ "plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');", "_____no_output_____" ] ], [ [ "First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.\n\nThe networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to a have a shape of `(64, 784)`, 784 is 28 times 28. This is typically called *flattening*, we flattened the 2D images into 1D vectors.\n\nPreviously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.\n\n> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.", "_____no_output_____" ] ], [ [ "def activation(x):\n \"\"\" Sigmoid activation function \n \n Arguments\n ---------\n x: torch.Tensor\n \"\"\"\n return 1/ (1+torch.exp(-x))", "_____no_output_____" ], [ "## Your solution\n\n# Hyperparemeter settings\nbatch_size = 64\nimage_dim = 784\nn_hidden = 256\nn_output = 10\n\ntorch.manual_seed(1)\n# inputs = images.view(images.shape[0], -1)\n\n# Intializing Weights and Bias\nW1 = torch.randn((image_dim, n_hidden))\nW2 = torch.randn((n_hidden, n_output))\n\n\nB1 = torch.randn((1, n_hidden))\nB2 = torch.randn((1, n_output))\n\n# Forward Prop\nh1 = activation(torch.mm(images.view(batch_size, image_dim), W1) + B1) # (64, 256)\n\n# output of your network, should have shape (64,10)\nout = torch.mm(h1, W2) + B3\n\n\nh1.shape, out.shape", "_____no_output_____" ] ], [ [ "Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:\n<img src='assets/image_distribution.png' width=500px>\n\nHere we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.\n\nTo calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). 
Mathematically this looks like\n\n$$\n\\Large \\sigma(x_i) = \\cfrac{e^{x_i}}{\\sum_k^K{e^{x_k}}}\n$$\n\nWhat this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilites sum up to one.\n\n> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.", "_____no_output_____" ] ], [ [ "def softmax(x):\n \"\"\" Softmax activation function \n \n Arguments\n ---------\n x: torch.Tensor\n \"\"\"\n return torch.exp(x) / torch.sum(torch.exp(x), dim=1).view(-1, 1)\n \n\n# Here, out should be the output of the network in the previous excercise with shape (64,10)\nprobabilities = softmax(out)\n\n# Does it have the right shape? Should be (64, 10)\nprint(probabilities.shape)\n# Does it sum to 1?\nprint(probabilities.sum(dim=1))\n# torch.sum(torch.exp(output), dim=1).view(-1,1).shape", "torch.Size([64, 10])\ntensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,\n 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,\n 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,\n 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,\n 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,\n 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,\n 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,\n 1.0000])\n" ] ], [ [ "## Building networks with PyTorch\n\nPyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.", "_____no_output_____" ] ], [ [ "from torch import nn", "_____no_output_____" ], [ "class Network(nn.Module):\n def __init__(self):\n super().__init__()\n \n # Inputs to hidden layer linear transformation\n self.hidden = nn.Linear(784, 256)\n # Output layer, 10 units - one for each digit\n self.output = nn.Linear(256, 10)\n \n # Define sigmoid activation and softmax output \n self.sigmoid = nn.Sigmoid()\n self.softmax = nn.Softmax(dim=1)\n \n def forward(self, x):\n # Pass the input tensor through each of our operations\n x = self.hidden(x)\n x = self.sigmoid(x)\n x = self.output(x)\n x = self.softmax(x)\n \n return x", "_____no_output_____" ] ], [ [ "Let's go through this bit by bit.\n\n```python\nclass Network(nn.Module):\n```\n\nHere we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. 
It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.\n\n```python\nself.hidden = nn.Linear(784, 256)\n```\n\nThis line creates a module for a linear transformation, $x\\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.\n\n```python\nself.output = nn.Linear(256, 10)\n```\n\nSimilarly, this creates another linear transformation with 256 inputs and 10 outputs.\n\n```python\nself.sigmoid = nn.Sigmoid()\nself.softmax = nn.Softmax(dim=1)\n```\n\nHere I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.\n\n```python\ndef forward(self, x):\n```\n\nPyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.\n\n```python\nx = self.hidden(x)\nx = self.sigmoid(x)\nx = self.output(x)\nx = self.softmax(x)\n```\n\nHere the input tensor `x` is passed through each operation a reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.\n\nNow we can create a `Network` object.", "_____no_output_____" ] ], [ [ "# Create the network and look at it's text representation\nmodel = Network()\nmodel", "_____no_output_____" ] ], [ [ "You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.", "_____no_output_____" ] ], [ [ "import torch.nn.functional as F\n\nclass Network(nn.Module):\n def __init__(self):\n super().__init__()\n # Inputs to hidden layer linear transformation\n self.hidden = nn.Linear(784, 256)\n # Output layer, 10 units - one for each digit\n self.output = nn.Linear(256, 10)\n \n def forward(self, x):\n # Hidden layer with sigmoid activation\n x = F.sigmoid(self.hidden(x))\n # Output layer with softmax activation\n x = F.softmax(self.output(x), dim=1)\n \n return x", "_____no_output_____" ] ], [ [ "### Activation functions\n\nSo far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. 
Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).\n\n<img src=\"assets/activation.png\" width=700px>\n\nIn practice, the ReLU function is used almost exclusively as the activation function for hidden layers.", "_____no_output_____" ], [ "### Your Turn to Build a Network\n\n<img src=\"assets/mlp_mnist.png\" width=600px>\n\n> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.", "_____no_output_____" ] ], [ [ "## Your solution here\nclass DNN(nn.Module):\n def __init__(self):\n super().__init__()\n \n # Inputs to hidden layers linear transformation\n self.hidden1 = nn.Linear(784, 128)\n self.hidden2 = nn.Linear(128, 64)\n \n # Output layer, 10 units - one for each digit\n self.output = nn.Linear(64, 10)\n \n # Define sigmoid activation and softmax output \n# self.sigmoid = nn.Sigmoid()\n self.relu = nn.ReLU()\n self.softmax = nn.Softmax(dim=1)\n \n def forward(self, x):\n # Pass the input tensor through each of our operations\n x = self.hidden1(x)\n x = self.relu(x)\n x = self.hidden2(x)\n x = self.relu(x)\n x = self.output(x)\n x = self.softmax(x)\n \n return x", "_____no_output_____" ], [ "model = DNN()\nmodel", "_____no_output_____" ] ], [ [ "### Initializing weights and biases\n\nThe weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.", "_____no_output_____" ] ], [ [ "+\nprint(model.fc1.bias)", "_____no_output_____" ] ], [ [ "For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.", "_____no_output_____" ] ], [ [ "# Set biases to all zeros\nmodel.fc1.bias.data.fill_(0)", "_____no_output_____" ], [ "# sample from random normal with standard dev = 0.01\nmodel.fc1.weight.data.normal_(std=0.01)", "_____no_output_____" ] ], [ [ "### Forward pass\n\nNow that we have a network, let's see what happens when we pass in an image.", "_____no_output_____" ] ], [ [ "# Grab some data \ndataiter = iter(trainloader)\nimages, labels = dataiter.next()\n\n# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels) \nimages.resize_(64, 1, 784)\n# or images.resize_(images.shape[0], 1, 784) to automatically get batch size\n\n# Forward pass through the network\nimg_idx = 0\nps = model.forward(images[img_idx,:])\n\nimg = images[img_idx]\nhelper.view_classify(img.view(1, 28, 28), ps)", "_____no_output_____" ] ], [ [ "As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!\n\n### Using `nn.Sequential`\n\nPyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). 
Using this to build the equivalent network:", "_____no_output_____" ] ], [ [ "# Hyperparameters for our network\ninput_size = 784\nhidden_sizes = [128, 64]\noutput_size = 10\n\n# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),\n nn.ReLU(),\n nn.Linear(hidden_sizes[0], hidden_sizes[1]),\n nn.ReLU(),\n nn.Linear(hidden_sizes[1], output_size),\n nn.Softmax(dim=1))\nprint(model)\n\n# Forward pass through the network and display output\nimages, labels = next(iter(trainloader))\nimages.resize_(images.shape[0], 1, 784)\nps = model.forward(images[0,:])\nhelper.view_classify(images[0].view(1, 28, 28), ps)", "_____no_output_____" ] ], [ [ "Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output.\n\nThe operations are available by passing in the appropriate index. For example, if you want to get first Linear operation and look at the weights, you'd use `model[0]`.", "_____no_output_____" ] ], [ [ "print(model[0])\nmodel[0].weight", "_____no_output_____" ] ], [ [ "You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.", "_____no_output_____" ] ], [ [ "from collections import OrderedDict\nmodel = nn.Sequential(OrderedDict([\n ('fc1', nn.Linear(input_size, hidden_sizes[0])),\n ('relu1', nn.ReLU()),\n ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),\n ('relu2', nn.ReLU()),\n ('output', nn.Linear(hidden_sizes[1], output_size)),\n ('softmax', nn.Softmax(dim=1))]))\nmodel", "_____no_output_____" ] ], [ [ "Now you can access layers either by integer or the name", "_____no_output_____" ] ], [ [ "print(model[0])\nprint(model.fc1)", "_____no_output_____" ] ], [ [ "In the next notebook, we'll see how we can train a neural network to accuractly predict the numbers appearing in the MNIST images.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a4ab167a481498c24f3618988afcd97873d6f5d
230,472
ipynb
Jupyter Notebook
halo-parallel/ipynb/.ipynb_checkpoints/benchmark-checkpoint.ipynb
NeuPhysics/neutrino-halo-problem
153998423275bd8bc76b3b1b2306c2f0177c10de
[ "MIT" ]
3
2018-06-01T00:34:56.000Z
2018-11-17T09:34:46.000Z
halo-parallel/ipynb/.ipynb_checkpoints/benchmark-checkpoint.ipynb
NeuPhysics/neutrino-halo-problem
153998423275bd8bc76b3b1b2306c2f0177c10de
[ "MIT" ]
null
null
null
halo-parallel/ipynb/.ipynb_checkpoints/benchmark-checkpoint.ipynb
NeuPhysics/neutrino-halo-problem
153998423275bd8bc76b3b1b2306c2f0177c10de
[ "MIT" ]
2
2019-09-03T11:23:01.000Z
2020-04-24T15:38:13.000Z
144.587202
10,590
0.829298
[ [ [ "## Benchmark Data", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# path = \"../halo_sim_parallel_benchmark.record\"\n\npath = \"../gxx/benchmark-banhcall/euler-parallel-benchmark-openmp-bahcall.record\"", "_____no_output_____" ], [ "df = pd.read_csv(path, delimiter=', ')\ndf", "/Users/leima/anaconda/lib/python3.6/site-packages/ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "def barcharts(n):\n \n \n dfsub = df.loc[ (df['Ntop'] == n) ]\n\n dfsub[ dfsub['Size'] == 1000 ].plot.bar(x='OMPThread',y='Time')\n plt.title('Steps 1000')\n\n dfsub[ dfsub['Size'] == 10000 ].plot.bar(x='OMPThread',y='Time')\n plt.title('Steps 10000')\n\n dfsub[ dfsub['Size'] == 20000 ].plot.bar(x='OMPThread',y='Time')\n plt.title('Steps 20000')\n\n dfsub[ dfsub['Size'] == 30000 ].plot.bar(x='OMPThread',y='Time')\n plt.title('Steps 30000')\n\n dfsub[ dfsub['Size'] == 40000 ].plot.bar(x='OMPThread',y='Time')\n plt.title('Steps 40000')\n\n\n plt.show()", "_____no_output_____" ] ], [ [ "### df100", "_____no_output_____" ] ], [ [ "df100 = df.loc[ (df['Ntop'] == 100) ]\ndf100", "_____no_output_____" ], [ "df100.loc[ df100['Size'] == 1000 ]", "_____no_output_____" ], [ "barcharts(100)\n", "_____no_output_____" ] ], [ [ "### df1000", "_____no_output_____" ] ], [ [ "barcharts(1000)", "_____no_output_____" ] ], [ [ "### df2000", "_____no_output_____" ] ], [ [ "barcharts(2000)", "_____no_output_____" ] ], [ [ "### df3000", "_____no_output_____" ] ], [ [ "barcharts(3000)", "_____no_output_____" ], [ "## Testing\n\narr = np.array([ [0,1,2], [3,4,5], [6,7,8], [9,10,11] ])\nprint(arr)\narr = np.delete(arr,0, axis=0)", "[[ 0 1 2]\n [ 3 4 5]\n [ 6 7 8]\n [ 9 10 11]]\n" ], [ "arr[::2]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a4ab327eafe62c698f5ae3bad118cc76fffb08b
30,493
ipynb
Jupyter Notebook
notebooks/n1_start_binance_api.ipynb
konakov-ds/binance_bot
ed5e5f0cfd83abb7617b10a45f84ea63cc03b503
[ "MIT" ]
null
null
null
notebooks/n1_start_binance_api.ipynb
konakov-ds/binance_bot
ed5e5f0cfd83abb7617b10a45f84ea63cc03b503
[ "MIT" ]
null
null
null
notebooks/n1_start_binance_api.ipynb
konakov-ds/binance_bot
ed5e5f0cfd83abb7617b10a45f84ea63cc03b503
[ "MIT" ]
null
null
null
120.051181
24,020
0.861968
[ [ [ "#pip install python-binance", "_____no_output_____" ], [ "from binance import Client\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport time", "_____no_output_____" ], [ "with open('access.txt') as f:\n acc = f.readlines()\napi = acc[0].strip()\nkey = acc[1].strip()", "_____no_output_____" ], [ "client = Client(api,key)", "_____no_output_____" ], [ "def get_interval_data(currency, interval, lookback):\n interval_data = pd.DataFrame(client.get_historical_klines(currency, interval, lookback + ' minutes ago UTC'))\n interval_data = interval_data.iloc[:, :6]\n interval_data.columns = ['Time', 'Open', 'High','Low','Close', 'Volume']\n interval_data.set_index('Time', inplace=True)\n interval_data.index = pd.to_datetime(interval_data.index, unit ='ms')\n interval_data = interval_data.astype(float)\n return interval_data", "_____no_output_____" ], [ "frame = get_interval_data('DOGEUSDT', '1m', '30')", "_____no_output_____" ], [ "frame.Low.plot();\nframe.High.plot();", "_____no_output_____" ], [ "class TestTrade:\n def __init__(self, symb, qnty, entered=False):\n self.symb = symb\n self.qnty = qnty\n self.entered = entered\n self.buy_orders = []\n self.sell_orders = []\n \n def get_interval_data(self, interval, lookback):\n interval_data = pd.DataFrame(client.get_historical_klines(self.symb, interval, lookback + ' minutes ago UTC'))\n interval_data = interval_data.iloc[:, :6]\n interval_data.columns = ['Time', 'Open', 'High','Low','Close', 'Volume']\n interval_data.set_index('Time', inplace=True)\n interval_data.index = pd.to_datetime(interval_data.index, unit ='ms')\n interval_data = interval_data.astype(float)\n return interval_data\n \n def buy_order(self):\n while True:\n frame = self.get_interval_data('1m', '100')\n change = (frame.Open.pct_change() +1).cumprod() - 1\n if change[-1] < -0.005:\n \n order = client.create_order(symbol=self.symb,side='BUY', type='MARKET', quantity=self.qnty)\n print('BUY order executed')\n self.entered = True\n self.buy_orders.append(order)\n break\n \n def sell_order(self):\n time_buy = pd.to_datetime(self.buy_orders[-1]['transactTime'],unit='ms')\n while True:\n frame = self.get_interval_data('1m', '100')\n since_buy = frame.loc[frame.index > time_buy]\n if len(since_buy) > 0:\n change = (since_buy.Open.pct_change() +1).cumprod() -1\n if change[-1] > 0.005:\n order = client.create_order(symbol=self.symb,side='SELL', type='MARKET', quantity=self.qnty)\n print('SELL order executed')\n self.entered = False\n self.sell_orders.append(order)\n break\n \n def trade(self):\n \n while len(self.buy_orders) < 3:\n if not self.entered:\n self.buy_order()\n if self.entered:\n self.sell_order()\n time.sleep(10)", "_____no_output_____" ], [ "test_trade = TestTrade('DOGEUSDT', 100)\n#test_trade.trade()", "_____no_output_____" ], [ "b = [float(i['cummulativeQuoteQty']) for i in test_trade.buy_orders]", "_____no_output_____" ], [ "b", "_____no_output_____" ], [ "s = [float(i['cummulativeQuoteQty']) for i in test_trade.sell_orders]", "_____no_output_____" ], [ "s", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4ab9b39efdaec1f48ecf5a05d3cce59847e26b
7,414
ipynb
Jupyter Notebook
demos/demos_databases_apis/tigergraph/fraud_raw_REST_calls.ipynb
khmariem/pygraphistry
cbf7888c28bd3f286b27274452475ef9421dd00a
[ "BSD-3-Clause" ]
1,595
2015-08-24T07:27:20.000Z
2022-03-31T17:12:29.000Z
demos/demos_databases_apis/tigergraph/fraud_raw_REST_calls.ipynb
LinguList/pygraphistry
51c87200ff7448f47e03f0f7b626d334e94a2fa7
[ "BSD-3-Clause" ]
206
2015-08-17T04:44:59.000Z
2022-03-26T19:21:02.000Z
demos/demos_databases_apis/tigergraph/fraud_raw_REST_calls.ipynb
LinguList/pygraphistry
51c87200ff7448f47e03f0f7b626d334e94a2fa7
[ "BSD-3-Clause" ]
169
2015-08-27T17:55:05.000Z
2022-03-15T19:47:01.000Z
23.024845
125
0.504451
[ [ [ "# Tigergraph<>Graphistry Fraud Demo: Raw REST\n\nAccesses Tigergraph's fraud demo directly via manual REST calls", "_____no_output_____" ] ], [ [ "#!pip install graphistry\n\nimport pandas as pd\nimport graphistry\nimport requests\n\n#graphistry.register(key='MY_API_KEY', server='labs.graphistry.com', api=2)\n\nTIGER = \"http://MY_TIGER_SERVER:9000\"", "_____no_output_____" ], [ "#curl -X GET \"http://MY_TIGER_SERVER:9000/query/circleDetection?srcId=111\"\n\n# string -> dict\ndef query_raw(query_string):\n url = TIGER + \"/query/\" + query_string\n r = requests.get(url)\n return r.json()\n\n\ndef flatten (lst_of_lst):\n try:\n if type(lst_of_lst[0]) == list:\n return [item for sublist in lst_of_lst for item in sublist]\n else:\n return lst_of_lst\n except:\n print('fail', lst_of_lst)\n return lst_of_lst\n\n#str * dict -> dict\ndef named_edge_to_record(name, edge): \n record = {k: edge[k] for k in edge.keys() if not (type(edge[k]) == dict) }\n record['type'] = name\n nested = [k for k in edge.keys() if type(edge[k]) == dict]\n if len(nested) == 1:\n for k in edge[nested[0]].keys():\n record[k] = edge[nested[0]][k]\n else:\n for prefix in nested:\n for k in edge[nested[prefix]].keys():\n record[prefix + \"_\" + k] = edge[nested[prefix]][k]\n return record\n\n\ndef query(query_string):\n results = query_raw(query_string)['results']\n out = {}\n for o in results:\n for k in o.keys():\n if type(o[k]) == list:\n out[k] = flatten(o[k])\n out = flatten([[named_edge_to_record(k,v) for v in out[k]] for k in out.keys()])\n print('# results', len(out))\n return pd.DataFrame(out)\n\n \ndef plot_edges(edges):\n return graphistry.bind(source='from_id', destination='to_id').edges(edges).plot()", "_____no_output_____" ] ], [ [ "# 1. Fraud", "_____no_output_____" ], [ "## 1.a circleDetection", "_____no_output_____" ] ], [ [ "circle = query(\"circleDetection?srcId=10\")\ncircle.sample(3)", "_____no_output_____" ], [ "plot_edges(circle)", "_____no_output_____" ] ], [ [ "## 1.b fraudConnectivity", "_____no_output_____" ] ], [ [ "connectivity = query(\"fraudConnectivity?inputUser=111&trustScore=0.1\")\nconnectivity.sample(3)", "_____no_output_____" ], [ "plot_edges(connectivity)", "_____no_output_____" ] ], [ [ "## Combined", "_____no_output_____" ] ], [ [ "circle['provenance'] = 'circle'\nconnectivity['provenance'] = 'connectivity'\n\nplot_edges(pd.concat([circle, connectivity]))", "_____no_output_____" ] ], [ [ "## Color by type", "_____no_output_____" ] ], [ [ "edges = pd.concat([circle, connectivity])\n\nfroms = edges.rename(columns={'from_id': 'id', 'from_type': 'node_type'})[['id', 'node_type']]\ntos = edges.rename(columns={'to_id': 'id', 'to_type': 'node_type'})[['id', 'node_type']]\nnodes = pd.concat([froms, tos], ignore_index=True).drop_duplicates().dropna()\nnodes.sample(3)", "_____no_output_____" ], [ "nodes['node_type'].unique()", "_____no_output_____" ], [ "#https://labs.graphistry.com/docs/docs/palette.html\n\ntype2color = {\n 'User': 0,\n 'Transaction': 1,\n 'Payment_Instrument': 2,\n 'Device_Token': 3 \n}\n\nnodes['color'] = nodes['node_type'].apply(lambda type_str: type2color[type_str])\n\nnodes.sample(3)", "_____no_output_____" ], [ "graphistry.bind(source='from_id', destination='to_id', node='id', point_color='color').edges(edges).nodes(nodes).plot()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a4ac15ccccec757993b89d4ec005476d74cff6b
8,512
ipynb
Jupyter Notebook
2021/ferran/Day 3: Binary Diagnostic.ipynb
bbglab/adventofcode
65b6d8331d10f229b59232882d60024b08d69294
[ "MIT" ]
null
null
null
2021/ferran/Day 3: Binary Diagnostic.ipynb
bbglab/adventofcode
65b6d8331d10f229b59232882d60024b08d69294
[ "MIT" ]
null
null
null
2021/ferran/Day 3: Binary Diagnostic.ipynb
bbglab/adventofcode
65b6d8331d10f229b59232882d60024b08d69294
[ "MIT" ]
3
2016-12-02T09:20:42.000Z
2021-12-01T13:31:07.000Z
47.027624
435
0.646264
[ [ [ "# Day 3", "_____no_output_____" ] ], [ [ "--- Day 3: Binary Diagnostic ---\nThe submarine has been making some odd creaking noises, so you ask it to produce a diagnostic report just in case.\n\nThe diagnostic report (your puzzle input) consists of a list of binary numbers which, when decoded properly, can tell you many useful things about the conditions of the submarine. The first parameter to check is the power consumption.\n\nYou need to use the binary numbers in the diagnostic report to generate two new binary numbers (called the gamma rate and the epsilon rate). The power consumption can then be found by multiplying the gamma rate by the epsilon rate.\n\nEach bit in the gamma rate can be determined by finding the most common bit in the corresponding position of all numbers in the diagnostic report. For example, given the following diagnostic report:\n\n00100\n11110\n10110\n10111\n10101\n01111\n00111\n11100\n10000\n11001\n00010\n01010\nConsidering only the first bit of each number, there are five 0 bits and seven 1 bits. Since the most common bit is 1, the first bit of the gamma rate is 1.\n\nThe most common second bit of the numbers in the diagnostic report is 0, so the second bit of the gamma rate is 0.\n\nThe most common value of the third, fourth, and fifth bits are 1, 1, and 0, respectively, and so the final three bits of the gamma rate are 110.\n\nSo, the gamma rate is the binary number 10110, or 22 in decimal.\n\nThe epsilon rate is calculated in a similar way; rather than use the most common bit, the least common bit from each position is used. So, the epsilon rate is 01001, or 9 in decimal. Multiplying the gamma rate (22) by the epsilon rate (9) produces the power consumption, 198.\n\nUse the binary numbers in your diagnostic report to calculate the gamma rate and epsilon rate, then multiply them together. What is the power consumption of the submarine? (Be sure to represent your answer in decimal, not binary.)", "_____no_output_____" ] ], [ [ "import numpy as np\n\nbitmatrix = []\nwith open('3.input', 'rt') as f:\n for l in f.readlines():\n bitmatrix.append(list(map(int, list(l.rstrip()))))\nbitmatrix = np.array(bitmatrix)", "_____no_output_____" ], [ "gamma = ''.join(list(map(lambda x: str(int(x)), bitmatrix.mean(axis=0) > 0.5)))\nepsilon = ''.join(list(map(lambda x: str(int(x)), bitmatrix.mean(axis=0) < 0.5)))\nint(gamma, base=2) * int(epsilon, base=2)", "_____no_output_____" ] ], [ [ "--- Part Two ---\nNext, you should verify the life support rating, which can be determined by multiplying the oxygen generator rating by the CO2 scrubber rating.\n\nBoth the oxygen generator rating and the CO2 scrubber rating are values that can be found in your diagnostic report - finding them is the tricky part. Both values are located using a similar process that involves filtering out values until only one remains. Before searching for either rating value, start with the full list of binary numbers from your diagnostic report and consider just the first bit of those numbers. Then:\n\nKeep only numbers selected by the bit criteria for the type of rating value for which you are searching. 
Discard numbers which do not match the bit criteria.\nIf you only have one number left, stop; this is the rating value for which you are searching.\nOtherwise, repeat the process, considering the next bit to the right.\nThe bit criteria depends on which type of rating value you want to find:\n\nTo find oxygen generator rating, determine the most common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 1 in the position being considered.\nTo find CO2 scrubber rating, determine the least common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 0 in the position being considered.\nFor example, to determine the oxygen generator rating value using the same example diagnostic report from above:\n\nStart with all 12 numbers and consider only the first bit of each number. There are more 1 bits (7) than 0 bits (5), so keep only the 7 numbers with a 1 in the first position: 11110, 10110, 10111, 10101, 11100, 10000, and 11001.\nThen, consider the second bit of the 7 remaining numbers: there are more 0 bits (4) than 1 bits (3), so keep only the 4 numbers with a 0 in the second position: 10110, 10111, 10101, and 10000.\nIn the third position, three of the four numbers have a 1, so keep those three: 10110, 10111, and 10101.\nIn the fourth position, two of the three numbers have a 1, so keep those two: 10110 and 10111.\nIn the fifth position, there are an equal number of 0 bits and 1 bits (one each). So, to find the oxygen generator rating, keep the number with a 1 in that position: 10111.\nAs there is only one number left, stop; the oxygen generator rating is 10111, or 23 in decimal.\nThen, to determine the CO2 scrubber rating value from the same example above:\n\nStart again with all 12 numbers and consider only the first bit of each number. There are fewer 0 bits (5) than 1 bits (7), so keep only the 5 numbers with a 0 in the first position: 00100, 01111, 00111, 00010, and 01010.\nThen, consider the second bit of the 5 remaining numbers: there are fewer 1 bits (2) than 0 bits (3), so keep only the 2 numbers with a 1 in the second position: 01111 and 01010.\nIn the third position, there are an equal number of 0 bits and 1 bits (one each). So, to find the CO2 scrubber rating, keep the number with a 0 in that position: 01010.\nAs there is only one number left, stop; the CO2 scrubber rating is 01010, or 10 in decimal.\nFinally, to find the life support rating, multiply the oxygen generator rating (23) by the CO2 scrubber rating (10) to get 230.\n\nUse the binary numbers in your diagnostic report to calculate the oxygen generator rating and CO2 scrubber rating, then multiply them together. What is the life support rating of the submarine? (Be sure to represent your answer in decimal, not binary.)", "_____no_output_____" ] ], [ [ "N = bitmatrix.shape[1]  # number of bit positions in the report (12 here)\nb = np.eye(N)\n\noxygen = bitmatrix\nco2 = bitmatrix\n\nfor i in range(N):\n    \n    if oxygen.shape[0] > 1:\n        more = oxygen.mean(axis=0)[i] >= 0.5\n        oxygen = oxygen[np.dot(oxygen, b[i]) == more]\n\n    if co2.shape[0] > 1:\n        less = co2.mean(axis=0)[i] < 0.5\n        co2 = co2[np.dot(co2, b[i]) == less]\n\noxygen = ''.join(list(map(str, oxygen[0])))\nco2 = ''.join(list(map(str, co2[0])))\n\nint(oxygen, base=2) * int(co2, base=2)", "_____no_output_____" ] ] ]
[ "markdown", "raw", "code", "raw", "code" ]
[ [ "markdown" ], [ "raw" ], [ "code", "code" ], [ "raw" ], [ "code" ] ]
4a4ac2fc770687c4612d66857979afdd5fea053d
283,227
ipynb
Jupyter Notebook
Images/organized_notebook.ipynb
klw11j/covid-19-project
8ca877a0e731bc098d508ddb62d3bda40ba573c6
[ "MIT" ]
2
2020-07-11T14:48:39.000Z
2020-07-18T14:44:05.000Z
Images/organized_notebook.ipynb
klw11j/covid-19-project
8ca877a0e731bc098d508ddb62d3bda40ba573c6
[ "MIT" ]
null
null
null
Images/organized_notebook.ipynb
klw11j/covid-19-project
8ca877a0e731bc098d508ddb62d3bda40ba573c6
[ "MIT" ]
null
null
null
128.79809
49,288
0.815911
[ [ [ "# Project 1\n- **Team Members**: Chika Ozodiegwu, Kelsey Wyatt, Libardo Lambrano, Kurt Pessa\n\n![](Images/florida_covid19_data.jpg)\n\n### Data set used:\n* https://open-fdoh.hub.arcgis.com/datasets/florida-covid19-case-line-data\n", "_____no_output_____" ] ], [ [ "import requests\nimport pandas as pd\nimport io\nimport datetime as dt\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import JSON\n\ndf = pd.read_csv(\"Resources/Florida_COVID19_Case_Line_Data_new.csv\")\n\ndf.head(3)", "_____no_output_____" ], [ "#Clean dataframe\n\nnew_csv_data_df = df[['ObjectId', \"County\",'Age',\"Age_group\", \"Gender\", \"Jurisdiction\", \"Travel_related\", \"Hospitalized\",\"Case1\"]]\nnew_csv_data_df.head()", "_____no_output_____" ], [ "#Create new csv\n\nnew_csv_data_df.to_csv (\"new_covid_dataframe.csv\")", "_____no_output_____" ] ], [ [ "# There is no change in hospitalizations since reopening\n### Research Question to Answer:\n* “There is no change in hospitalizations since reopening.” ", "_____no_output_____" ], [ "### Part 1: Six (6) Steps for Hypothesis Testing\n\n<details><summary> click to expand </summary>\n\n#### 1. Identify\n- **Populations** (divide Hospitalization data in two groups of data):\n 1. Prior to opening\n 2. After opening \n* Decide on the **date**:\n * May 4th - restaurants opening to 25% capacity\n * June (Miami opening beaches)\n- Distribution:\n * Distribution\n\n#### 2. State the hypotheses\n- **H0**: There is no change in hospitalizations after Florida has reopened\n- **H1**: There is a change in hospitalizations after Florida has reopened\n\n#### 3. Characteristics of the comparison distribution\n- Population means, standard deviations\n\n#### 4. Critical values\n- p = 0.05\n- Our hypothesis is nondirectional so our hypothesis test is **two-tailed**\n\n#### 5. Calculate\n\n#### 6. 
Decide!\n    \n</details>", "_____no_output_____" ], [ "### Part 2: Visualization", "_____no_output_____" ] ], [ [ "#Calculate total number of cases \nTotal_covid_cases = new_csv_data_df[\"ObjectId\"].nunique()\nTotal_covid_cases = pd.DataFrame({\"Total Number of Cases\": [Total_covid_cases]})\nTotal_covid_cases", "_____no_output_____" ], [ "#Total number of cases per county\ntotal_cases_county = new_csv_data_df.groupby(by=\"County\").count().reset_index().loc[:,[\"County\",\"Case1\"]]\ntotal_cases_county.rename(columns={\"County\": \"County\", \"Case1\": \"Total Cases\"})", "_____no_output_____" ], [ "#Total number of cases per county sorted\ntotal_cases_county = total_cases_county.sort_values('Case1',ascending=False)\ntotal_cases_county.head(20)", "_____no_output_____" ], [ "#Create bar chart for total cases per county\ntotal_cases_county.plot(kind='bar',x='County',y='Case1', title =\"Total Cases per County\", figsize=(15, 10), color=\"blue\")\n\nplt.title(\"Total Cases per County\")\nplt.xlabel(\"County\")\nplt.ylabel(\"Number of Cases\")\nplt.legend([\"Number of Cases\"])\nplt.show()", "_____no_output_____" ], [ "#Calculate top 10 counties with total cases\ntop10_county_cases = total_cases_county.sort_values(by=\"Case1\",ascending=False).head(10)\ntop10_county_cases[\"Rank\"] = np.arange(1,11)\ntop10_county_cases.set_index(\"Rank\").style.format({\"Case1\":\"{:,}\"})", "_____no_output_____" ], [ "#Create bar chart for total cases for top 10 counties\ntop10_county_cases.plot(kind='bar',x='County',y='Case1', title =\"Total Cases for Top 10 Counties\", figsize=(15, 10), color=\"blue\")\n\nplt.title(\"Total Cases for Top 10 Counties\")\nplt.xlabel(\"County\")\nplt.ylabel(\"Number of Cases\")\nplt.legend([\"Number of Cases\"])\nplt.show()", "_____no_output_____" ], [ "#Total number of cases by gender\ntotal_cases_gender = new_csv_data_df.groupby(by=\"Gender\").count().reset_index().loc[:,[\"Gender\",\"Case1\"]]\ntotal_cases_gender.rename(columns={\"Gender\": \"Gender\", \"Case1\": \"Total Cases\"})", "_____no_output_____" ], [ "#Create pie chart for total number of cases by gender\ntotal_cases_gender = new_csv_data_df[\"Gender\"].value_counts()\n\ncolors=[\"pink\", \"blue\", \"green\"]\n\nexplode=[0.1,0.1,0.1]\n\ntotal_cases_gender.plot.pie(explode=explode,colors=colors, autopct=\"%1.1f%%\", shadow=True, subplots=True, startangle=120);\n\nplt.title(\"Total Number of Cases in Males vs. 
Females\")", "_____no_output_____" ], [ "#Filter data to show only cases that include hospitalization\nfilt = new_csv_data_df[\"Hospitalized\"] == \"YES\"\ndf = new_csv_data_df[filt]\ndf", "_____no_output_____" ], [ "#Calculate total number of hospitalizations\npd.DataFrame({\n \"Total Hospitalizations (Florida)\" : [df.shape[0]]\n}).style.format(\"{:,}\")", "_____no_output_____" ], [ "#Total number of hospitalization for all counties\nhospitalizations_county = df.groupby(by=\"County\").count().reset_index().loc[:,[\"County\",\"Hospitalized\"]]\nhospitalizations_county ", "_____no_output_____" ], [ "#Total number of hospitalization for all counties sorted\nhospitalizations_county = hospitalizations_county.sort_values('Hospitalized',ascending=False)\nhospitalizations_county.head(10)", "_____no_output_____" ], [ "#Create bar chart for total hospitalizations per county\nhospitalizations_county.plot(kind='bar',x='County',y='Hospitalized', title =\"Total Hospitalizations per County\", figsize=(15, 10), color=\"blue\")\n\nplt.title(\"Total Hospitalizations per County\")\nplt.xlabel(\"County\")\nplt.ylabel(\"Number of Hospitalizations\")\nplt.show()", "_____no_output_____" ], [ "#Calculate top 10 counties with hospitalizations\ntop10_county = hospitalizations_county.sort_values(by=\"Hospitalized\",ascending=False).head(10)\ntop10_county[\"Rank\"] = np.arange(1,11)\ntop10_county.set_index(\"Rank\").style.format({\"Hospitalized\":\"{:,}\"})", "_____no_output_____" ], [ "#Create a bar chart for the top 10 counties with hospitalizations\ntop10_county.plot(kind='bar',x='County',y='Hospitalized', title =\"Total Hospitalizations for the Top 10 Counties\", figsize=(15, 10), color=\"blue\")\n\nplt.title(\"Total Hospitalizations for the Top 10 Counties\")\nplt.xlabel(\"County\")\nplt.ylabel(\"Number of Hospitalizations\")\nplt.show()", "_____no_output_____" ], [ "#Average number of hospitalization by county (Not done yet) (Kelsey)\naverage = hospitalizations_county[\"Hospitalized\"].mean()\naverage", "_____no_output_____" ], [ "#Filter data to show only cases that include hospitalization\nfilt = new_csv_data_df[\"Hospitalized\"] == \"YES\"\ndf = new_csv_data_df[filt]\ndf", "_____no_output_____" ], [ "#Percentage of hospitalization by gender # Create Visualization (Libardo)\n#code on starter_notebook.ipynb\n", "_____no_output_____" ], [ "new_csv_data_df", "_____no_output_____" ], [ "import seaborn as sns\nnew_csv_data_df['Count']=np.where(new_csv_data_df['Hospitalized']=='YES', 1,0)\nnew_csv_data_df.head()\nnew_csv_data_df['Count2']=1\nnew_csv_data_df['Case1']=pd.to_datetime(new_csv_data_df['Case1'])\ncase_plot_df=pd.DataFrame(new_csv_data_df.groupby(['Hospitalized', pd.Grouper(key='Case1', freq='W')])['Count2'].count())\ncase_plot_df.reset_index(inplace=True)\nplt.subplots(figsize=[15,7])\nsns.lineplot(x='Case1', y='Count2', data=case_plot_df, hue='Hospitalized')\nplt.xticks(rotation=45)", "_____no_output_____" ], [ "#Percentage of hospitalization by age group (Chika) #Create visualization", "_____no_output_____" ], [ "#Hospitalization by case date/month (needs more) (Libardo) \n", "_____no_output_____" ], [ "#Compare travel-related hospitalization to non-travelrelated cases (Not done yet) (Chika)", "_____no_output_____" ], [ "#Divide hospitalization data in two groups of data prior to reopening and create new dataframe (Kurt) consider total (Chika)", "_____no_output_____" ], [ "#Divide hospitalization data in two groups of data after reopening and create new dataframe (Kurt) condider total (Chika)", 
"_____no_output_____" ], [ "#Percentage of hospitalization before shut down (Not done yet) (Rephrase) (Chika)", "_____no_output_____" ], [ "#Percentage of hospitalization during shut down (backburner)", "_____no_output_____" ], [ "#Percentage of hospitalization after reopening(Not done yet) (Rephrase) (Chika)", "_____no_output_____" ], [ "#Statistical testing between before and after reopening", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4acde6b7202ad2b5d77344f22599cad2ec33c0
17,747
ipynb
Jupyter Notebook
notebooks/source/logistic_regression.ipynb
gully/numpyro
e49101d307366974072fd9939eaaee102e687810
[ "Apache-2.0" ]
3
2020-08-25T14:31:08.000Z
2020-08-26T02:23:08.000Z
notebooks/source/logistic_regression.ipynb
ucals/numpyro
566a5311d660d28a630188063c03a018165a38a9
[ "Apache-2.0" ]
null
null
null
notebooks/source/logistic_regression.ipynb
ucals/numpyro
566a5311d660d28a630188063c03a018165a38a9
[ "Apache-2.0" ]
1
2020-12-23T13:27:39.000Z
2020-12-23T13:27:39.000Z
47.83558
207
0.419846
[ [ [ "# Benchmark NumPyro in large dataset", "_____no_output_____" ], [ "This notebook uses `numpyro` and replicates experiments in references [1] which evaluates the performance of NUTS on various frameworks. The benchmark is run with CUDA 10.1 on a NVIDIA RTX 2070.", "_____no_output_____" ] ], [ [ "import time\n\nimport numpy as np\n\nimport jax.numpy as jnp\nfrom jax import random\n\nimport numpyro\nimport numpyro.distributions as dist\nfrom numpyro.examples.datasets import COVTYPE, load_dataset\nfrom numpyro.infer import HMC, MCMC, NUTS\nassert numpyro.__version__.startswith('0.3.0')\n\n# NB: replace gpu by cpu to run this notebook in cpu\nnumpyro.set_platform(\"gpu\")", "_____no_output_____" ] ], [ [ "We do preprocessing steps as in [source code](https://github.com/google-research/google-research/blob/master/simple_probabilistic_programming/no_u_turn_sampler/logistic_regression.py) of reference [1]:", "_____no_output_____" ] ], [ [ "_, fetch = load_dataset(COVTYPE, shuffle=False)\nfeatures, labels = fetch()\n\n# normalize features and add intercept\nfeatures = (features - features.mean(0)) / features.std(0)\nfeatures = jnp.hstack([features, jnp.ones((features.shape[0], 1))])\n\n# make binary feature\n_, counts = np.unique(labels, return_counts=True)\nspecific_category = jnp.argmax(counts)\nlabels = (labels == specific_category)\n\nN, dim = features.shape\nprint(\"Data shape:\", features.shape)\nprint(\"Label distribution: {} has label 1, {} has label 0\"\n .format(labels.sum(), N - labels.sum()))", "Data shape: (581012, 55)\nLabel distribution: 211840 has label 1, 369172 has label 0\n" ] ], [ [ "Now, we construct the model:", "_____no_output_____" ] ], [ [ "def model(data, labels):\n coefs = numpyro.sample('coefs', dist.Normal(jnp.zeros(dim), jnp.ones(dim)))\n logits = jnp.dot(data, coefs)\n return numpyro.sample('obs', dist.Bernoulli(logits=logits), obs=labels)", "_____no_output_____" ] ], [ [ "## Benchmark HMC", "_____no_output_____" ] ], [ [ "step_size = jnp.sqrt(0.5 / N)\nkernel = HMC(model, step_size=step_size, trajectory_length=(10 * step_size), adapt_step_size=False)\nmcmc = MCMC(kernel, num_warmup=500, num_samples=500, progress_bar=False)\nmcmc.warmup(random.PRNGKey(2019), features, labels, extra_fields=('num_steps',))\nmcmc.get_extra_fields()['num_steps'].sum().copy()\ntic = time.time()\nmcmc.run(random.PRNGKey(2020), features, labels, extra_fields=['num_steps'])\nnum_leapfrogs = mcmc.get_extra_fields()['num_steps'].sum().copy()\ntoc = time.time()\nprint(\"number of leapfrog steps:\", num_leapfrogs)\nprint(\"avg. time for each step :\", (toc - tic) / num_leapfrogs)\nmcmc.print_summary()", "number of leapfrog steps: 5000\navg. 
time for each step : 0.0015829430103302\n\n mean std median 5.0% 95.0% n_eff r_hat\n coefs[0] 1.98 0.00 1.98 1.98 1.98 4.10 1.56\n coefs[1] -0.03 0.00 -0.03 -0.03 -0.03 3.68 1.68\n coefs[2] -0.12 0.00 -0.12 -0.12 -0.12 5.51 1.13\n coefs[3] -0.30 0.00 -0.30 -0.30 -0.30 3.82 1.58\n coefs[4] -0.10 0.00 -0.10 -0.10 -0.10 5.27 1.02\n coefs[5] -0.15 0.00 -0.15 -0.16 -0.15 2.61 3.16\n coefs[6] -0.04 0.00 -0.04 -0.04 -0.04 2.63 2.70\n coefs[7] -0.49 0.00 -0.49 -0.49 -0.49 9.52 1.19\n coefs[8] 0.25 0.00 0.25 0.24 0.25 3.27 2.03\n coefs[9] -0.02 0.00 -0.02 -0.02 -0.02 7.07 1.29\n coefs[10] -0.23 0.00 -0.23 -0.23 -0.23 3.46 1.62\n coefs[11] -0.32 0.00 -0.32 -0.32 -0.32 3.69 1.26\n coefs[12] -0.55 0.00 -0.55 -0.55 -0.55 2.73 2.35\n coefs[13] -1.96 0.00 -1.96 -1.96 -1.96 2.60 2.70\n coefs[14] 0.25 0.00 0.25 0.25 0.26 8.79 1.15\n coefs[15] -1.05 0.00 -1.05 -1.05 -1.05 4.11 1.72\n coefs[16] -1.25 0.00 -1.25 -1.25 -1.25 5.46 1.12\n coefs[17] -0.21 0.00 -0.21 -0.21 -0.21 4.67 1.17\n coefs[18] -0.08 0.00 -0.08 -0.09 -0.08 2.42 3.03\n coefs[19] -0.68 0.00 -0.68 -0.68 -0.68 2.63 2.28\n coefs[20] -0.13 0.00 -0.13 -0.13 -0.13 2.92 2.06\n coefs[21] -0.02 0.00 -0.02 -0.02 -0.02 7.91 1.20\n coefs[22] 0.02 0.00 0.02 0.02 0.02 2.84 2.23\n coefs[23] -0.15 0.00 -0.15 -0.15 -0.15 2.86 2.37\n coefs[24] -0.12 0.00 -0.12 -0.13 -0.12 4.10 1.22\n coefs[25] -0.33 0.00 -0.33 -0.33 -0.33 6.50 1.18\n coefs[26] -0.18 0.00 -0.18 -0.18 -0.18 3.81 1.25\n coefs[27] -1.20 0.00 -1.20 -1.20 -1.20 3.24 1.79\n coefs[28] -0.06 0.00 -0.06 -0.06 -0.06 7.77 1.06\n coefs[29] -0.02 0.00 -0.02 -0.02 -0.02 6.04 1.29\n coefs[30] -0.04 0.00 -0.04 -0.04 -0.04 2.93 2.03\n coefs[31] -0.06 0.00 -0.06 -0.06 -0.06 4.97 1.32\n coefs[32] -0.02 0.00 -0.02 -0.02 -0.02 6.53 1.03\n coefs[33] -0.03 0.00 -0.03 -0.03 -0.03 10.40 1.12\n coefs[34] 0.11 0.00 0.11 0.11 0.11 6.60 1.21\n coefs[35] 0.07 0.00 0.07 0.07 0.07 2.61 2.52\n coefs[36] -0.00 0.00 -0.00 -0.00 -0.00 8.25 1.13\n coefs[37] -0.07 0.00 -0.07 -0.07 -0.07 2.81 2.24\n coefs[38] -0.03 0.00 -0.03 -0.03 -0.03 4.04 1.53\n coefs[39] -0.06 0.00 -0.06 -0.07 -0.06 7.33 1.14\n coefs[40] -0.01 0.00 -0.01 -0.01 -0.00 2.56 2.47\n coefs[41] -0.06 0.00 -0.06 -0.06 -0.06 3.05 1.99\n coefs[42] -0.39 0.00 -0.39 -0.39 -0.39 2.72 2.37\n coefs[43] -0.27 0.00 -0.27 -0.27 -0.27 6.32 1.17\n coefs[44] -0.07 0.00 -0.07 -0.07 -0.07 6.21 1.21\n coefs[45] -0.25 0.00 -0.25 -0.25 -0.25 2.57 2.51\n coefs[46] -0.09 0.00 -0.09 -0.09 -0.09 8.37 1.06\n coefs[47] -0.12 0.00 -0.12 -0.12 -0.12 3.07 1.81\n coefs[48] -0.15 0.00 -0.15 -0.15 -0.15 4.74 1.36\n coefs[49] -0.04 0.00 -0.04 -0.05 -0.04 3.05 2.22\n coefs[50] -0.95 0.00 -0.95 -0.95 -0.95 8.14 1.00\n coefs[51] -0.32 0.00 -0.32 -0.32 -0.32 4.03 1.76\n coefs[52] -0.30 0.00 -0.30 -0.30 -0.29 13.63 1.01\n coefs[53] -0.30 0.00 -0.30 -0.30 -0.30 8.20 1.02\n coefs[54] -1.76 0.00 -1.76 -1.76 -1.76 3.04 1.66\n\nNumber of divergences: 0\n" ] ], [ [ "In CPU, we get `avg. time for each step : 0.02782863507270813`.", "_____no_output_____" ], [ "## Benchmark NUTS", "_____no_output_____" ] ], [ [ "mcmc = MCMC(NUTS(model), num_warmup=50, num_samples=50, progress_bar=False)\nmcmc.warmup(random.PRNGKey(2019), features, labels, extra_fields=('num_steps',))\nmcmc.get_extra_fields()['num_steps'].sum().copy()\ntic = time.time()\nmcmc.run(random.PRNGKey(2020), features, labels, extra_fields=['num_steps'])\nnum_leapfrogs = mcmc.get_extra_fields()['num_steps'].sum().copy()\ntoc = time.time()\nprint(\"number of leapfrog steps:\", num_leapfrogs)\nprint(\"avg. 
time for each step :\", (toc - tic) / num_leapfrogs)\nmcmc.print_summary()", "number of leapfrog steps: 48238\navg. time for each step : 0.002193006909087827\n\n mean std median 5.0% 95.0% n_eff r_hat\n coefs[0] 1.98 0.01 1.98 1.96 1.99 52.40 0.99\n coefs[1] -0.04 0.01 -0.04 -0.05 -0.04 55.51 0.98\n coefs[2] -0.06 0.01 -0.06 -0.08 -0.05 41.29 0.99\n coefs[3] -0.30 0.00 -0.30 -0.31 -0.30 46.64 1.01\n coefs[4] -0.09 0.00 -0.09 -0.10 -0.09 110.57 0.98\n coefs[5] -0.14 0.00 -0.14 -0.15 -0.14 20.64 1.11\n coefs[6] 0.24 0.05 0.24 0.17 0.31 33.80 1.02\n coefs[7] -0.66 0.03 -0.66 -0.69 -0.61 35.12 1.04\n coefs[8] 0.58 0.06 0.58 0.49 0.65 33.92 1.03\n coefs[9] -0.01 0.00 -0.01 -0.02 -0.01 44.05 0.99\n coefs[10] 0.65 0.82 0.65 -0.89 1.53 9.97 0.99\n coefs[11] 0.06 0.36 0.06 -0.62 0.45 10.02 0.99\n coefs[12] 0.33 0.81 0.34 -1.20 1.21 9.97 0.99\n coefs[13] -1.51 0.42 -1.57 -2.11 -0.81 9.45 0.98\n coefs[14] -0.54 0.65 -0.40 -1.26 0.23 13.26 1.01\n coefs[15] -1.94 0.52 -2.00 -2.67 -1.21 4.10 1.15\n coefs[16] -1.02 0.51 -0.93 -1.86 -0.24 29.83 0.98\n coefs[17] -0.19 0.08 -0.20 -0.31 -0.07 4.02 2.09\n coefs[18] -0.56 0.59 -0.44 -1.47 0.21 7.52 1.22\n coefs[19] -0.92 0.68 -0.96 -1.81 0.15 6.34 1.17\n coefs[20] -0.99 0.70 -0.79 -2.20 -0.09 13.80 1.31\n coefs[21] -0.01 0.01 -0.01 -0.02 0.01 3.98 2.13\n coefs[22] 0.02 0.02 0.02 -0.01 0.06 4.20 2.06\n coefs[23] -0.13 0.13 -0.14 -0.32 0.07 3.77 2.13\n coefs[24] -0.10 0.08 -0.11 -0.22 0.02 3.92 2.14\n coefs[25] -0.30 0.12 -0.31 -0.47 -0.10 3.98 2.14\n coefs[26] -0.14 0.09 -0.15 -0.28 0.01 3.91 2.18\n coefs[27] -0.87 0.63 -0.81 -1.90 -0.08 19.96 1.00\n coefs[28] -0.85 0.65 -0.69 -1.90 -0.04 30.40 0.98\n coefs[29] -0.02 0.04 -0.03 -0.07 0.04 3.85 2.21\n coefs[30] -0.03 0.04 -0.03 -0.09 0.04 4.50 2.02\n coefs[31] -0.05 0.03 -0.06 -0.10 -0.01 3.84 2.12\n coefs[32] -0.00 0.05 -0.01 -0.07 0.07 3.94 2.14\n coefs[33] 0.02 0.07 0.01 -0.08 0.13 3.99 2.13\n coefs[34] 0.11 0.02 0.11 0.07 0.14 4.33 1.89\n coefs[35] 0.09 0.13 0.08 -0.10 0.29 4.02 2.12\n coefs[36] 0.02 0.17 0.01 -0.22 0.30 3.98 2.13\n coefs[37] -0.03 0.10 -0.04 -0.19 0.13 3.92 2.15\n coefs[38] -0.04 0.02 -0.04 -0.07 -0.02 4.21 2.12\n coefs[39] -0.06 0.04 -0.06 -0.11 0.01 4.18 2.12\n coefs[40] 0.01 0.02 0.00 -0.03 0.04 3.92 2.15\n coefs[41] -0.05 0.02 -0.05 -0.08 -0.01 4.16 1.99\n coefs[42] -0.37 0.22 -0.39 -0.70 -0.02 3.95 2.15\n coefs[43] -0.24 0.12 -0.25 -0.42 -0.05 3.93 2.16\n coefs[44] -0.04 0.11 -0.05 -0.21 0.13 3.94 2.14\n coefs[45] -0.19 0.16 -0.21 -0.44 0.06 4.00 2.12\n coefs[46] -0.06 0.15 -0.08 -0.28 0.18 3.93 2.16\n coefs[47] -0.13 0.03 -0.13 -0.18 -0.09 4.26 2.09\n coefs[48] -0.13 0.03 -0.14 -0.18 -0.08 4.00 2.17\n coefs[49] -0.04 0.01 -0.04 -0.05 -0.02 5.36 1.63\n coefs[50] -1.36 0.31 -1.36 -1.84 -0.86 7.24 0.99\n coefs[51] -0.28 0.09 -0.30 -0.42 -0.15 3.99 2.12\n coefs[52] -0.27 0.08 -0.28 -0.40 -0.14 3.98 2.12\n coefs[53] -0.28 0.07 -0.29 -0.40 -0.18 4.02 2.12\n coefs[54] -1.98 0.14 -1.99 -2.19 -1.80 28.80 1.00\n\nNumber of divergences: 0\n" ] ], [ [ "In CPU, we get `avg. time for each step : 0.028006251705287415`.", "_____no_output_____" ], [ "## Compare to other frameworks", "_____no_output_____" ], [ "| | HMC | NUTS |\n| ------------- |----------:|----------:|\n| Edward2 (CPU) | | 56.1 ms |\n| Edward2 (GPU) | | 9.4 ms |\n| Pyro (CPU) | 35.4 ms | 35.3 ms |\n| Pyro (GPU) | 3.5 ms | 4.2 ms |\n| NumPyro (CPU) | 27.8 ms | 28.0 ms |\n| NumPyro (GPU) | 1.6 ms | 2.2 ms |", "_____no_output_____" ], [ "Note that in some situtation, HMC is slower than NUTS. 
The reason is that the number of leapfrog steps in each HMC trajectory is fixed at $10$, while NUTS adapts its trajectory length at every step.", "_____no_output_____" ], [ "**Some takeaways:**\n+ The overhead of iterative NUTS is pretty small, so most of the computation time is indeed spent evaluating the potential function and its gradient.\n+ GPU outperforms CPU by a large margin. The data is large, so evaluating the potential function on the GPU is clearly faster than doing so on the CPU.", "_____no_output_____" ], [ "## References\n\n1. `Simple, Distributed, and Accelerated Probabilistic Programming,` [arxiv](https://arxiv.org/abs/1811.02091)<br>\nDustin Tran, Matthew D. Hoffman, Dave Moore, Christopher Suter, Srinivas Vasudevan, Alexey Radul, Matthew Johnson, Rif A. Saurous", "_____no_output_____" ] ] ]
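The fixed-versus-adaptive trajectory length is visible in how the two kernels are constructed. A minimal sketch, assuming NumPyro's `numpyro.infer` API and reusing the `model`, `features`, and `labels` from the benchmark cells above; the `trajectory_length` value is illustrative, not the setting used in the runs above:

```python
from jax import random
from numpyro.infer import MCMC, HMC, NUTS

# HMC: once step-size adaptation finishes, the number of leapfrog steps per
# sample stays fixed, roughly trajectory_length / step_size.
hmc = MCMC(HMC(model, trajectory_length=1.0), num_warmup=50, num_samples=50)
hmc.run(random.PRNGKey(0), features, labels)

# NUTS: the trajectory is grown by recursive doubling, so the number of
# leapfrog steps varies from sample to sample.
nuts = MCMC(NUTS(model), num_warmup=50, num_samples=50)
nuts.run(random.PRNGKey(0), features, labels)
```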
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a4ad780f530a06e05d7c64fe13cd79eb8be1b9c
116,892
ipynb
Jupyter Notebook
development/cursor_coordinate_dev.ipynb
kernb2/raman-spectra-decomp-analysis
9ee09021e8e57735209812b062607de7e123beea
[ "MIT" ]
14
2019-04-23T19:09:56.000Z
2022-03-19T16:51:41.000Z
development/cursor_coordinate_dev.ipynb
kernb2/raman-spectra-decomp-analysis
9ee09021e8e57735209812b062607de7e123beea
[ "MIT" ]
53
2019-04-23T19:55:35.000Z
2020-05-20T03:43:11.000Z
development/cursor_coordinate_dev.ipynb
kernb2/raman-spectra-decomp-analysis
9ee09021e8e57735209812b062607de7e123beea
[ "MIT" ]
10
2020-03-31T18:37:52.000Z
2022-02-12T23:14:16.000Z
129.592018
79,487
0.809901
[ [ [ "\"\"\"\nShow how to modify the coordinate formatter to report the image \"z\"\nvalue of the nearest pixel given x and y\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\nX = 10*np.random.rand(5,3)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.imshow(X, cmap=cm.jet, interpolation='nearest')\n\nnumrows, numcols = X.shape\ndef format_coord(x, y):\n col = int(x+0.5)\n row = int(y+0.5)\n if col>=0 and col<numcols and row>=0 and row<numrows:\n z = X[row,col]\n return 'x=%1.4f, y=%1.4f, z=%1.4f'%(x, y, z)\n else:\n return 'x=%1.4f, y=%1.4f'%(x, y)\n\nax.format_coord = format_coord\nplt.show()", "_____no_output_____" ], [ "# oh shit this might be enough right here\n%matplotlib notebook\n\n\n# this code block will return click coordinates too the consule. Not sure how to use in jupyter notebook\nimport matplotlib.pyplot as plt\n\ndef onclick(event):\n print(event.xdata, event.ydata)\n\nfig,ax = plt.subplots()\nax.plot(range(10))\nfig.canvas.mpl_connect('button_press_event', onclick)\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
4a4adbccaed7d722680dab1b6b5f9229c522d219
47,641
ipynb
Jupyter Notebook
Model/Flight Delay/Flight Delay Prediction.ipynb
GulshanKataria/Air-Trafific-Control-System-
3a38644dcd7cd3d60ad05cec047cacdf64c2e988
[ "MIT" ]
1
2020-07-04T08:21:52.000Z
2020-07-04T08:21:52.000Z
Model/Flight Delay/Flight Delay Prediction.ipynb
GulshanKataria/Air-Trafific-Control-System-
3a38644dcd7cd3d60ad05cec047cacdf64c2e988
[ "MIT" ]
null
null
null
Model/Flight Delay/Flight Delay Prediction.ipynb
GulshanKataria/Air-Trafific-Control-System-
3a38644dcd7cd3d60ad05cec047cacdf64c2e988
[ "MIT" ]
null
null
null
31.845588
1,816
0.364518
[ [ [ "import pandas as pd\n\ndf = pd.read_csv(r'C:\\Users\\rohit\\Documents\\Flight Delay\\flightdata.csv')\ndf.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.isnull().values.any()", "_____no_output_____" ], [ "df.isnull().sum()", "_____no_output_____" ], [ "df = df.drop('Unnamed: 25', axis=1)\ndf.isnull().sum()", "_____no_output_____" ], [ "df = pd.read_csv(r'C:\\Users\\rohit\\Documents\\Flight Delay\\flightdata.csv')", "_____no_output_____" ], [ "df = df[[\"MONTH\", \"DAY_OF_MONTH\", \"DAY_OF_WEEK\", \"ORIGIN\", \"DEST\", \"CRS_DEP_TIME\", \"DEP_DEL15\", \"CRS_ARR_TIME\", \"ARR_DEL15\"]]\ndf.isnull().sum()", "_____no_output_____" ], [ "df[df.isnull().values.any(axis=1)].head()", "_____no_output_____" ], [ "df = df.fillna({'ARR_DEL15': 1})\ndf = df.fillna({'DEP_DEL15': 1})\ndf.iloc[177:185]", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "import math\n\nfor index, row in df.iterrows():\n df.loc[index, 'CRS_DEP_TIME'] = math.floor(row['CRS_DEP_TIME'] / 100)\n df.loc[index, 'CRS_ARR_TIME'] = math.floor(row['CRS_ARR_TIME'] / 100)\ndf.head()", "_____no_output_____" ], [ "df = pd.get_dummies(df, columns=['ORIGIN', 'DEST'])\ndf.head()", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\ntrain_x, test_x, train_y, test_y = train_test_split(df.drop(['ARR_DEL15','DEP_DEL15'], axis=1), df[['ARR_DEL15','DEP_DEL15']], test_size=0.2, random_state=42)", "_____no_output_____" ], [ "train_x.shape", "_____no_output_____" ], [ "test_x.shape", "_____no_output_____" ], [ "train_y.shape", "_____no_output_____" ], [ "test_y.shape", "_____no_output_____" ], [ "from sklearn.ensemble import RandomForestClassifier\n\nmodel = RandomForestClassifier(random_state=13)\nmodel.fit(train_x, train_y)", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n" ], [ "predicted = model.predict(test_x)\nmodel.score(test_x, test_y)", "_____no_output_____" ], [ "from sklearn.metrics import roc_auc_score\nprobabilities = model.predict_proba(test_x)", "_____no_output_____" ], [ "roc_auc_score(test_y, probabilities[0])", "_____no_output_____" ], [ "from sklearn.metrics import multilabel_confusion_matrix\nmultilabel_confusion_matrix(test_y, predicted)", "_____no_output_____" ], [ "from sklearn.metrics import precision_score\n\ntrain_predictions = model.predict(train_x)\nprecision_score(train_y, train_predictions, average = None)", "_____no_output_____" ], [ "from sklearn.metrics import recall_score\n\nrecall_score(train_y, train_predictions, average = None)", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set()", "_____no_output_____" ], [ "from sklearn.metrics import roc_curve\n\nfpr, tpr, _ = roc_curve(test_y, probabilities[0])\nplt.plot(fpr, tpr)\nplt.plot([0, 1], [0, 1], color='grey', lw=1, linestyle='--')\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')", "_____no_output_____" ], [ "def predict_delay(departure_date_time, arrival_date_time, origin, destination):\n from datetime import datetime\n\n try:\n departure_date_time_parsed = datetime.strptime(departure_date_time, '%d/%m/%Y %H:%M:%S')\n arrival_date_time_parsed = datetime.strptime(departure_date_time, '%d/%m/%Y %H:%M:%S')\n except ValueError as e:\n return 'Error parsing date/time - {}'.format(e)\n\n month = 
departure_date_time_parsed.month\n    day = departure_date_time_parsed.day\n    day_of_week = departure_date_time_parsed.isoweekday()\n    hour = departure_date_time_parsed.hour\n    arrival_hour = arrival_date_time_parsed.hour\n\n    origin = origin.upper()\n    destination = destination.upper()\n\n    input = [{'MONTH': month,\n              'DAY_OF_MONTH': day,\n              'DAY_OF_WEEK': day_of_week,\n              'CRS_DEP_TIME': hour,\n              'CRS_ARR_TIME': arrival_hour,\n              'ORIGIN_ATL': 1 if origin == 'ATL' else 0,\n              'ORIGIN_DTW': 1 if origin == 'DTW' else 0,\n              'ORIGIN_JFK': 1 if origin == 'JFK' else 0,\n              'ORIGIN_MSP': 1 if origin == 'MSP' else 0,\n              'ORIGIN_SEA': 1 if origin == 'SEA' else 0,\n              'DEST_ATL': 1 if destination == 'ATL' else 0,\n              'DEST_DTW': 1 if destination == 'DTW' else 0,\n              'DEST_JFK': 1 if destination == 'JFK' else 0,\n              'DEST_MSP': 1 if destination == 'MSP' else 0,\n              'DEST_SEA': 1 if destination == 'SEA' else 0 }]\n\n    return model.predict_proba(pd.DataFrame(input))[0][0]", "_____no_output_____" ] ] ]
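Two side notes on the cells above. First, the per-row `iterrows` loop that truncates the scheduled times to their hour can be vectorized; a sketch assuming the same `df` columns:

```python
# Integer-divide the hhmm-encoded times by 100 to keep only the hour,
# equivalent to the math.floor loop but done column-wise.
df['CRS_DEP_TIME'] = df['CRS_DEP_TIME'] // 100
df['CRS_ARR_TIME'] = df['CRS_ARR_TIME'] // 100
```

Second, a quick usage sketch for `predict_delay`; the date, times, and airport codes are illustrative. With this two-output model, `predict_proba(...)[0][0]` yields the class-probability pair for the first output column (`ARR_DEL15`), i.e. roughly `[P(on time), P(delayed 15+ min)]`:

```python
# Hypothetical 13/10/2018 JFK -> ATL flight
predict_delay('13/10/2018 10:00:00', '13/10/2018 13:00:00', 'JFK', 'ATL')
```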
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4ae44d5290551445ce79febb9c690dfdb60451
271,736
ipynb
Jupyter Notebook
YOLOX.ipynb
Neuralearn/Parkinson-detection
bab64564b1e82d0fc4127ef898cebc236a58967a
[ "MIT" ]
3
2021-09-11T19:04:13.000Z
2021-10-06T06:29:11.000Z
YOLOX.ipynb
Neuralearn/Parkinson-detection
bab64564b1e82d0fc4127ef898cebc236a58967a
[ "MIT" ]
null
null
null
YOLOX.ipynb
Neuralearn/Parkinson-detection
bab64564b1e82d0fc4127ef898cebc236a58967a
[ "MIT" ]
3
2021-08-10T10:40:03.000Z
2021-08-20T16:57:40.000Z
825.945289
248,961
0.914686
[ [ [ "<a href=\"https://colab.research.google.com/github/Neuralearn/Parkinson-detection/blob/main/YOLOX.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "<a align=\"left\" href=\"https://github.com/Megvii-BaseDetection/YOLOX\" target=\"_blank\">\n<img src=\"https://raw.githubusercontent.com/Megvii-BaseDetection/YOLOX/main/assets/logo.png\"></a>\n\nThis is the **YOLOX 🚀 notebook for inference demo** authored by **Megvii-BaseDetection**, and is freely available for redistribution under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). \nFor more information please visit https://github.com/Megvii-BaseDetection/YOLOX and https://github.com/Megvii-BaseDetection. Thank you!\n", "_____no_output_____" ], [ "# Inference demo", "_____no_output_____" ], [ "Step1. Download a pretrained model from the benchmark table to.", "_____no_output_____" ], [ "## Benchmark\n\n#### Standard Models.\n|Model |size |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |\n| ------ |:---: | :---: |:---: |:---: | :---: | :----: |\n|[YOLOX-s](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_s.py) |640 |39.6 |9.8 |9.0 | 26.8 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EW62gmO2vnNNs5npxjzunVwB9p307qqygaCkXdTO88BLUg?e=NMTQYw) |\n|[YOLOX-m](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_m.py) |640 |46.4 |12.3 |25.3 |73.8| [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERMTP7VFqrVBrXKMU7Vl4TcBQs0SUeCT7kvc-JdIbej4tQ?e=1MDo9y) |\n|[YOLOX-l](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_l.py) |640 |50.0 |14.5 |54.2| 155.6 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EWA8w_IEOzBKvuueBqfaZh0BeoG5sVzR-XYbOJO4YlOkRw?e=wHWOBE) |\n|[YOLOX-x](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_x.py) |640 |**51.2** | 17.3 |99.1 |281.9 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdgVPHBziOVBtGAXHfeHI5kBza0q9yyueMGdT0wXZfI1rQ?e=tABO5u) |\n|[YOLOX-Darknet53](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolov3.py) |640 | 47.4 | 11.1 |63.7 | 185.3 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZ-MV1r_fMFPkPrNjvbJEMoBLOLAnXH-XKEB77w8LhXL6Q?e=mf6wOc) |\n\n#### Light Models.\n|Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |\n| ------ |:---: | :---: |:---: |:---: | :---: |\n|[YOLOX-Nano](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/nano.py) |416 |25.3 | 0.91 |1.08 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdcREey-krhLtdtSnxolxiUBjWMy6EFdiaO9bdOwZ5ygCQ?e=yQpdds) |\n|[YOLOX-Tiny](./exps/default/yolox_tiny.py) |416 |31.7 | 5.06 |6.45 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EYtjNFPqvZBBrQ-VowLcSr4B6Z5TdTflUsr_gO2CwhC3bQ?e=SBTwXj) |", "_____no_output_____" ], [ "Step2. Use either -n or -f to specify your detector's config. 
For example:", "_____no_output_____" ] ], [ [ "!git clone https://github.com/Megvii-BaseDetection/YOLOX.git", "Cloning into 'YOLOX'...\nremote: Enumerating objects: 1221, done.\u001b[K\nremote: Counting objects: 100% (13/13), done.\u001b[K\nremote: Compressing objects: 100% (11/11), done.\u001b[K\nremote: Total 1221 (delta 3), reused 10 (delta 2), pack-reused 1208\u001b[K\nReceiving objects: 100% (1221/1221), 5.88 MiB | 10.15 MiB/s, done.\nResolving deltas: 100% (683/683), done.\n" ], [ "pip install -r requirements.txt", "Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 2)) (1.19.5)\nRequirement already satisfied: torch>=1.7 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 3)) (1.9.0+cu102)\nRequirement already satisfied: opencv_python in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 4)) (4.1.2.30)\nCollecting loguru\n Downloading loguru-0.5.3-py3-none-any.whl (57 kB)\n\u001b[K |████████████████████████████████| 57 kB 2.3 MB/s \n\u001b[?25hRequirement already satisfied: scikit-image in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 6)) (0.16.2)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 7)) (4.62.0)\nRequirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 8)) (0.10.0+cu102)\nRequirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 9)) (7.1.2)\nCollecting thop\n Downloading thop-0.0.31.post2005241907-py3-none-any.whl (8.7 kB)\nCollecting ninja\n Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB)\n\u001b[K |████████████████████████████████| 108 kB 9.7 MB/s \n\u001b[?25hRequirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 12)) (0.8.9)\nRequirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 13)) (2.6.0)\nCollecting onnx==1.8.1\n Downloading onnx-1.8.1-cp37-cp37m-manylinux2010_x86_64.whl (14.5 MB)\n\u001b[K |████████████████████████████████| 14.5 MB 8.5 kB/s \n\u001b[?25hCollecting onnxruntime==1.8.0\n Downloading onnxruntime-1.8.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.5 MB)\n\u001b[K |████████████████████████████████| 4.5 MB 34.3 MB/s \n\u001b[?25hCollecting onnx-simplifier==0.3.5\n Downloading onnx-simplifier-0.3.5.tar.gz (13 kB)\nRequirement already satisfied: protobuf in /usr/local/lib/python3.7/dist-packages (from onnx==1.8.1->-r requirements.txt (line 16)) (3.17.3)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from onnx==1.8.1->-r requirements.txt (line 16)) (1.15.0)\nRequirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.7/dist-packages (from onnx==1.8.1->-r requirements.txt (line 16)) (3.7.4.3)\nRequirement already satisfied: flatbuffers in /usr/local/lib/python3.7/dist-packages (from onnxruntime==1.8.0->-r requirements.txt (line 17)) (1.12)\nCollecting onnxoptimizer>=0.2.5\n Downloading onnxoptimizer-0.2.6-cp37-cp37m-manylinux2014_x86_64.whl (466 kB)\n\u001b[K |████████████████████████████████| 466 kB 30.6 MB/s \n\u001b[?25hRequirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements.txt (line 6)) (3.2.2)\nRequirement already satisfied: PyWavelets>=0.4.0 in 
/usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements.txt (line 6)) (1.1.1)\nRequirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements.txt (line 6)) (2.4.1)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements.txt (line 6)) (1.4.1)\nRequirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements.txt (line 6)) (2.6.2)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements.txt (line 6)) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements.txt (line 6)) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements.txt (line 6)) (1.3.1)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements.txt (line 6)) (2.8.2)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (2.23.0)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (57.4.0)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (1.0.1)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (1.34.0)\nRequirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (0.37.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (3.3.4)\nRequirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (0.12.0)\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (0.6.1)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (0.4.5)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (1.8.0)\nRequirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements.txt (line 13)) (1.39.0)\nRequirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements.txt (line 13)) (4.7.2)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements.txt (line 13)) (4.2.2)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements.txt (line 13)) (0.2.8)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in 
/usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements.txt (line 13)) (1.3.0)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard->-r requirements.txt (line 13)) (4.6.4)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard->-r requirements.txt (line 13)) (0.4.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements.txt (line 13)) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements.txt (line 13)) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements.txt (line 13)) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements.txt (line 13)) (2021.5.30)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements.txt (line 13)) (3.1.1)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->markdown>=2.6.8->tensorboard->-r requirements.txt (line 13)) (3.5.0)\nBuilding wheels for collected packages: onnx-simplifier\n Building wheel for onnx-simplifier (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for onnx-simplifier: filename=onnx_simplifier-0.3.5-py3-none-any.whl size=12878 sha256=bed1a70d913839622eeae624c613d2096e6a4af56b12a1b1fcf85f4bed3d0239\n Stored in directory: /root/.cache/pip/wheels/8a/b4/1b/6acdd4eb854b215cd4aa1c18ca79399f9d34728edaff47ecce\nSuccessfully built onnx-simplifier\nInstalling collected packages: onnx, onnxruntime, onnxoptimizer, thop, onnx-simplifier, ninja, loguru\nSuccessfully installed loguru-0.5.3 ninja-1.10.2 onnx-1.8.1 onnx-simplifier-0.3.5 onnxoptimizer-0.2.6 onnxruntime-1.8.0 thop-0.0.31.post2005241907\n" ], [ "pip install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'", "Requirement already satisfied: cython in /usr/local/lib/python3.7/dist-packages (0.29.24)\nCollecting git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI\n Cloning https://github.com/cocodataset/cocoapi.git to /tmp/pip-req-build-g65a7rw_\n Running command git clone -q https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-g65a7rw_\nRequirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.7/dist-packages (from pycocotools==2.0) (57.4.0)\nRequirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.7/dist-packages (from pycocotools==2.0) (0.29.24)\nRequirement already satisfied: matplotlib>=2.1.0 in /usr/local/lib/python3.7/dist-packages (from pycocotools==2.0) (3.2.2)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (2.8.2)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from 
matplotlib>=2.1.0->pycocotools==2.0) (0.10.0)\nRequirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (1.19.5)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (1.3.1)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib>=2.1.0->pycocotools==2.0) (1.15.0)\nBuilding wheels for collected packages: pycocotools\n Building wheel for pycocotools (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pycocotools: filename=pycocotools-2.0-cp37-cp37m-linux_x86_64.whl size=263924 sha256=eec33f31cf9158602e878030d48c2b06ab53c20616a8cf7fb3d7d843a52e8277\n Stored in directory: /tmp/pip-ephem-wheel-cache-6ewg_sob/wheels/e2/6b/1d/344ac773c7495ea0b85eb228bc66daec7400a143a92d36b7b1\nSuccessfully built pycocotools\nInstalling collected packages: pycocotools\n Attempting uninstall: pycocotools\n Found existing installation: pycocotools 2.0.2\n Uninstalling pycocotools-2.0.2:\n Successfully uninstalled pycocotools-2.0.2\nSuccessfully installed pycocotools-2.0\n" ], [ "!python tools/demo.py image -n yolox-s -c /content/YOLOX/models/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result", "Traceback (most recent call last):\n File \"tools/demo.py\", line 14, in <module>\n from yolox.data.data_augment import ValTransform\nModuleNotFoundError: No module named 'yolox'\n" ] ], [ [ "![dog.jpg](data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAJAAwADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD88IYd0ZjbpUSxx7f723+GrNujKn3/APgFNaNlmDfIc/1rzwBo1Kj5MbasLH+5WTZ/s021jaKT7/8AwBv4quxxyNt3P8q/1pNXA9F/ZD8Xah4D+MkGoWNzbwtcQTJ5tx91cpnfxzkMu4flX3ppeqaV440nSdd1zwlby38zbXRZpT/aSPEqCWOJ8lSoVsAHq1fnJ8L9TXw18RNG1pnVEtr+FpXZ+Nm9Qc+2GbPtX6EfELw74d8QePvD+n6f4n1e7l8OSusF1pt4ltbXSNDkRYTJ+UsqrKDgj0NfA8XUYyxNKUpWTWvy/wCHObEQqyUXCSWttXt/wdbHqPwt8deHZPAaTaP4VSwg03z53urd28qExlfNikTk+YA2SOtQ+OPFnh/xNqj6l4JSzhuZLezltdWtb93lmuoW3o5fB4HzJ5RBwK84vP2V9c+HdjafE7wz4b1Ky8QXejRRapcXWqvFYXHmzJhr/nDSeWrIJMZ+Zfnxmuo0Xxdpfi7QToPi7wlf2viPTpZ4p7VZot0cYVjHKHjyHGWVeT0XKE18lGdKvLlwVRNrV3eqZwVMLKVZSoybl9q7+8yfD9r488SfD3xFp954ST+wrW/aK8t0s386YhfN3h9nlrGNrYQMclc9eKr/ABG8G/CcaXo8cnxF8PQnWtDR/BV/Z3LmPVpTM1pJs8rJidJHjDo44EqnGN1dA2h/HpfCv9tW3xYsNItXSwgWzlma5jXLMJN8QAkTy/l+ZM5O7P8AerxDx58F/jJpPx5f46aF4b0G40eKWOK9021vElE0h2iW48pwPNSSHaWGPMG3qTtx6+Hq0cZelVlyyirp39NPmdNWVKpUUL7W69tGQeA/G3i7wH8P9U1j48eNr/XtK0bTY9EtftCLJ9jjEzFLLYgAZRMqlSc+oNdP8F/jt8B/2iNa1fw/eeG7rwlYaZpNpFf2Gm2f7y1vXl2JCm/lUZE+91Pm8YK1Ql+GPh34uRt490fxnstr/VP9M8EJbNaW0KvMyGIOQIpwo+dWwcBvuZUtXJf8FTPgr+0QfDPh/wDas8I/2TZeENY0OytdSuPBG4FobNG8u7vzgOkyt5cbScDPln+8a9hYGea05+81OO2lkls216f1oayo160bptO+nZeb7vyPU/2jv24vgD+yXJonjv4a62tz
4o0rVHi/4QDxLc7ZNUR0WL7REiO3CxuwLkbwGyNwUhvHv2M9H+JDXWseOtae4TS7j/iY2ETOyJa2aTfO0jyHL7iyws6Nxt5Gflr5a+Hfh2+/aI+M0OtfE7xghZrWFdU8Q3SLuWFG8uN+weQlvLA7Hk19xaP4Tj+F3wufwfF4wuE+Se182/tmMVxptwqiWEAHDbnXfxwXVq5q2Fll2WrDubnPu/8Ahu6RFKnKhTtVk5S7v5frY9l8H/tKaT8MfE2vfCH4ieAbe58C3erPa6SiX6Nc6Gku37QiSjHSRmAJJAj5B67rd5+0p4S8KtrTaDpWl+IIL2/hgs9NtX+z2F4kbJFc3cXlbvI8sN5qt/y2PzhF3NXzZZ/FLSdLFvb/ANqpPrX2d9L8P2ssKO0k0j5RY8j55jCrMjAZB655rzPRNY8beJvHgvtJ8Q39nomnSpa6omxEf7YWc+SQeVOPvcY+XFYVsNUru82/dSeu3lodlfGYirGKaskrvS2x9Z+O/iv4Z8V3jf2T4eunsbzY3lav++n0u3g3xSM9xFndbENGdzgEdfXbmfDv4O+JNU1qDUvBPiq6sBpvk6zqSabqSXU9rZt+7F2kcpyuzb0B47oobdWZ8K/Buj+Lteszf39rqNlaxI9/b2tyybkRFM8LmP7+NyqcZBPSuy8Saf8ADuHWLjRdB8KxJZvKJ5bWffCyncxKo6HzMY45OSF5rGlCFBX8rW6dtfvON4mjRkptavqv8jvPhvqXjb45eJm0PxReWGoaPdJHp+iQM6W9xDKZZftDSxgkeWSrO0owXLYAwqs3cah8MbrQdWtvBnhnWINQlttUffdROyozttRziThQBuyeT8vGN1fPPh261DS/GGl29tqrWNncSxea0sLTHbvUJsKbWXO7y9xJ+9X15r2i+F9Q8NpomuTJpviG7ibUbDzfmSH7HKpkYvkDIkXc0R/gXP8AC1eLmDx0a/PzL2UI6pLfX8Uk7W/yMswxtaWGvSS5G3dddLf5nmup+H9F+Gtr4v8AiFrelWt5c6bYPdadZ6u7eU0f2hYHaUjBy4Zgi+qtXkHhz9q7xh8M/EWr/EbwPbf2Tp2o2VvZWGh/8fEVq+7MrRZA5G3hn6BsEdK6bVviZp/ib4xXa+PPElrMmnayt1Fp1rN9qCyLN5mzjAcHfGRFyD8v3azP7D1C1+K2m6f45+FGraP4Qub+5a6tdZs2YyRMjfaLvA5ym+N+3O3tWVfLauOjTlNOK5Lf3Vq9GtrtP1sjy4YXESpwXK9FrpdK76/ed78DfjFrn7SXxs8PXnhOa9s/EkUUtx4yntUW2WSHpGyPkhh/CQRyd2Nte7/AX41eBfA/w1j+HfxA1Jra9sPG95oc8t/M0rRv808U0khJ+R42XJB2Ats/hr42+GfjjwD4Nt9Tt7X4ovbvY2H2CKLUr9LR1eV1jkuvMjB/0YRrHjJJT5iBnFR/HLxN8E9D+F802j/HXUrzxjf2a2WkLpaTXumNDI++SYTuF2vvdnReSn3Om2pwmQ4jWoppJxs1azburLy0007ndhMBicVBttLv3fRJLfY+2NZ8Z/D/AMWfFjWvBnirwlBcXPhdYnvdUsHd0jR12bJ4v4vvYPUAMpqTwfrmg6t9r8P6B8OtUvNKit/IfUotSV9Pt98uxHAkfZjH71sdB1HzV8N/s5/tDR/DP4uaT4V8C+P/ABR488aa94jOiapLq2juLe1gm09Yt7oPn8yCRY92SQUXJ55r3r4pfAn9oLxdHZ/sp3ni2LS/D9lBPYXsWh2y21jHAWV/tbwOfMuDIjcNuI8xm+70rtfDdChR+sVZvRNb2i33ta+2nm1p1PTllUMJho1Ztvdb2T+b6d/Mi8ZXmoeOvDck2i+KrXxDo+j3V1YS6bbzNMbN4Xy8Ue/LMmNsqKCQU2kcV0+seG/hH4i+Cdj4c1rxP/bctpv1TRtGutqTQ3MaN5liO2JE3NtyMlcCvFviF8D7D9hu1XS/A/jDXpv7S02H7ffzv5X2OSF98EsWMRpIU8xD1OxmHTbVTxV+09p8njKHQfHnhv8AsO5l0uxuNNuIraVk1K5HzussnWJzb7WRhwfrXh4evUoq1FucO9rO6203tY8OFedN80G5PbRX09TlPix+0J4mbx5f+MNP1u6j8H6vpEVrst4cJp9tJb4KypjKusm5hzkHbgGvJPBPgnx/+094gaT4peLXt/AvhJF07RNIurl4Y445NvlOTGPlhd1+YplydvavWfjN4d8P/tAaxpngX4X6dqz6deW9y2qaNo14uy81SdcW11K8gCJyu1mc9FwMVleBfHnjD4XeH9H0n4haw7voVqumWv2zZDDY2cbNHGshjGFjjReXG8jdneRzX02Fq0Flk6mHp++7KK69rt+t7dj0Mrw/1mpCnhY81So7WvZ6Xe/TS5c+EfgH4mXH2rxJ4y0dbK7k8Qvb6JpPk/ZosWq+VBKIj8z7vmdA5OwS5PNbvxHvvGVn8H9c8cG/sJtX+Hl/Z+KGukvG82ZoXaOXj7qvsaSMqn392K8s8TfEv4nat+0d4X1660q/1fwSuiI+qS2t5DPYR6r5u8KhyZEkjRdpBAEglXGSvH0zpvjrSfAfwxtdNmSwiuNUv5J9b1eW2S4S1SXf/o4J+Rrlk+UQnOz77BdvNYfCY1VfaYyaTUb8qd1FbLXTW+rPaowrrF81Wajyq7Sd7LSyv66fI4340ftDL+0Z8TtI1jR9YtZvDNlYWNxPay2bOJn8r96mck4AbaV4+9yKPih8fGt7yHw/q3i24sNHh3vFb6c6+dJNGm+BEB4cZVVK56V4toPw58San4+uPhD8PbmfT7Hwrqm66vftnnbtMmhb7JcHGDOH2+SQOS6t/drE/aA/Zf8A2qPhjoPg/wAQfFjTWd9Rsrm31d9NdikhEvmQPITgZ2P8qoAUEXPNeJjckoY3FtYipFL4rPe92ePmGBji8evaz916v+vQ6v42fFL4teLvgne+IPC/hu60q+1q1fTvFVvLcr5i2wt3kjlGD/q2dtpPVD7VR/ZF/bK8cfFC+/4Ux8eLPzHmnhTS9etbNmmZGRYzFcyZPmuXTK4AxuxXn/wx0Kz1KO40eTRLqa4hfddefMxZt+75t5PTCt8tdJ4d8N2Pwr8VaV4m0F5dFsNO1KBndLlmiWQSrJwDz823BYHpxWdL6jhVUwsqeulrLtt30tt5Hm1IYPDU6mH9nrfR/wBdOx6z401T4qfC7Wrq18P3N/pXh+z1RL/Q9IsN8iNF9q+dZHPLZj4fJPLYx3r0bxt+3t4m8SeGr/XvCeg+GdD0/QbyzsLKDXNN3veec3KxhOYo4RulO8nAXHWs349ftJat4u+E32K3jsjpun35ZEtUVVm2Ll845UBm/GvKvAPhnwx8cPh/rfxI8WWCadonh23W4utLg1J3S8uWbyokDoBIiH+JjgA9Ca8vA0Z4rEzvBqF25XvrotWlvptfY4Z1niJq6aS3v1/pWPT7f4ofsr+PPhn4hsfidoPiPWfFuuXT2F5r2l7vs8duXwLqzLkIsaj5xFjkrnrWd8MZvD9/8V4
dPksH1q2/sh9Nl+0Ov7yZUYQPAR+8iuVCxsk3Cb9wA+7XBx63od9Zxaxp/wBlmSBRF9ntfmZWXgK+e/8AOqvg3xv8O18YaDr9rrzxX+lNJqWqS3832aPz4pfMit0IyWkyqt0KZVefmNenUVNVIOM3aDTXkl0012v879zqoYudOk4202XdL1Pvfwf+1l4RXxNrnwT8W6D/AMI34htWia6tdXhih+0TSRJIZgU/5aH5Xb3bNc/+074o8dXng3VvD1rrC3LRazba3rN/BeJGtnCNpt5cxktEP3eWZx5b9/vMtfNHg/X/AId6X+1Np/x48K+LdR8WeHkvPt2sweI4f9KuJpYW+0RFOSxQyt2wQqhT8te56l8SPhL8Y/EHiHR/gj4M0O2n1HSBZS6pfzLE62zNvNqI+qxseCD+AWnUx2GTtXk53bWqv7uv2nbrbdbFVKkKq1m99n2t30/4Ys/GL4ra9p3wf0H48WsNrFLfav8A2XdPaoxtpNqZ850xlCWXKoDxXyp+0h4w8M/HjxRbfEi801NF8Q6latb65awbcXD27fu9QHGP38fysDyjxfxda92+N3x08N6OviP9kfXPA39m6Ppdwt7pNxausMVnv05DEjxn78ZuvMw+cYZQCdrV8yaT4VvPF2uQ6fpKXtzbSXUNn59xbPaLHcyIhMIMgIVwW465G09GqqtLDfXOfDQaulfs7dfm9Vc82slGq1Fdr/5nCyanJp/heDw7qWqrNbWDyvZ2u9l8tJnZy+PQmsjUvBem6tNN4bkhf7W9ukqMtt5zTDr8gHLcc/Suy+NX7Ovxu+GuoX9xpvhKXX7O0tZFluNLdT5aRRZMux8P5KFtpbHJVsD5q4Lw7r3jS6vtL0+ezuE12ZW+yyxQvDtyvP7wd/mwqjruxXTQlWVPmVn6fl8zqVOVblcXr27eRmXMmu6lqjeC9W8UXGqvqOqHUtUe4s03bgixxOkg+fDRrynAG1cetamvXVrqWuQ/bJpdixeVawOm3bt469Kt/s26L4P8TeNrzxF8TvE9h4Q0ye//ALOi1S6RvIt/LR0Rjk/KxdeSeAWyeFrqvGnhf4Xw/DnSI9+vTeN7LxHcJqkUEKPpzWZb9xNHODliE/h6Puz8vFeljKfLhk5Sta/e7fb17HqY+MqODhS0Wl3ff09Tm9U1K8s47aztn3wwJLAjLuPmb9uF/wBkArxUOn3mneItQkj1R0SG2iK/aH/iYdfy+7/wGrP9vW+oW8cNm/2JVvGTymRlLIqsdx+v3t1aGg2vhddYsptes3msGn/0iygfynmQq2FD/wAOTt+avAozUfdta54qqOT5JaN9Susi3MlnHGmIQzeUjdNp55Hv1q1I3i2VfsM3iTbFaXCy/Zfm+Y7lI3gfJj+LkVe0WDS7fUPLupme7m/dNYJCpH2gthAjpktj5fxr2zTf2Q/gv8NfDPhzUfiN8akXxHqF+0viWDex+yx7MpbxgAlpBM0aGU/J94CpxOIw2CjFYiSu27aXenZF1afsqS5nvey6/M+etUs777CYZFVTDPsngd1Em88/c64+Xn06Vo/GTxpq3jPwj4M+HcTtBo3hLQ0t7LTYnUL9pkffcXUmAN0khbljn7tfQ+ufBf4LeB/jF4N8P2qT+J49b1G8h166fbMb542VN0WWHyIWZ25yNvPPFc9400v4D6fN4kbw34nlHha/v3061v8AUvDz3EFw8HRra4g5Y/w/8C5rmp4uWPnGVFPlXvaq110d35/oOGAqyw7lGSu2tL6vqnr0vv8AI87+COhvr+tJrepJcQ2+ssFtfNRFkXTkXO9AeFLbGx7dN1fbf7Lv7Q37Mfw+8ZQ6Db6Pr0Opa2yPp1nBpryraxzRIX/enA+Z0/Cvib4HeHdW8ReLpIbHUrddN0qWHWbydLN7sWr7Wjgt3AOcKeqjgHrxur6j8P8AhP8AZt8C+CU8fat8Y31XVL7S2ltYp3/fXU23ljHH88XztsC8AfwV8tmtJ4LEqV1pqla95Nrp/Wh51OMqL5tLX6/l9594eMtHX4hfCd9Pj0d7m21u3milsPOaN2SROVyOV+TvXzX+zp4J0nwZ4lv/AIA3Wr7nTV5rPTYvEdm0Ajs5Jci1idxtnRdqsMH+JRivob4M+Norv4TaJrOs3bedZ6ZGdRa4dd6O0SlhIR90gfpXzf8AGxfB99+01pPjXxV8UfszaJKdWeC62slrZxLmDy4zwo3qzZ5cnn+EV6OYZxldHA4evNuo5KHNG9vhWr0V1dp/8Me3VrYV048+t7eui7n1V8RPDtjFFa/B61gur6PVGd7+XyWb7i/ITj7qA/rXgum/DH4vQ3dlbrprC48K65PeQPcTf8f1vMGgeHA/vR7X9N6rnbXU+DfiN8aPHWreGPiJd3Gojwze3FzJcbLDyfLtgmYH35MjbjtIBAHzYP8Adrr9L+MB8SfEK+8Ixx6i1zYwRNcXD237nbKjFP3nRs7ccd6+oxNTA5tSWKxOGlTU04wjJaeycUpKySsrNu+uup0wq0aVVVJU21ayutotdPPzPzD/AGyvgn8YvCvj3x/4stfA2ry6HD4je8v9S0tF8m4MqIEcJndtw204GA+7/er5r16zj8O3CXkCTpfNas1vFLtMavuwjODz8vzDafxr9yvil4N8I+PvBV74bh8B2WsfaYvK1GwXdFcwxM3EqFPm+Urnjn5c14B+2x/wTx+EPxW+F3iHxP4DgsIfiDaWX9oxXUDIl1qCxRKDFOBgS7gv+swDmvRpQoZdh6NKm048qjzXSWlklvulo9raaami9lV+B6vbt8n3Pyp8QfEC60ua1SxRluWbe91Ftyz+393FP0P/AEhoLzWo7cKNkW77u7HV35+YserVD4o8G6Xa/wBjS6fqUt/Cz/6essPl7WLfdGw5YDvird1pWhtfQzaxqv8AoNvEz3ESIxkkc8BAB93A/i7UqtSMFvc45p83M3e3QztS1LQ/EfiOXw/b2C7EvxLb6oiMfMiVNhiyOFBfnnkmultPB+sR2LWPhmG3tluW+e4a2Uur7f4Cfun+VZ2rQx3jvq2m6YujolvBA/3nO5F/1r57sNueMfLXs3wx8L6TH8EZfjF4t1h4Taa5/Yel2ESbnvJPJWeSbYccKGxmplVqJxSe/wDVi8PKKqO27PGb7wnqGg6XLeSQy3Dzu0GyDd57TdQ4P8RIrkrPXvG3xGvNY8Ta/wCGFi027R9J0aytZti6S0Sq4t3TgvHj5yzjl/8AvmvZtY+IetaT4gurXwDpsSadZyoyPcWCyPIV5DkkHyiX5GzHpzXqvwP/AGMNH1q+vPEXi7XvNfUYvtnlW6bri8d033ajg7nUvlcdfm6VaqKgkpPf8XfY1lSUWnc+f9D+HPwl8F6PBdWfgzxHqryS2kWpatdPDEulpO/llxGMxsX6RrJg/KxNd/pfhP4qfCXVLCH4K2EuqTXEr3SP9mS7e+VppYrb7bGhMfmJ8wRO3ynrXp/iT9jOa7uJPB/w90/VLabXLgJYQapbMi3VzArAQvk/un3s2WIOAu/FedaHB+01+znpsXhv4c+J7rTbG8
vYXuorVIp7eTUn/d+U+UPCnahQHZv6HfV4rE1qHNTqLTazdnps0vJrX59TZuVNOLV0vvLXw/8Ahn4k+CfxI8PftMftAeObfTba+sJ9UuLy/T7VqOoTMzRGye2I8xZN6/vFwBCkXVRjb618N/gj8A/+Ch3xauPHGseKtc8RXmm6tBLqVvqkPlo0KKpihiiz/qGKyLh+SGkH8VYX7fF5421DxRonhu6tntol8H6VPrdrFtcR6nOrPPE8mMsS8CsFz/dr3f8A4J9+CPGHhD4UtHN4W0S0l0zRLmRorxEj8yRtxsrt/wCJnRfMLDOT0+WvNlVjPki/dnLS+6s+700LnVpwqr2adnbdL8f67mV+1x+0V+zz8H/jlefDvU9B0jVZ7uVE1mXQU+wtp8aRb45bmcfJ+53KioOiNk7itXl/bO+L03i7RP2dbH4d2XiGK7dLPUrqwsGFhMsqKCkc53HzvLZVLvjJ7Zf5flT9pO4+E7at/Y+neJNU8Q3MN1bNf367Y/7WYrm5DyjlZMruBA2INqJytaGl/ta/GL4b6t4W8cXWmxWfhzSWFrZ+FVdfmtInQOxBIdpvLb5bl8b32nGN1dFSn7OnLkvKbW7um/L0W68ysRiebESrxT76PXS23kfYXwr8BeNvD/x0uvFHjfwvpunXMGlt/wAIba2uiIHuLa1R8WQcceRGG3KiHe78kr8q1558bvhP8UND/aGi8L/A34dJYw63YW1kkraCvltNJMz/AG6d8grAry5kO4mR12f3jXi8X7SU/wC2h+1h4cm0v4qP4ZsvDFhNdeHNO17Vd8cc1u/mm0QwFQsk0ar87k5eJvvDatW/GX7fv7ZHiLw34m1nVPH1u9hLarJa7vJt20NJG2hrf5N852fKUOQC3mf3VpTwTeLhiuR6aWvpsruz6b2NljFOnUptJxla2uz3vZ9G7223MC+m+KvjTUPiB4F1D4l3WseA/hdq1xPrmjaTqSm61S5jXOzzI03ygybtr/6mMq0a8pXo/wCy/wDsrazrGn3fx3/aS8JXEhsr+HTvDNha3KTWem28ETSSMYt2Plfpk4O6Q18c/A3WE1C28VeG/hzqVxo/ifXLWCwiht3lD69arNve1M4xGr73Z0jJzJuk5UKWrndT/be/a28C+AtV+DvgX4j654c0e6lvbN4LKFZBJFcN5U8shdCeQrdDnHAK7q96KctEkoPot1/wxwKTp2lBK13+R9X/ALXXhPw/+0N8dl+C3wT1jRvFOmyapb2cEXhnbdzXFyYvPuL7yoiSmwNtLnCINo+XdXjn7VH/AAUH+H9r4f8A+Gcfh7+zhE9z4cunsLCeW/h+x2MauwkWIoTL5+9WLbxjPGa8ovPh34I8BXWt6l+x58bbvQNC17QtN0y60vTvtGn6tdQldj20s3BiTzjJJNskQJHJGPm5rm/ix8I/hb8NdWh8J/DPxbF4h2RRf2lqlhbO1t9s3fvLe3k/5aopZR5o4J3AfdNdNTB4SgrN3uvRW+T37l137SSk5a+W23bv8yLQ9O8ZfEmSx+HtvD4V8NQzeIP7Z/tF3YvNeFFEayS8bAoThs4Dtnr81Ok8HaxJZ3kfxGtt1xqU7SrdXVzvXy92S2M9z8oY/Wn+IvB/h/wXoulXXijRNZl1SaKb7fp0syJDGg/1bh0JLZ+bKcY2rWHqmgab4m0P+0vAcOvT64Nv2LQ0sGdLiHbl5jPwjYP8B6Bc1nzw0jT2MoySdospfDbxh8MYfiE1r4s+G/8Awk+lW8si3Gl2V+9m0yKrBGjkQfIVPz88P0OBWv4+h+C8Xif/AIST4Z38HgnR5rALeaX4jv8A7S8dxtbLBxzvP3h/Anyjptp/w5+A0174Xhute1Kezu9Vl+0XVxFteVcDOxEHcjbhT9SKw7r9lnXrPxFMPiV4mit7C0ge8tfIfzZtxf5Ldx0Vz8vzAfStaeIpOpJc1rFxk2276HO+KPG1v4f+H76P4HvIrafWP9IndXZX2bsbnz3AXisjwLriyWq2WpPbu8m7Y9wnz89ef8eldJdfAnR9HtbbxJr3iG1d7+wjvBb2rvM1qZGYC1k/vTJt+cDgFlAJrOl+G+u614mvNF8J2H2Z7bS/tif2t+6abHBWMH7xHpWjxFCKai/NslygtU9TM8J2+qaL8RLPUPCt/LY28dhOmo39rMwf7LJxJCTxuR9qqR0IWvQ/+FieNtH8C+KNJ0H7ZYWHijQ/7N1my062WV9QsVdpESV9hMUals7gUwer/wANch4f/Z++LV9p83iSewunt/s+24ZdypHvOApf7qE9gevasDxJ8QNY+GPiCfS9QvNc0uKayFr5Tbl+0W/Qrxw656r0rX2rxVSKptSaXTc2vOclKK1X9XHfD/T4f+EzttLvoUtDfI89lO0yszbOsXPCuf73pXo/jrVPGnxBtpNB0m2tZIdA8EPasujWzw/braPl1lMR3Xkj/wDLRn4fauR8tcpa/CXS/HXhn/hMvD+t/bI2QvLcJ+6eNl++uD39sVj698QlvNB1G38O+Kr/AE270m1aKwvbCZoZZt6+XPbnGNoYcHsRR7J18SuW6cd7rbXcUE5yvF+T+9H0l/wRv+HWsftLfEzxt8IfDvxUvPCl5qPguC6+36Q7N9ohEzpOj7DnyxG/Y8mVQK+tvjtZ2v7EP7Pnjb9k79kTwfe+O9Pg1SzX4g6lf7obXT7uaH/RtPeC3O+WZY1jkaMFBsZRIcV8Kf8ABGz4h618Iv2mLjw/4a+J/h7wlfeKdBTS4NX8Q2b3FlbtDMtxEjxxkFs/MgYEBOpPSv1A/YN8eWvxq0H4z+DfjFDYW2vabrl/P/YnheZHM0N0zPJdRxgt+8Y7k812zsbA714+fKvh8dJYenzqye9vJ/KxeGnGliXGpFcrnFKWl4uUd+973Sfnbsfnr8D/ANkv9pX9rTxFo39oeAPFF5oT28UX9uaJolrFbqkfyBYnl2Q/KOr8jO7qa/RT4cf8Eb/Avw1hhtPBupaklzNBsuvEOs+KrhLtonVRLElvbBI0Qj5W2YyK818U+Kv2xPg/qXhXwn4A+IV1oVvFAz2/hyy0dS+CzJHbiIZRoY41UAYHO4nPJr6M0H9rjxNoPw6/s34vP4l8ReKof+QpqmnaJFYpb7kzGu98R4Hyr0JJ55rjrYvWPPPlldJRTtdt7N9/I8utXr0cZ7KUG4tNXTuuvZJrSy+Z698TdU1X4efCTS/CXj4/DLR/CnhpYm06WC3a2eERrgNDA+QuF69TXwZ/wXX+Mkdz+z38OvA+g6xFPbeI9bn1f7Qm399b21v5aOmONnmTrXDfH7xde6940tZ7jxPLqniPVLhYLWwvZmufJmkb91kyHDHPRMAetfLn/BQDxN8UJPiJoPw5+Jmq2U1z4P8AC8dhBBpzqYLVJH8woAgCq52rv2d1X+6K9TCYfEZnxDSq4ltyhZ6pWslpt1vY7curV3jINv3YRkt22r3s9W+55HJH5EzfxGntGGwzf7NWVDNJukT/AH/92i4tZGPkwzIylK/Td2eva5Daxr527zPu/N/+qtCFW+8ybs/wVVtYY
1/jz8n8VX4496/7S0noMWGNWuIZFfBMvyfjxX6l/sZ/BX4X+Jvhv8PNS8WeJH0iTxZ9pg0RJUWGG68mHMkMD8bpw+47QeUWTA+Wvy4h8z70fLD+Cvu79m+//wCF6fsd+EvBd1Z6jq954A17VJdNg0bTWe40NJP3n2tJAc/OjMOnFfKcX4aFfLFJtqzWyu9f0MMVB1MM4xvzXVrLz1/A9y+IniKH4Y2s3gP4jeJNU1G8klmRrCK23Wix7vLtnR5DscM68oMlOmM15940+O3hnVNQutF+zabpX9lWsdrewWtm4Wbyk2Jkggt8nzAjjPb5att8N/BfxOvpfEk37RV432tI2VtXs2e5VlVTujT7uQV3HAGS2RzXC3Xwe1K1+IWq6TZ6bFdzahAJ5dSuv3MMLhm+YF/uBwu4M/I3NmvziGW0aeC9th3a+7tZ3+Z5DwuLwNBVZPc6iHxlrljo6X1vpvlW+rWXkWt7Fc+Z5lyGU7yByoIVSCf9rmtDxNosPxQ8N2uh61oP2tby3WXyotrS2cqPw0nIKk7d6MOqcHivKvh2v7QMfxAi8C61DZNpF3YBHiiTzJreELvR4yGGwhGVf6V2K+Hbi10uTUIPGb3X9kTsvlPC+/7MTwTjhivdcAgdKvCQVGd5ys3t9/cjCunzuc9G9kR+HdJ8UfDPR5tH8O2FhqF1Zs72t+ttE0lq+/JSWNy3PzNgjHDc7hX0n+yL+0n8MfG/wL0H9nb9pLwBp0Xh67vNV8L6pa3EzPNJY3tqxLSwumPIDqqtLuOw7eP4q+cNTvrrwPa3Nw2pIb+53I1vsR4WZtuyLeMhHYdD07ZzXR+X4++OHgez8C6xDPFZyX4l0GVHa3iW8hRsq7xjPyozMVz0r6rJ8+q4TESdWN1a1+j1O+nXrRlK/otT82Pi18GfiZ+z/wDFDXfBjeCfEDWuj6zN9gvYrCW8i+yCZ/s8sk8QaJ90axuTnk7vu/dr6I/Z1utU+Knw51uP4nXPiufwfB/pn/EkuYrJNNv1TzN0k8+SsaD5xD0fzWOM19C6l8FfBfh+5s/7Y8fy6xrscVwtrbtcvBpV5sTAV4BJsba7Y3ODgrnFQfEDwj8Cvhf8MfFXhvxRNZSX11Fa6trlh4K0G6a2s18pgkJkwY0O9csxA2BV4w26uyrnGFxFNuK1vp17fp+J1urBRgm/3nbfbr81dnJ/EP4e/EbxJ8H9bX4a2fh7TbnxDp1vZeLUeZY1kglSBIorcSHzGu8stwssTIULMAfl216j8E/2afhf8J/D/hb4c/EDwok+hJq7Nf6lqlykCXV+6MHuryWP5mTHyDOE+Ve/zVgfs6yJ8VNMsLPVvD0sOn6pOlrpa3+m/uoY0byxKQR8xJ+fcOgat74jabrWm+G7r4L2r4dXW8utesLaJrmO2CMkUUc6Z8pAUbdHjptzivKxePnyvD6RlK+/S3W5yVMxTqe82mvue2jXbU67UvD/AId8L6wvxK+EWsP9mktZ2e/t9Nh/0y387YcgJt52qnyY+RfVmrnPHHiSfxAv9lybIkgvXv7W8fatxJKE2OvmD7sJ7Ie9Vf8AhbGpTaNZaTdeME160S1VJbiW2WCaxSPjY8cSLGgbqpGcjrzXGasbyPSk1JUlt4ZLqPff3W7y5nDfu/vjodv3c15GHq15QhTnLmf4vXfoevn+eYDMcHQw2Dpxi0kpNLWT03fa99D13wr4VuPDvhU3mp/C7+1NV1GwS68KJavEzTKq+XJNeAvsgJk2lWQkELnZmp9SutJ1DTbf/hKtVl0cxzzNZeINUuXlnVzE5eII4JVJJPMRjx/rfSud/wCGhrqOxuvAMkP9q3SQW8VxKiNBHIQuRk9WRDwoTn1xWTbtod5q114s1vTbe+vpbWVtS/tTzUit4ol4YID8rqfmVe525qqOV1pVZyxDTXMm7NvRdEvN2v5ng/V4QaqTs/Jfj8jsdF+A/gnwnoFh8aPiJ4hfRNVeKG/0HTdN03zZbxBKvl7JX2xMmV2knONrdq4ybw38evi54y13x18ZPiL4jdtKiltdcupbn9xpdpM+ZGJBwwCMv3D02jpzWpo/iS003wjF4k+Knja41TRNN1R9NW31G/8APlWxmiX/AEW2t3zsALLKzjAwrJ1214/4q8TaQs1j4buvGl1peg6pb2CXtgl40VvNchdk8oHAYEqv4bc19hiIUabhUaXLZWjpdXS38/xO6piqcaMYRaivx6b+f/BOj1z4Fw6H4htYfA9tb6xDqUUUukXWxkt7hNrEt5kuA2dnGD14H94yW/w1t/jB4F8RWM15BDLaaokdhbwXluLW1vFdBHly+Ufe33cYIVuflrJ0D4tfErxB48s/hD8CdVv9Sszam3sHlsIY57qSFXkOCd26NUT5cYz0x92rPxI+Et94P+KkPgv4lXMWt69eaXY3kUu9Q1xv3SeVII8KsincrdP4a4nhK+Ji50moq97dVr08uhpCc8Q1TpSs1b5vq79V5fid3/wR3XVNF+NnjDUtW8PWv9pQWvlS3ro0otZkuHjnWKToyPt3hx99FznFfWFr+3J8E/jN8RNdh8D3Lvqvw92SWV1eXKrBq1qVl82a2AJ80K6bl9mz02mvhTRP2iPiN8C/H/2zXNYuk8MeNYLzSNUSyREubeZd0kqxZIMUjhsCUggfMQM1wXhn4xeH7Hxpo/ij4V+FdL0uaSeayfwrao9w1nbeasAhkd+Wdl+Zc/fHz/7NYY2liJz5NeXls0rf0n1Vj38TKGIwqg+iP0qvvAXiT4xeFz8SviEE1DQdTt49V0iw0u/8ya1nWLAmyUwwP3CvI+6T/FXhfiT9nn4iftAeFNU+KXjjw2ltZ2f7/wANeLYLmKG8juImWM2t3H/x7y7XVomcY4XjnbXG+E/i58cvE19DpPwhm1m+htbd7X7PLNgeaFUO2DgMGRcbM4rPuvi02oaLpnw5Hgz7NqE2pfarxEm2KxCNPHM4kPkpufugzlc9a+fylU8LiqsK1NODiuRR93le131bs9W767ny9GOHpYmTUlypaK1tdtfO19T1L4Y/BnS/iB+zDeeILjwN/wATrS3TS7e6azltopLQTMTLnK+aUfdh+uWbs1eT/tsR/BHQfgjYeIdH8T6Xo8OsWCaTq9nPfrK11BHKsd7cGNwePIbYqR7PnVSNw3Vbh/ak17R/Dtn4P+K3iG/S2bVHRYn/AH1lYxNz8kgGGTK/m3FZH7cfjbVPiN+wvD8PtB8E/aP7Z1KJL/UYra0WGzgFxsSYSA/aWnkLRjYcp5e7r8tdfDVbGQzCvhcVOUaaiuRuzjJp9+9m9ra6l4ariKeKVaF48vX10v6u9j5m/Z/8zwv8O9V8YeFEWfTNSutWv4tZuHeORnF1ss7KNJOXVYV5fIyVXAbfx7LHrWo6t4dtrFrm9ubTTLd7xLVrlmXfIy+bKQWxvYtlnAye9aTfAvwprWnfDP8AZem1u807xB4q06a/8Ib4d1pqCac620sMjj/VOsksbFjwQ3BzmtDwX4Lh0PWNbvLy2uE0fwzqhs9Z810d7dg7ILWTYdju8kTDamRj
k8VGdU8U4uvGD5Zyd99UrWa6W1tpppY5qzxVSnJRjpJ3f+G118rai+D9al+DfxP8H/tJWdtLqE+iaj/Zt7pb/cktpLd3iYYOf3bsrjPHzNXYftfftEeKPjZ4Z0RPHGsXGm20enRaxaxWe5EvrN7dflcZ2rP9pdV6Hhcg5zXjXjrx9q2reIHtrjTre2s5G2RRNtRMt1RO6AHr/hXZ/ss6P8K/iB+0ZoPh39oa8ih0nR9LuLeCKW/3R3kqyq8cMZOOSFXaRxhW+7XHh8vwOLSr4qLXLFJSu3bdvTXo7fI74UadSgqtSPuqK1/P8DX+HfgfXNQ+H9nrS+Hk0uzTzZXlurlFnvGI+SWJAMyhfubn2YHQY5rxf9pbVPEHh/Q9Z8P6l4ntZra6sliukimVWhhZs7wP4h8vUV9mftyeMPhv4J0fSfGHhHx7p2gXdrus7XTdNRJbexs1dHT7QD98sV4X0r5OuvD+n/FrwXqfi3SYdI15bDTUnvYm0rH2dj8m7Z/rIAx6Lzg9PSvLy+UFj3XS9zma2bt2V+9uxx4GpXqVfaRXu3ttd+VvkcVZ63fW9vBoN9qTWC39rFA3yM7zB4shvM5CDCrljyK9K1D4gfE62+Ett8EdF1v7N4fu7eG8v7LTkTybidm37JCAC5Q/wk4FeaWuta1rV1pFxeaC/wBnht/s1xfy2DRtNtbAZ+MZX7u7j6Vqf8Iv4s0NYNXh1iXTbJ5WSWKdFfzH+VwwzymR6de9e1mdWEq0ox91W37+WhwY+cfbWtbz7+Rb8ba5pPwD1C2uvDM17ria1BcLLpF5YPbvbyxp2lP7uV1ZmfKdAuw81geD/CdrpvhvU9c8QeIUtNcsWjS10O9TBuoimTcDk8DdvOfwNdP468O/Hr4veF7PUrfwX4ZksIdUmuNLvdNs5pbm6e4X553GW+TG5d3Hpir958C/CPhm4h+IGu6kup63ZWRazluP3SNKF5YI/KkbtoB+laxeCwtPzl5avTa3Q1nRWGppvZq4zxF480Hw14ujj/Zv159R02+02G61y316z2SaHdrsf7KX/wCWvz/MHT78bbDz81Vde8d/2j4ivPiF4Js7qxhXRoLe/VJvKmmuZZXSRIP70KhVct1G71rAmt/Mtzo9xfxWsOm2qy3sqIobb/ecJzvb7241keJpm1rUV0H4czXCaJby+RpLX6fZpJol2l7iQPymXbCoeccn/ZqeGwsIWST9d9evyOeg6UKnNON0dn4o8cfERfEqQ+LvFU+qumhzaQiXU3O0N+4l39ZfJ3NsQ5HzNn+Gvev2afDfjz9q7TfE/gy08SwWep6RpdlP4etZbx4be3EZWI7EGRjHLMcuC2QcfKPnS1XVPHnxAs7jT/Kk+x2ZtlaV/kaQc8dsbv4vxr7H/Z5+Gfjr9nf4neAPiF4P1jw5fI2jXlv4q02z1JfMvI7m6TIB6TtAfLAA5A3H5hXiY5U6mIhKNtOl7LTfqr6aaGWNpUnVj7Nrz1PCPBP/AAkDfEe28D6pqXk3c+ry2sUv2x3haRX8v55Ac4Lrw/PG371ea/tValr3hFtduNS1jydYv9UNv8iKkv2gtjjZgK67clhj1r279oSbwD8Hfjl4w8u/l02TRvEdvPa2F1Z7Ht7abdLL8/T926/KRw4bj7tfLPi74ka1+2J+01N4kuNKs7bw3okqpavap5MLKz83UpOdsjll3Oe3FXlOXSr41ODtBavs9dk+hOV4CVbGvW0Yu7Z3fw9t/GXgv4c3PgvS9K86e604RS/b7aK4t42MqylnDgneNuQwIOWbJYVT8J+KtD0fwyngv7BdJqeneYk/lf6lX3qY8DBL5TdnJGPlPNe3/wBk+C9N0Gy0m8s7DUpoLrdL/Zszpb3kKfI7CcHLAn/lqMZHOFryDWvEmir4mtfDmn6xEqO7wK1xMipbkFsJJIcbfvNy/wCdViMzeK9pHV2bVvnuXj8asTXm90tF5GLrln4iOoPJa+HriG3vLIefKjqu113AY6nPzc9jWNb2+pafNbaxrE0sruqKqP8AMITu7/TrXpviL4b/ABE0W3t7zxN4M1SwikuDbW8723mwTEfxRSRlo5chlIVCSa9e+NHwe+Dvwl+EOnaXb+D7DVdW1eUul7qz+dc26GJd+zDAI+eifwda8yePo0IQi4t82itr63e2xx1nKhCMZR1exQ1D9nL4Z+Ef2X9C/aJXxDeyX93FBf6bYWsKKtxEZcFpSeIo8c469uvFcfJ8SPH3iT4f6Jo+h+IdRu/EviTxNcaa8U6If9GTYLeFHfL7N87fKeBuzmtj4f8Awq/4Wh4J0nQ59YvLqz8MNKkukWszO81uP3vko+NiYO35DkY3ONvy11fw+0/4K3HxA8W/Ey4v9UtLmW9SWDQ9G0eW8+x/Z3if7QX6IPMTdjIztweKl14OfNXTbW1o37de73uazo4ihCNWd1GWl/lqcn428M/E7wzY+GfDfxK8T2+iJ4f0m603Rvs+23eb52e5Yn5maSc7g0x4cquB81J+znql1pPjTwTo+oM2q2fh7WZX8L+Gdc2tYWL3PJYImP3hdstvyMqoH3jSeJvEHw98A/FiDx98XtSvdW03T73dErw/vtSd2ld5pUkGyIB2VPJGQK7bQfDvwt+MXxSm8WfBnXtLtrSwgN5FpbOkUm+N0AxFwdkjsoH8fzMRjatY1K9RThKm1z321Wist3ptbrcKbr0pKpf3tFby7fJHnXxw+HPjLwf441TTfDKTw2l/ulvFs0+yw3iwNK/mhAcsgDNnPX3puh6td3Hg+20/wzo9gupWVvby39xKn2gtEH3/AGgnP7oqPlaPH92vor4Z6HpXxk+KXh7x94o0S90nRL/VNS0O6Rdrw6ez7HRPnzuYurRs57soFaHxg/Yu8YeMfib4h0bw5pFhpNtpumqNN1Sd/Je+kXbLPZOUTDWjfu1V3/eRurcsOKdajgMWpVLvXq76J9/ndbnRVw7lJyaaUlf5PZnyl4d/aT+NHhK21nQvD3jW/Fpqc8st5B5zYuJGUpvPfIB4r134N/Fjw742+Kl58bPiYkF/NZokGr6XOiz262gtUjj8tDkr++XJfnHzf3mrwTSbqxsbu6mvJvtbnzYrranyZ3YOwj72T0YcY5rd8C6bHP4suZPC95ZzWyLs+y3r/Z2uh/ewfvAHt3qo8O4OjKMXZSV/x/JnnU6U7pH6pfDX9oLUJv2e9O+JuqTabYWE9m8+l6To0y3H2eCP+CQ4AWQd0HCdK808J/tIeF/F3g6DxlDNL4ceTWY9N1aLVnS3mtfNZnDB84WMjo3VCy8V8Vw6l4ovW0vRbfxPe3PhzRXmisrC1dzDCX/1r7Bt80b92O3aq2u+DdcvIZtemeWVbJ0e6aV2xGsh2I/fcSV21OZYWvmXJTlWceWNkrt9N9fPX8NjrqPFtqLTVujfkffPj79sL4dfDXX7m6l8YacLyx0uzaWL7Tibf5uyXsPkdGVlb+PqKyfjlq0154fuf2gvBGvWs2laLpyXcV/Bcqv2qJYszwuRnqm4c96/PfxF4w0/UIntfElnLfPbqfsss6O
7Ky9E6janrzx2FdHpPi34kaT8P4fhvo+sXUmi67ol5Zy6Napst5rme682N0Q8b1DbRjruwauOWYurU55vnVleL+HRLVeber3bLp1G2pW2e3zRr+IP2YbfUFuNa+Hem29/PqEVtHo2g6Xc/wCks9yykOIj8zcNgrwP484WvMvAv7NeqeLPE2q2OteDL+FklhR9Sgdf+JekcrCVyPuPk/IVPJC8c19EfDfS9c1TxJo3xA8GW0CeJLK8hiuItZuUjFrhdgd5CYpFTYsmVTkBa6/XvjR+zZ8CfDvh74Y/Fz4aJqviPzdUvte02LxIm7zHm8yJrjn97GybWjQ5wNvGa+5pUcDUoylJO6fRXvsrJfO79D6WjgqGJnzv5/5Hil9+zDodvJDJeXmlm3sUe18QyvfvDFeJEuXut7jNvndgIeAV6/NXFeOLz4gfEb/hGP2ffgz4AuLjRvDzbv8AhJp7Z/8Aiaeeu8+VkAeSXZS0oJyYscKvO54wX4nftBalreqeD/Ct1b+FtOiN5fxNvd7i3ZvuSPJgvHhdyjGflyf4a9D+CesadD4FtIfA9nFb39062SaTeb5isAZT+7y+LeEHoqHD9xmkqVKjUXtk03e11b+mNUKGFrvnVn08ztPh/wDsq+D/AIc/DvTGmDTa9c6zptvqV5K7xpI7zeajFMbtihW5GMDnmu78E/DbVvCHxj1Pwvoeq6Zr7+FtRhvrC/vb8xWDK77/ALPG6YE4Pfknsai8D/GD4geLtUstH0PwxoeoPdas2nX9reugmmhjXKf6w4ZBGvkq2R6fNup/jrw//wAIzJdzQ63e6bbW9/D9gsItqfY8y/I/lHlOdqqnQ0swnCplbVGKbvG0nbTl1t3ve2/Sxz1o8y6X8126fM5PXr7VPHnxe8ca9odmsKapqVzHey2t/LHI0wbYbe3ccQfvPvXCDLp8melbHg/4f6wvgO/uvHHgaz8nQbgPBaxXLW72bwhREyb3O4NO29c91zVLwfpM1jpGsXFrpuqW1y8p826a58mK43Msm6MhflDfMGTOR1GBtrK+JX7TXwpk8H3PhPxjJq+seMPEV7FP4Z8JeHkWWS482ZfLS5ByIoF+Ul3bGPavEoYPEZji5VcU7tPf18l0l189yJzo05rn3d7+emxxf7SXxo8I6Ppr658QdYnh0+4nSCK4i8l5b6YpktEes5SZVbzUxg9/lY15ddftWeIviQ1/Y6l8Y7LwT4RRp7fVtOtf3uuXCeVku45O99/DR4CBm5JrZ/4KF+DfBOtfFjTvDdx4Ms9L2ada6jq9ha3P2mKHUpN2+JJ8A+WQv+qcDI5AUba+Zfi58L/FVj4v+x+B9Ngjh1tElZ7fa6LJJu/cgDhAu3OwcgV10MuwMcQ2nzPS19ktNF59PkcTlKNZxS6aenY9N1b4tfsY/D3wLrvhvSfGeqeJPEk+ltBo32PTbi4tpJQnETylAEDSLy46BcE9a8903xl4L+J3hfWofF3gy8Wzs1sILe403VV8+Py3YzrKZAQ+6PaqsP8AVntiopPBPgX4ZeFdPg8SaH9q1+CzEGpT6SkrzahcFvnlEZyUz93A4Aq98M9QXxNeTaetsuk2CymD7E6Lub5VzFs6sfmXca7E3BNx1d+v6eROKxVRWTSXojjrvwT4fvtSmuNBtrzSdLt0E8FrdOlxJu3ZRndAAxyvXt2rpfE2tNceF7Dwzr2lNf8A226SJ7VJv9Im3chnxjrt57kda9E8TeD1tPDtxNb2cVzNwsUUrqm516Ln+Ef0rnfgn8O7h/ivpl98ZLD+yraHVHvYtRR/Pt2YxMn2fHBd3VvkXsdvFS5OfqcUa851PdILHwObHTZo9JtoozbqsVqkEO6GNtvK8n5dpb7o/wBrJqPx58N9B+Dup2fhfxFrcHiG/vrWG5tdSsvvyK/O7H8MYHAzyTxXtSr8NbT4hQabo9tqOlaD9vWK1+1TI1xbx95XeT5VfO5vnyB0Na82qfCa3/aAs/EXw7ttG0e31eVrfXPFE6JfvM+5kE1sg/d24KddmQXbIHy/Mq+IjShvtsKri5RpyUpK6fTc888Zfsk+PPGHgvR7XwvrdlNpssTT6zLFco0kIXa5WJMYnmjDZMSceo+WuL+Onwt0Hwn4ktvDfhHw3cWcOlyxy3qSu++R1ThgHy8QO7IU/wB7PpX11+1l8ZV8J6P4KHwPs9c1XUoVGqWF5ZW37rQ7II1vPd3EGz5ZJA8iDfjH1rxL4pTX/i7xlf6tqkzPcSp/pVw6bbi8ZVUpKT1cD7vsOOlJVqrpJdJa3NniJqitN1+up86XXgvVv7cgjurx20l/nla4h+eP/YL/AMX1rsfDt5b+IrGw0nwTps6T6pL9iisLBFXzkd1SNQT/AAv90gfn826vWPBP7OviTxp4dvPGWsPLptgdNb+y7q6hxDeSrMscilz/AKpBvXEhGzLL2p/xE+Cf7P8A8AbXwlY+H/jBZeIfFNz4ot1sNB0F2u4GsIFUuu9N0jTmdfL+cYJ6Js3VdLD+2mk2lbXV2/pk0aVSo+ZtLrq7bdvP9TxP4nfBGT4O6xNp+saJ9gvFQvFawTeaivu2fuxGSGJPy8f3f9mtvxvb6lr2j2Hw3uvCUUa+Hr9kur+V0ab7Q6ZeKUgkYBbgc819K/CW18J/F7WpdQ8WaT/xUmga27xPqWmoDZwT7pI4QiKI9+5mYSA5TpjFe3/FL9m/wb4N+FelTeJ9Ktb3VNbspbDRooLOL/RYyvzvwMeZj703XLZHpWkqWHc5vVRitW9zphHCVKzSbVldrZnw78P/ANhfTfip8P8ATfG1r8RfDlgkEt/L4jtfEOvRWg08W7ZjyeSyOm1wBz+FeIah9q0VWvNP017xru48i1l8n5d453ZP3QByRX0/o/7Pf7OPhr4jPpfijw3rLWcUqwLa3SNNulPyBY5yCW59fzr3nxZ+xv8As0pocV1oOm6ohVZftmlvN8kP7r/j4B6KR0PHPpXBRxNOpU5Gm132v5CWJw01GLSSXVX18n5n50W+oeILj7Zo+h2z21tNsaX/AE9yjbd3zSIDs3ruZhnOwtxXPfF74a+BfFXw/HhvVLOKaKK3R4NWV8zQsXVEZD1cktyvcV9WeE/2c/hDeagNBX4nW93dwxb9SsNLfa1wjbvuD0/hL9M+9dB4w/Yp+C/jrwlNYWfhKXRL8Z+ytYXju8bBMBpA5IX+HPH/AHzWmGxNHDYz2jbVnpbp5ip4zDwxGje/Q/KS3Xxj+zz8WtT8G3V4niaztrjypZdNd4odSVUVx5ZfJiyfkPf5WHSvUPi38D/A/jb+z/FX7JV1/wAJhqOt6XE3iXwrazJHeWN5LEzu6RSOv7uMqybiRvO0gV9G2P8AwTX/AGxvh9rHh7UfDfw00jWLTWIpbq3uItSSZI9zYK5Iykg288HFeT+H/wDgnb4o1z4zax4L+Mmt3ngrVdO1L/SltdNa8nkuH+cKPLmVIgFZcPn+KvpqmZUatb26lZxWttn5Naeuh6VSvGCdW6tbpqnfZ6a3ucf+xNN+0h+z/wCIfGFlqH7Lv9r2V7okNp4x0n
xt4QvIZV0wtK8ltFcFAbU3G9VcoJDIkcYxhc1+hP7Ov7Xf7CX7NetXWtfCX9lrVPBkfinSLWBb2Xxa0zq0rf8ALW2BKbLd/uck/NJkL900NF+Ov7S37PNla/CPwx+0fea7oljZpE914n0+C8v7F13ZzdQu3mog+7Exdz3evl/4geJLTx18Xtd+IH/CyNR1eLT5VlstU1mwaLUNeuCuAqRonlwSZ3HysABFz16eBVx2JzJzq14qFtFytu8el7pK79Hba7PPqY2rWjJQvB6dbx73V0lf5L8D7E+JX/BRC8t76/1b4Y6N9jjh1ddEXxRcJFJfXFxJC0ka29oTvw23cxweOOH4X0bRP2iv2fPEX7JbeINR8L+NLbxbb2brqlr4jttr32qvKw3k/c2blzx/q02pjPy18J/swfAn4kftK/HfQfAfjDWLDw9ZrrNrBfz3Vn5L29s0ymQ84LOUXaMHO9lOPlr63+O37WnwnfUNY8I6P8JbC3sNHvbnSfC7XSZmWONvKSYh+Xkba0m89Ny/3aWX4DC15xrzs+SWl/5kt/uZz4So6OI9o2042T2d+bbR9EluvTqeG/s+/GL9nXwt8Rrz4jfHS/vZtRhv91hF/ZTTI0xfY7RjkL5a8bn/ANrHNfKH7WnxE0v4pftFeLfGmhu50y71d105pd277MiqkfB5/havXNU+Htr4w1q5trfTZZ/IiubyW3tZljbYis7tJI/CIo6t+A5r5luEW8ke8V/9d86/w9ea+u4ewzWKq1m2+mvTrY9HLoyU6s09JHWRwtuDbEoa1Zm87ZgbPkT61LDHtVtv8XzbPrUzw+Xtik+bdt+WvrtWeuU1hYNtTZtP97+Kr9vH8pX5Av3vkqJrZWYMqfN/Sp7dVE3k5ZlalewBCssjLIqJz/eTbX1x/wAExfEXjTT/APhK4vAvjG907VbeKzltbOwd1fUCZXTyhs+8cqvyuMfN9a+UFj8u7WOP7p3fJX0T/wAEytU8Y6P+1poml+BfFUWlX+q/aIE8+FJba62xNObedJMboW8pskEONuR/FXJjadGphpKpta/npqb4eVKNS9Re7Z3PonxFb+Ol0+80HxB4PuorPWpbxorX7H8sNwm7z/KKD5XVv4Qeu3iq+sX2ueJGsNU8N/D3VLnTf+EeRLrRnv2/0zyU8s3CXBB5Z1yQRxtYfxV9PfDv9pTw5rUl/wCGZNB0i71yy1Fp7K9sLZrnTV81n34lGH3/AHvmdcH1rjvHnxY+It94Xt9F0tNBs3vGaweW1tlU2tmZcSOUQZUgNkKT13Z4r4+usuxSjGF3F620+8xx2Y4aVJU1ScotJq3non958uahqnjK3vtE8VaPqX9lHV9IivYrqDbKt1COA+OuPl27TzXeRzXUVtc6FY6CkHiC1s0n1KL5g947ffyT3w3GP72K3vFHwyjvviJqmuWfw68OWmjx3UsFvpCXLfZpJY+BNEnzMisV3hDwNzY7V0fh/wCHvhmGHTtehfSb/wASXXiC0tdIeyuXZ7MlPMMr5/jTbjbg87cmvl8woRhJRkmo3smuvlbe54Dw1OV1zW3e+tv61PPJvgT42+JWnS+KPC/jbUbGP7RClrpsULT3MzKuX8xwfkhX5c78+ld54g8M/FazHhzRfDPwx8nT7PzU17Um1Jba63vtBllBBCj7pC4+ccCnahqHjjwP4Zfxf9gsku0ndrCJbNpo7x/OZHV0Q5Y792efeun8UfErwuvw50/7H4z87xNrWqHz9Gv0aOLZHwfPL43Z24TB5DYp0q8o0tFFpJNp3V/89LO3kYwxdajVU4wVo2367afqeM6f8LfFXgvXNYktde03WUvbD/Rbq3+55wlTMrxz7XSRI2bGP9Yef4d1a3/CM33hHwrq2ra9pm/w9qt7BFrd7cTZ8w7mQRJkncRu56/wg1reDfgn8EfG3w8l1z48eIfEa6jYfuPDVvYXmwrNE7fvt/Jd8MqjfxhVyM14v8ZPG3/CL6wlj/wluqQvLLHptvZ6pCqpCjt5u+TqjDHzFxjndXHDFwxddwptX8lon28308i6VeLxKUbbJ+nl69NDuF8aaJY6DpuueFYdefSfNEHn3szTPazL32REFEY8rjGK53XvEnhX4S+IbCHQ/ibeRXWq3V9/aOqW8zPDpof5y0kaAyXCMGZCueBuzXP+CfAuqfELx4nh241iebVdS3oukaW6wwtsVSEHI5KcjvXYa5pfwb8HyWXh+++F39pa7b6i2y1v9SW2S4l3YKO5+SIqOAzg53c817nscPCUKteena39dTWLwqqOd+brtozB/Zh1LWvC3i5fidfXmm6rDqV6090urO6RXSH5NscR+RYcNgN1Hyn5jXbah8ZtDuNY1jwPHDcajZ6g7xfYNZtoti7GeRFiM+BbhR8qYAJ69a0vDfjj4VzaLqXg3xx8ErU6PeapYS2G2/drq41SDiO3MhKp5ZdtoCbEG3Jzu+XzrXtH1TxR4m1TWL65ii1CHVJG1GVnwzbn+5G5z5oUcb+5Wl7Khj8dGdKny2Vr91vt0+Q+XDYmm6kbaW8rW6f8HqdZN468AeA9Ftm1yHUUkuLAPol1a23zrOUbe7uADn+HP8e3n+Gqa6houtaP9jv/ABhaw28sV3eX+r2HmySX3moggspXIB2Z3ZUdH6lhSfEDwX4V1T4S2euWvj+zv9b1PUkbVNIlfzb/AEO2t5WBaBEbD/apmjQqVJ8vc+V2VJ8PfAej+LPA/jDxd8QPE+neHvD3hi1T7RqOr3mxmcuuGjiALz8fwAcnbyK+mwOX0KlerzytZPrZPTv62+ZM6NalUi07pq710SfT1Kj+EdH8aeA9W+2+M20rVYkhi8OaXLDuOpFrpEKA5O0t8xBOAEX1+74H4k+Hvxm1r4sQXXxUubJIWlmtdJl85D5zq/l7MJwo3sqhuh6DvXpPwbsdY8ZeMLjVdJvL19C0fV3tU1LW0VBdfedGdE4XMbxoIh0216X480lfFV1osfjazihvLOX5bBbZ4YreyVcRsmOHBk+c45B/3ttcsZU6tWF46qy8t9G/Nv8AAmrSnUpe15bJO3z7mz+z/wDD34Z+Dfip+zr4mtTf2b/FK11vTdbsLq53yaLfwrsSJMKNk8e2RD6lsj7te0ftCfBbWPFPiKH4d3Wqz3PxFsPFs+peH9O8L2EL/wDEqMXl/aLyUqNhKLtzLgmRcKGDbq84/a8+BPj7wP8AEz4UePIfEMV7t1SVrVJb9kKy7FwfkRe23M6DJPBr6u+Fvw18O/s/+EzqniT4lyz69r2J/EviO8uWE2qS7cjlwX2KPlRc9K++wWFwsKdaGIUIcrcZSbV9bNJeSevl956NJQpU41LWav8A8N6H5j/tbaP4K+M1jpviT4a+CX0Gw03TXt9Wtbi/d7u4vUbBu0BGInXawaE5JLNypXn0X9hP9l7/AISD4f6V8cNYs4ppk8V2FhZKn8Uy3SiWY+wC/KPXmuq/bo8F6L4V+MSeOPBs8S+HfGiw+bLLcojyagVbfMkB/eJHIir85ABdW/vc0/2E/ipJ8Df2hNV/Zm0/xJ/aHh/xH
dW9/oN/qSbEt7nZk4L8fKdyhu5Va+Hx6jhZOFOSkubR903v9x9FgK1GSWlrnVfD3S/Fn7N/g3X9c0XwTdQ69FFfRQS3CfaI7EXMvmW1w8b/AHw0e5Xxykir/A1fP8Oj+ItQ8ePoq759Y1G6We6Z3W0ZXKs+7siZHJbj2r65+JGual+158cvFlr4E1tfsnw78IS29lftNsF9dxS/60FGUNIwRkBwRhWyMNWZ4i8P+Pvjh8eNK8RfDHwfo0954j8IWlrda89mrWe6Nsm6MY/duUH7tgOTu/2a5Z4GpmOFU8LG9WCSitPhk27387X/AAPNx+BoxqqaXp6dz5gs/B+m+G/FieAfih4q1S1jkiVt2g3PmLbq0vCSxyYjdDG29X5Gdp6ZrrvhH4g0Hxdrn/Cgfip4e0iC70TxD/ouqX+mrFLfRBWe31B84XHyRtx8meUrU+DfwT8aeLPGniSbxBozeJ9T07dZ3GnXl5C32WFXVI7gSk/M6bNix7tmxpEPG2vTPiF+xnrXi7W2+LGjabFe3Bsv7N0m3utYbzo4Qu8RDru2uzMuc8MozjFediq1F4b2UHzTjeXKrJq1738lp+h51HGxoe0i/eVr2Pm74H2HxK+LH7cH9seIvHlnoSeDNDn07S31Kwe/t7V5UZ57LT4gYjmd0aZ5SwGFj4+bjsLHUvBOn6Pq3h7S0urT7fe/bLX5NhkndVzLOhkYMcK2cH+LNeg/Fz9nP44fD/4f2GteINVv4bDWZ7dfEqvNbyJql4ZVMCRGMmVgoVd7EDlcdK8z8Rfs6eOPFXjHxP4u8Gpdf2VpunJPZRXUP+k3GOPKIGArj5s+y181mOMqTUafPaCUbJbWW7fdtv8AyOXMMbOFONOnomv+AZdx8O9J1TXl1a98r7FYWpv5YvtifKAv33J/gLKz/oK2dU+JXgj4d/Hzwt8WPG15pvidjpcq65okWmy2DKgt2+zM6PjqrKW2ADK46Nuo0+PwZ4fji8QWOgztLLFCt59vv90ckkaf61EI+XB+bb0rhviRrHjL4qeNE8ReL4bVrSwWOLSbV7PZcQoE+dvM67CFXaO4/wCA1eFxUXFxbdkt1bS/Trq/0OnC42hhIcs3dJfddXsN+LXji3+OHiRNe025nu0kv90VrKjRxWrLCsewICQ/ruNdb8PfA/hPwr8P/Geq3V483iHVbexggs4pvJW3hSbI+dPvlj+QXvXm+hxyWuqRaXDo7WcG/wAq1upf3UKk/wAIA56d69A8F67pel3G66s7iayM6Lf6iyZG/dlFx/Cmema8/HYzE0KCoU/ga7Xfrdde44ZlUoTUVpBp+u1vv7nC+NNa+K2lrNpM/iq6aIr+9tWdTHIh59Mr83Oc10nwB8G+Ivj14gXw9rVz5Nnqt+6XGpWrqWs/LTJZA/GwDgseMt/s1y/jXxtpsPjyWPWNY+2rMjN58V5FMmA7Yg+TJyoXGCBz070/4f8Aj7xlY+IJPh34HezsINZ1GJbq/wDmS48kspMQzxFHj7wHL7ev8NdGX0ayouE4LXVP7tfluePGnUqqbs3bVN9PM9w+Knjb4V/D/wAB+Hvhr8G5rpG0VrhdSnih8pIyXzuST705Yry3A9K8suPFFxrcY8628y4be87un3u4wP4R/wDrqx401y61z4gXNn4d0q1ttAS922u3c890S6h3kkPqenA46Vl/EDVma31GPwfYPOtnYTXl1LFt+WGJlEjZ/hxvXiqw9FxblNtt97X/AA0+Ry1asqtNOT1dl8jstS1Twrr3hnwrNq01h9ptNXbTb2306z/fTWkcLS+dOEGXGfu55+ViT/DXhfjbWrfxJ8SrzxBcOird3rQW8G/aI4kbZGwBAK5HzHIzV611bxNp8kV1daxcRIm9vKWZUaNSuNpxzyG6d64nxBpOpXszzxwvK0jH5IuTz0H1rsj+7XO9Xt8j0Z4uLUUldpJN/Ky/A9x+Hd3Nq3iqHw/8NfD0uoyQWux7eytmctJIrJ79tzbq3bzWPEfwp+I0fwm+LXh6WFbdEWXUYHVxprlFKMcHKn5lww5FcF+z38VvF3wv0G2/4QW2vH1251stpd/prvFcebGqyvFcY4aHZuTnH+t4KnFcr4++OHizxZ8RNXj0nwlYXOt+Jr83V0z7nks5gzfuoyWwoU9c5JC4/wBquKGVVcRi7OPupN3ffp6I56GWVqmIWjta931fcs/tWaL4q8R6iI7/AMSS39xd+X9sv7i8eZ2RV+R3JJLAhcDNVNL8O+MfA/hHUfCfw78H3E1jptqt14q1S13GOOEbX/0hx8ihS3C56+pWmaf8SrvWvF1zoKzW6X9pYLa3+rfY1nWGWPd8sSc8M/yl+30rqPg/rHxC0/wjrPh/S/GeyzvPOs9U8PTorxX0UirJJMAR8zoejnlOo+9XcsQsBg44fa2/9drWOxYmODw/sL23u+77XN34b+PvBekeA9S8Wa9rzfbori2s4rBE+WFWZju9dg7sOD81fSP7BfwD/ZR+JHiRPFvxa0dn8T6jLFcaXZalN/orOpZ0QRDA3452vneO1fFfjTw/pvhtba+mhup0t7rc1qiMPM+bje/oK9Y/Zz1zS9D+K+mfEDWfiF/wjel6W7Xq3mzz5rjCPG8UERzvkKOy7nGAPevnMXSdOip0p8rlu/8AhjyHRhTUZwe57p+1J8RvAvwP+K2p6P8As8+Ifsmk6nbuniDwrYTKmn2sirzcJAQ0aO5ZfmTYfl4+9ur528Va5qGqfZNQuH86BbdGsn37LeNJGYI4GfmDMrfXvWLqHjTTviV4q8Sa14k1Xyrm8ulltYEtmj+0M1woSI7BiLEPJzgZ6c1Q1bwfqniLxFaeGbHSvIuZNSS1it0+ULK7qiKAfuj5qzy7DLC4eMJPmqPWUratt7v8vRFOpKuoKprZfr+fmereFfHXw7+Imn23g+9+MbeCNJ0SJoPsEtnNN9omdfMnlnuIyol3PuVFcfInyVb0DTf2h/hr8PPGHxS+F9/e6LouiI9v4jvLpEaLUHndRFEI5Ezny3VwvY89a4Dwn4b0/wAG+LLa71DUkNumpRpPqNnD9oeEed5ckqR9GcBWwD/s1q/Ha4+IngxvGXwpj8V39/ptt4j8pNLlmYrcPG++Bz/d+/kqOM/7tZqtBV2pLni7LfRNp2XS60vrtY1lVcnzVHeK0Sv5GV48s9e+IT3PxA8QeJ316O3nRdk94xlmQIpnukQ87FKqrcdaztah0+80d/GXhHw9e6VYyXS/YLO8vFmmjA/6aJjeMruU9cdea+hP+CeXwZ8CeE5rT4l/F3VNFbwx4t0TWPDsVve3WVt9QWWIAXAfIVH+ba/Qd/vCvGfHHh3WPE3xKfwboHlXKJqj6bolrBMqJI0bMDKXdgMttZt2cY24+9UyeHrUpUVZyTXqk/1vuZpJwvbW/wCh6x+zX+0LfeIPAP8AwzHrU373XfEtpqVvqiTbJLVllSSRSTwvCfLjne3P96vsHxd+0Bpvij4YfEjSvi98QrPR9HlnttD8Ja4qKt/i7iZA7jo5j+bJ64VieK/N/wCH/hfVLP4j2d0u
o3aq9tcxp/d3lCH9s4+te/fDr/AIOO/wBlnUvLXx98JviR4Xf5f3tqlvqUO72FtI8n5qK8alx9xgoWqY/EX8qtR/8At1z7WeQ5Lf8A3an/AOAR/wAj7ct/+CVH/BOdSDN+zZG2f73i7WQP0vBXkX7fv7Jv/BLz9iv9mXxB+0DrX7PamTSrUx6XpieL9VDanfS/JbwLvu+CXZehrE8If8F1v2AfFSpLL+1LBozP9yLxVolxYH/d/eoOlfDX/Bbb9v8A8L/tfeKPAfwV+DXxR0jxJ4Xsnm1XUrzQbwSQSTphIFfGRn52kH/XJa6KfHvFcnpmNf8A8G1P1kCyHJeuGp/+AR/yPAPhHJ49+N/iOe5bxNF4e0X7W4kksrATOmTu8qESg5Cj5QzliepJNfSfgf4JfAaGFNM8WeKfF9/Iel7HNZxO3/AFi2/pVH/gnr+zD4g+NGpWWg2qfYtNtrfzZ50T/UwBlBb6ktgZ7+y19k+OP2ZfCfwh0eFtS8SfDnwzfW0ssl5p3i3wlcamrWyMwR7jUx+5ty/7s4fAAbB20f668XRjZ5liP/B1T/5I1jkWSNX+q0//AACP+R4dc/8ABOOXWPCcnj74Ba63jmC1haa70DUImtr4qvLCFo2VJmA/gG1vT0rpf2IfD/7Afi34gaH4F/aO/Zvtrq21O4+yXuojxJq9tLaTPLsjLJHdKMK3yOCMjn0r2DQf2qNP0Hw7eeLtQ8By+DNY+H11b6b4o8K2sOE+zPL5dvfW5H8G/ajoScBldH29fm/9r6G31748aF8dvhzZ/Ybbxzp32y9soE2LHfwzLG7jtmXdG5x3Vj/FXHU4140nSbWZYhNXf8apuuj963zW6Z0f2Fw/OFvqtO/+CP8AkfrJ/wAORf8Agl3xj9mTOemzxprZ/wDb2nf8OQf+CXv/AEbF/wCXprf/AMm19C+AtJ0i88OaZrF88U+oNpdt9qlSX+MRKMYBrXvtc0TTIjC+pYb+7E+96qPG/GMoprMsRr/0+qf/ACRwf2Hk1/8Adaf/AIBH/I+H/wBrj/gj9/wTj+F37K/xL+I3gv8AZ5Ww1rQPh9rOp6Lef8Jdq8phuoLGaWKTZJdsj7XVTtZSpxggjivRP+CIP/KL34Y/9xr/ANPd/XUftx32s3n7G/xhmg8TXDQH4X+IA0EkEZBX+zp8jIHGema4z/ghymqD/gmj8OJLi4iNtnWBAnl4Zf8Aic32ec885r6fG5rmubeG1SeOxM6zji6aTnOU2l7GrouZu3yPMo4PCYPiaMaFKME6MvhSV/fh2PrJZTJqRiLf6uAfmzf/AGNWOVNVNOMcs1zdp1ecj/vjirVfmZ9OFFFFABRRRQAUUUUAMb7QTiMKR/t5pU8wp+8RQ3+zzXnPx9/ao+EP7OGnQS/ELX3a/vM/2dounQme8usf3I15x7nivEZv+CkXxQ1i487wR+yH4hmsyMrNqN4kbsvrsH3abUrbGsKFaorpH1udvbNJXyxpH/BS8aZcJb/Fb4E654dR3/4+ppv3f5soH617j8N/2hPhH8VrhLDwb40tbi9e2S4/s9n2TbGGeh6/hUOSWgpUasPiR2zKrfK/IqhJplrDAVaZUCszsz/3mbPNX6ozX1tfXBsIvnZPmbf93ioqVadJxUmlzOyv1fb8CYxbZ+Av/BNHxVeeB7SDxfp0EElzp3iiae1S4hEiGVbaEplTwTuxj3xX6Q/C39tb46+O43j0nQdL1CeCLz723+xqk9rEWxudA/zDPyhhwa/M/wD4J9w/aPCE8Py/Nr9x94ZH/HrFX1z8OdNtdW1bVfENwis022zi2f8APFOq/TdX03EtVxnlsLL/AHaO/wD18qnznh/hqdannEmtfrtRf+UaB7D8TP8Ago1+0d8PfGmlaHp37HPiHxBYfamTXLqw0S4idY+n7jG4M5+8M8Y4JWvqz4V/Fjwn8VvCNv4w8Pvf20ciD7RYavYPZ3drJ3SWKQB0cfke1fCclncaTHBNo+sX9q9vKn2drPUpojHt6ch88ba+6P2Tfh7qHhb4VWut+J7u6udV1z/Tbp7+5eYxo/3EG8nHycn3avmpSl7RJH2eMo0qSjON16s6e3vdPvpP9FmWZk++2zd0/hry79rb4LRfGz4fHQr+/wBRjsLZ1mOkaJbIs2qXI4iR5JMhEUtnjp8xP3a9yTQ9I8zdHbIrY/h9Kq6joErKH050Lrn/AFv/ANapUW4NTS+RzUqzp1FJH8+H7eHwx1L4X/EBfh3rWmxWd5pu9r21imZxG5bhd5xvbHzbq+atVs2V9uz/AHK+4v8AgsV4s/4S/wDbO8VRfY1gXSvs+nbU/ieOJSWP94kvXxTrEbbj6LWFClTw8PZwWiPRxmHhhpckVbRP0ur2+VzmbxWbdHiqNxGzfLGlad8uxvl+8KzLgM0jSNNt/wByto7nnvXY+Kbdv4v4u1aVmu793J91/l/4Cay7Vl2K3Vu1adr975a2vc5bH3j8FfFK+LvhP4b8QzTb5pdNWKf/AK7J8h/9BrqVVWXdHt214n+xr4ibUvhbf6C03zaXqm5V/upIqv8AlndXs1i0nBuNgB/udMVlUSUrFyvfXqWJLVZIYnXrzVqxZVVUY4Ufw1EyxtCrR/M2/wCem2wZWEfl8c/71Z7MlbmjuVm+VPl+7V7RdysF+7ncr1ShZfL+X5quaer/AHl+6HVqdikvePr39kXWtL1b4PppOsX7Q/2dqUsCsu5mVW/efcHb5q9DuLext5LmGz1ZHXZu3bGXzB7DrXAf8Ey7OPxZqHjDwT9g0uS5TTodStZb+2aaVQjNHIsSIRz8y9ele++LvgrceG/C6a5YzWqRGUPtuPmkb/ZxG52/jXsQr03ZPsiqlKajzfM4WNrGG1iW68pWTcz+bu+b64qK4uBNGzQw7E3q33+Pw/vU6SRmmdYEiO5yq/eQR/ic1btYbi8tRY6bZ3CvG+6VpZlUKT6cHiuhJ9Tlbb2M9bq+ZmVb+Lyx/G0Pp/hXl/7at7t/Zm8SWslykjyfY23IuQf9Ng/ir2K8a+gjeHUHlmYOPNfeijHuMZryT9uXwpb6d+zN4g1WwvUeNlsyyM+1hm8gHCkdK+k4PsuL8uf/AE/o/wDpyJ5mdJrJcTb/AJ9z/wDSWdF8Eb28h+B3g1FnjUN4X08BR1x9mjro7e4unuj533VT5P727+9muV+EGgvafBXwW8t+p87wnp0gcMjBN9rGVU4Oc89O1dXb6ppdnIn/ABNYhNLEir5sLt83bPB215ub2/tfEf8AXyf/AKUzpwTf1Okn/LH8kTx6srRrbyW0rSp/zyf5WX6Vq6H4d8RazcMuh6U0qfddGfbtzWLbst19omm1WIuibdqIwVct/uUydtQ8yZbW9iSFE3Ml1cuGb/c6bq8ySidalZ+8XvjJ+ydpfxy8D2/w/wDjVpUs2g3Os2FxceQ7B43S4Uo+RngPtJ44HP8ADX4ZftZeH7Hwz4wk0vw/dI8EGs31r5sH/LQRzOgfj7wIXr361+znxU/aGs/2Zfgv4n/aMvLBryHw3pz/ANjaQ94yfbr6RMJEfYBmc
/8AAR1r8tvhl4H+EPiDxF4h8YePL+11nwrpd1NaxRabN5ryajc7ZJFjHDfJuyGI+lXToVsRiVg017yUtFs3382lf7jmr1aS/fRT93e/p/wT6G/4JQ/GjwJ4B+BWt+G/GW+yt01dL2XUpYW+ywo8SjfLIBiAHbt3vhMr1zX2L8Nfi98FfHlvDr3gzxNo2v200TeVLpeqpKm0/wAQKH5q/Kr9nr4/XX7HPxgvNH1Ka6s9Nd3t57XUod3nWbtkJPFzwV5xjIP4ivctf8Tf8EZruG31S01+w8B3/mpK0vgjUrjTW555+zYjZfwric5U58krXWmvkdMakZxUujP0UvPFnw1khMmi6JdfaQv/AC1ddvH8WORisn7do0m+Fk2gYZH/ANodP96viLWf+Csv7OPwo1ODSvBfxM1T4jaOCkU/2rSHhlhTpujvtix3BX+44BPrmqet/wDBdH4C2KTPoPwA8Y3iRtttRcX+n28cwHc5mMkWfTbn6UOtQ1uPnqM+321DS43+0bMfeb5fu/7oHpVf93Lv8u8dWf7/APu/yr4WX/gu18Oba3hguP2V9Zkld/3rL4ktNijqNmSCzfUCn2//AAXM+Hl1JdQzfsx6lbQptazW38TwtIy9xIHwqn02Ej1NJYjCre/y1E3V6H3zotvo/wAra5qFxbj+CWJN/wClc/8AFD4m/A/4Vs2oeLPjNYWEIt/N2ajMtu2wNy5BPT8K+UfCf/BX74H+OJf7Ki+EXi+z1K4aFdNtd9vcfbPNZUCo8TttcFsbDyT9zNeTf8FN/wBnn9pj45eOvDnxI+FP7Othrdhc+HJLf+1/Dmy8maSWZSVuJZ3jLSJ5CqioMJukGctWFfEUUrwdm/JDUpbM+3fhz+0F+z/8atcfw38Lfjf4c1rVNjS/2bb6kv2lkXq/l9WA9a7tdCmsYXumZWdsN8u5uvvX5SfsJzaX+zn8bNNuvi0Lqy1uwum/tm1urOa3uNLV0ZP3okHzIOu9MgjpuHzV+omk65pupaXDrGg63FfWdwoe3uLKZXSQHowI4aujD8k6erV/ImVRp6Gw1rqNvCsnnJFlPufLn8j2ryD9rXwdZ6pY+GPFWoJ5radrLRO0T/6tJImy2OnVVU16Zax6tq9w1vb3KPK/y7ZfkRWHqT/Wud+Jmg6nrHg/VNH1RIty2ryqqPkxyouRj3/hpYvDU62GlFb2/IqliOSon5njeh+G/wDhG90ljMitdynZFz82OR/u5/vV3vxI0HT/ABb4H0X99bpcLOF3yuo/djdlTnuPvBa5rQZNQ1TTtBNvpv2lmYPEvysvC4PJ+9w3T1r1KPR9L8I+G7q+vPBkrxWuy82avcrD5lmVbKCMfedj0x16AV+d1qTjU5kfX0ZKVJR8zhNSWxj8VWd/bXlrPdHVPK+xRIysvypsbn727bw3+1Xoa6k0dqkd5sniSLbFvTa36elV7q30/wAP3iNfaxodvc2d1bXtm9qjyPIZPPQYfopH7vOeny5+61EesR3VvNJfaa9q6yndb7953Fm7/wARHdq97h5QqVZq+v8Awx5uazcIJogmvoyv2VfmRX3ovQsx/h/CpbfVvtkY0230qLzDK7LKm5pZvlxsJzhU79OKsW2uQw27R6lbRNbru/e/Zt7sT/D+dW9HurfSbCfUrd4heSrteJ9w8lP7gP8ANu9fU/VlfXoeJHEtaoa1joekw7Ly2iuXdN16yptWRtuAqcDbj9etY9xcabHtjks1Ur/zyTbuq/qOvQ3W+3vH3sv8WxvvD+HNUF1CxuFWST5Nq/L22/8A66bwxDxUn1JYdU021ka6is9wWIK3z8//AK6LvxBDdXz3Db2llQMj7927/ZIxjI+7TWbRfmbyWRzncqv8u7d/hUkc2m+U8iou9W2r8+75ah4Z21GsQr72JY9UuFZ7ia2t4WdgqI0P3fm+8P8A0E0seoXUMnlx38Uxbds81Pu/XNQR2NrcWoaSbY5+b7/y4/u4/rTo5rMRpp91DEUG5kl2LvyfU9WHp6VP1cr26e5I2tatZ/vpEVUZvkTftqxD4yvFvIrhofnG5d8T7tuF+7zmp/Dtv8OWuPtHiJNRYv8AK8Vq64z7Zq0vhXwrq2sGHw3quyF9/wBn+3+38J2cZqXRKVRPVFP7RFdP9v8AO3sn8Tv8qnrtoXxZZyb7S48PWtzFKjI9u7ugbcuDjB9O9T6joNxCx0y402137N3n/MPl3fex/FWBFpF1b3Cfa4fOjRt0qJuRdndc9V/Ck6MnohxqxvqzyXUv+Cb37Ctx82pfs5QSXFwztLe6trdxezNls7RJI5P8XHoK6H4Lfsd/s0/AHXLjxd8Jfgjo2lantO3VHh8yaGI8fu3kyVB7qOtemw2f2NVVkVoR80UTPvRfpxn863dU8RW+qaP9nXwNpAZcL9otbPy5eOtT7OpFWL9pBq5xOueE/AvxEsE0/wAbeHrB5oHL2GrW9t5VzC/TieP51A/u9Ky7Wx+JPglkttL1u18QWcfzRebefZryP/gYwHOP4sg+orsbWxaaZ7j7N9nT5mRIt2dtQs1vEzwskrF/4dn3vxPasp0G3djVZWOEv/H95pzNDrnwV1bUQr/6p9m3/vtD/Wvkf9uTRdL+KnjbwZq2pfDRdH8P3Wt26X9hFoKpeWcEm3zEE4c/bJtisy78Y3Lwe/pv7Rn/AAUm+GfgnxAP2cdT+F0pa21aa4vfFWmutzL5SIyIrwcHCv5mFjznvXX/AAb+B/hoeR4Vsfip43ubjWdGtvE0/wBq8NpDaQpLu8pi8sZEU427TGDvA8vcMMtc8YzlO0W7K2/UHVTjZnzP4y/4J72PhnUntPBHw9+Ln9sFWuLe68KwWqWkkUrv9mt2TUJgBIsflrI6AIXbI7qqW3/BM39qLxNo9pY/Fr48alpVtLmK/wDCG+a6MMWWwkipMsDPhuUMThT3YV+kWn+HbeHRbaxWGUrZQIjT3Vz5skz93Lnltxbd04+lSzeGbW6uIrrUkideG835X3N02v8AWuhQnuwXsXufJnwl/Y98P/s/6Cmn+FfA2syyN/x8azeWe+a4cf7YARB/spgelX9e8P8A+iyWd1pTpE/DLLZ/Mre390+9fZOm+IPC+i6RLpepeALO8Vsr5sGq3VvweeAOMDsK43UPAPhTXpJryTRFRH+ZYpZnf/gIJ+9j3pJ1JPYpxpJbnyn8NLW+0rwd8T0lhl3L4bVoFCYZl8q8xgfhXzrq3ji8sZkvEvH85MbEaHBXHrmvvNvAOl6N8XNa8Jx2gWG98PaYs8arndvnv0PHfgVY1n9if4a60wuW0SIZQs+1PmXuOODz2r67NnJcP5Yl/JV/9PTPgeG6dOfF2ea/8vKH/qPTPzY1L4hKzM0yWsu//n4Rc1iah4602T/XeG9Nk2/37Zc1+nmm/sJ/D3XkltbybS4IriJ4v9NSJ1Ze6/OOuOjV5d48/wCCM/7P+os9vo/xy8VaJqkafd062sb+wkJ6ZRx5y57qG+lfKe2lH/hj776onHRn52at4gt2jmt9Ls0tHmbb5sXVUPVB2w3euZ0e6aDXPtkmzcqs
rsibd3zc5xX2T4s/4Ix/HmxupofAnxU8L+JC+fsSXFncaV5jBv8AloZGmCDb/Fzzx/Fx5P48/wCCc/7bHw1nC618BLrUg+dkvhy/hv4W7bQ4KHcfdacbt2TJlRqRR+sH/BCzwBa+N/h94n0zUNBWVdV8J28ulxSy7GuvIneR0Q5BV/mjYNmu4/aN+Afxe8deIvij4F+Hfxp8Df2D8UYC0+keMkmmuNFtJk2XNrLYA7ZRv3NmNh5nypletfmj+w/+1V+07+zL4y0fwv460H4g+Gk0q4RdGvZfCt7CkL7sBDKYfLYfNt6kEcHivuP4xf8ABYzw74ojTTdK8B6Dca188V7rN1bNCJplb95hB8jNnrg4qaXMqzU1fr5fp/W5cW4w1PPvi5r2h6VoXxO+IuzUf7K8UWGneEPC8GswtFd3yW0UEct26H7uPKZz3HyjrXAftBfGKz8TN8ItNXUtNsJ/AaTas2mzzbIbx5LiKSOKUgg7MQcr33UuvWP7Q37R0z/E1vB2peI1g3LpdnZ2y29rH82dkYJAxn5jjJPrXg/i79jf9tzxF4svPEmvfC7xBJd3MrPceRprusYHRQI87VA4C1vUdC3Lb+rW/KxUKVe3NFaH6N+Bf+C2njdbeGPxj8E/CFxG6/NcaHqsto7MfQOCP1r1zwj/AMFjvgndrH/wkHwx8S2DP/rWsLyG5VT/AHeor8eP+Gbf2qNFjA1D4deI4F+7un0e6RfzMeKktfDPx20eMQ3FhOmPl+ZGT88iuP6vSXwtpet/zuaKliV0/U/ZT9oL/gpb+yz8TP2V/iV4K0jxT4htr7V/AGsWen2l3o7BXuJbGZI0ZxkAFmAz2zVz/gkz+21+z18G/wDgnN4P8JeNfixp1nq+g2msy3OizW7+cxbVLyaNEI4YuHXHH8VfjTDrfxMtrWWw1JJFhaJlmcSjG0jBX8q1PD3xI8deH9Ii0rSLadrWLcImjTK8sWOce5Nfd0qcl4Z1kpP/AHylvr/y5q+h87UpN8VU1Jf8uZf+lwP6Jfhp+2R8A9Z8L2bQ/HrwdNO9uHlgl1UROsjclfm+9y3pXouj/FXwtrMAl03xNoV5u/589dif+dfzZ2/x68ZQwr9ss5RhPn3w/e/Oujtf2itU0Py5Jr1Ed03eVb/ej4yM4wOfTt3r4HkrxS95Pys1+N2e+qCavY/pBi8QGVNy6XOw/wCmDpJ/I0/+37QD99Dcx/8AXW0ev55PDv7bnjjSLhI9L8c6vbJv+TyL+ZNv5PXoPh//AIKjfHDwzj7H8b/FSL/Dt1V32/hJmmvrP8q+/wD4Avq0e5+7kev6NKNi6lEP947f51PHqGnyf6u/gP8AuTLX4s6L/wAFmv2j9Ntdtp8a57ll+ZH1Gwt5lYe+QP510Vv/AMF6fjLo8ca64nhrVH2fP5vhvlv++JBUupVjvTf4Mn6s3omfsSrbl+XmuH/aJ+OXhr9nT4Paz8WfFboYdNti1vb5w1zOeI4l92bivy91D/g4l8SQ6d9ntf2edJu5zt3zxX81r8vt975q8x/aG/4KheL/ANtrVvDvh3UvBj+G9D0q/a8n0tdS+0pcS/KEbeQOFG7C45LZ/hFVGrJv4WvUulhbzXO7L+tD6P8Ag+3jf4k+Kr/4zfEGWXU/FvieffsZMraxH/VWsQ/hUBvuj+e6vd5/g94w02yttQ1Ow8T6lNPn/R/CtikiQkdQ8sjbMjbjAHXisb9hXR/tGmr4qtYbe4u7LRvtlnFO7bGmbaAxIBK43Nz2pv7QF18UPGXxC8VeFb7wT4gsdH0TSzP4NvPC81xDLJcMnmCWB4iI2cu33SHyetdEZSqJyX9a2OiNSUpWiUpPGGm6hoN/caBrba3plpKIvEOka5Zqlzp+W2b5I+UdM8Fh0PUV80/tGaw37Jvxe8P/ABY8C69PYaFqu6ewlSZsWNzBtLw5/ubfmHtuFe/JNNa/tBanF4sMTarD8HUT4gpAiNEuqtb/ALzzCOPMz5e7H8bVj+Pv2N5P2tvgPY6Xrni2XRIPD1495LdJpqztMZLdo/K5ICZ3ferHnhKMotdbet0mvmm7XOmDhKXLJ6Nfmvz7n178Pv2qND8a+DdO1W5tILS/vdOt7hbee+WMSLJEr70YrjHzdDXMfGK/+IF62meIfhJqc9tfwSlJbCC5WdbhTzvOOGxt5z2rzbSfAOpafp9no39gxfZ7GyigiaK5YFURFQLwfm+7Xrnww8GeHfBC/b9VubxL+X5XSKZWWND/AAcgHJ7mpxVCiqUXUinJarTZ2te/R2fzOHDc1Geiv39D8SP2JU1mT4aXkHh8v9rl8RvHF5S5b5oYFOB64Jr7R0XT7zwXpMGlskqNEn73fw2foefvV4Z/wQ1stF1D4waFaeIYg9m3iDUzMrLkHGlEjj6ge/pX64a/8Gv2c/EzMdV8OQvOybUl/ept9MH2r6biiNll0/8AqGj/AOnKp8b4eYijTnnEJXv9dqPT/rzQ8/I+Q/gH4FuPin8StK8JzIzW7y+bfuqf6uBOXb8vl/4FX6E/Y1h09LGzl8pFjCJtT+HoP0ry3wR8IPh18M7651T4ZXFjaXF5F5V1LNM7OyBs7ELfcya73TtV1iKBisdrPKzB2ihu0/4FyetfJ05y9q3JH2eLn7areOyNKwH2OL7Cd5dFOyWX/lp/tZH8qS/1KTSNNm1W/RfLt7d5Zfn5+Vc/0pxv5seeunSs391X7V5R+2X8U5vhp+y74/8AHFqkttc2nh65+yyunzLLInlpj/gTV0KyVkzLDUHicRGmvtNL72kfhV+1n8Rrz4pfGTxP8QL47H1jXru82fe2q0rbFz7JtrwnXvMaYrv612/je8kmun3PuVX21w+qbfMK+tRCLerPRzGsq+LnOOzenp0MK6Dbv9Z833W+SqF1CrM3l9v7lasyxoC38XNZk0bNIyx/w1ry2R5kkfDtlyqtWtYtuUNWLYt8o+etWzkXbt+9VPc53ue5/sY+Jv7N+IF54ckm2pqWmuyp/Czxtkfjhmr6esbqNm3f/W/4DXxH8G/EX/CL/EXRtY+fZFfosv8AuP8AIf8A0KvtSFoW2TLJtH+NZVEHQ2Le63b41SplkXzN0b4x96qFrNIzbl+X+HbU8VwvmFV2r8nyVm1cRowtjCL2q9YybX3Syct8tZ1rM0hVt+Vb71W7Vv32IvurTLWiue6fsb+PrzwD8ctLurPUpbRNVtZrC6eJ9h2SJ6/76LX2TY65++fztVcpNuZ4nTeu4evZfrX50+D9en8P65p+tW77ZbO9SVH/AN1t/wD7LX6A2NpqhtW8QafYM1tNErrKnynay55x2Nexg4050OZ7rQmpKV0WdRsbNk+azx5nz/73+feorjwrNHYm40/zbaNvv7OdvqtT2vi6zmuLa18QPdeVFuW4eyRXZh2z05re0f8A4V5qEyLpfiHVLZrl3SX7bbN+5G3hsjIwa6pT5Vb9DKNNT2Zx1jaalp10bqx1JpZNrN5q8n05J+9Xlf7eN/qV9+zB4le+1QSsq2SyB2DOzC9g4/2cda981zwfNptmbrR9Kvby2tv
luL+JGaP8CB/47Xzr+3DqDX37MniN7S2CwbbMvLLEwlkBvYNuARwnv1NfScHezqcXZc10r0f/AE5E8nPI1KeT4lP/AJ9z/wDSWa/wUt764+CHg+PTXIki8N6fIuY927/Ro8rz+ldTY6fHayS/Z9PR3ml3zytuyq9OvA/4DWb8Bba/T4H+CxJZzIknhLTiJNnBX7PHycdvSurWOXcFkeVlb+P5huz7YxmuHN4RWb4h/wB+f/pTNsE5fU6f+GP5IozW6Wbbrdkbau3YjqfMUdeOelU7iGaRvL/10mz5Iok/2uF4961NrRx7YdhYsVeXyfnZvrXBfHb4qQ/AX4S+JPi9DZ+fPoekPdaXBFDn7VcrxBEP7xaRlxXnRUVJO1zocrK58c/8FN/Eviz9rn9q3wh/wTa+COr+TbWV75njK/g4WF1VZLmViO8Yf5f9tYweGr1X9r74FeGPhfoehftJaL4e0221PwbLa2mrazLZp511ZbGt4ppezmKRllDH7m5qqf8ABOP4Raf4Z0XWPjtqnhtD4j8RxJHqnia4RvtGpXjSy3F5MmefLMkuwN38rj5FWu8/4KAavp/hb9jL4h3XiSwlvIb/AMJXdklvP8vmTXC+XGu/+Hll+apxNGpgZTnzJ1L3bW1+iXklp8jOlONWilbR7+fn/XQ/GH44eOP+Ey+JWrapY6lPc7r+ZmvbqZneRt/ztn+Ln8PSsvwlpXjTx3qFv4M8A+HtZ1nUndpIrDRrOW5nk+6CwSMdMtyTgDuaxk0rUb3U00rSoZbmbekEUUUfz3EhZYwo93dlA92r9tf2Af2I9B/Y7+EqafJZxXXi7W7WG48W6su3d5gTItYj/DDFuZQM8ncTyzV52GoOveT2OyTjTSij8tdH/wCCdn7dniyVf7N/Zn8S75F3pLqM1tapt/35Jjz7YrodC/4JUft/65o81z/woX7AIWXEWreILSGST3QIzhgPcj6V+zl215b2/mSy/JNt2rs/j/u/WqN5crJcBZEdl2Mv3MKorqWCoX0MnWe5+PV5/wAEtf2v7fxvpPgXWbHwhp1zq9kZ9LuL/wATMsF1Kv8ArLWNxCS06Bd7R46cqX2vi/4n/wCCS37anhPw5qPiq40vwndrbRf8eth4mLTyN0VI1kgUO7FlCgsPm6kV+sfizwL4N+IegyeH/Fmlu9nc/wDLWKZleORWzHLFInKSIdrLImCDXBajq2raPDD8J/iRfpca3Hrml/2Nrdwixpr1kLtAXKcBbmP5fNQdfldBhsBPB0dmJV5yPzG+CP7D37cN18RdNbS/gjrehSaRqltOmr688VnDZy2lwk6uHDsWIeJcbBgnuo5r7p/4bgk/Z3+F5/Z78UarZ3Wv6UtzqXiC4utKuLZ7f7VcS3M92flOyEvJJtOSO+flNfT0kK2ENwsSb3LMibfvMvt121+d3xH+MHw18Lf8FQ/i5qXxA8XXmi6hZNaWXgHVrexe4hsbqGxhWS3uUGd9rc+a0Tofkztb5ZAjDhx2Dovkp30b/I0pVXO9z55/bw/bEs/2kv2htK8feAdEW3OlaT/Zv221mY/bhvUxtnjcAN2G/wBqvof/AIJW/tqfDXwPr83wN/aSf7P4f1bVEl+1XXzJp7sux1f/AKZt8rZH3DzXm/7Xv7HXgzwl8HI/2v8Awv4ZTwq97r0dn4o+H9hsjt/DNxIViW3gG35nWTb5gzgh98e0cH5xhkjvrOz1SaZ/ImZkstSi+X5xyUB/vjqUPP4Vxzw3sYeyX9dTVWkfV/7THxQ1bwLql341+DfxL8ZQWt78SNWtPD0supusV5oMG9I5SN+XcT7VR3BDxc5y1fSf7BP7TPjT45eBdU0Xx9DFeX+jyxomrOixSTRSL92TZwxXb94DnvX556D4o0/xNNpugfF3xVdafawsuzxBZIs0qwL1Ty3yH/2V9a+1/wBkn9sz/gn58O9Jj+GPhXW/Eej/AGllfV9e8R2CJLI6L/rZfLwNmPmCxjA6da0wChh7KbsrW66+ZNRR5PdWp798M7Wx/tyLw203yaddNBL+53eW+5hG3+yn3Vr23Wvg3q3xS0m80Ge1+0x/aLSKKK6uW8uONWbDSHopD9MkHDY/2q858K+F/hnrPxCudY8I+M9O8U6LcRQX8WraXMrxzeZE2FD+oK81674ij8UN4Lfwjb6xa2tud1r5VhbMHb9006PIQc5HuOS1fK4uK99RV97fefVYB+7GUjyex0ez0PXJtDmRJLaLdGjxfOGSOVcOCOM4Vst/Wk8SR6lb61NDawypDDbo7K8OBhl7nFddovwr8Salq1tHp9hFdXN9YW9xKsXyCPD7HTGeuduWHatHxt8MVbxt/wATLWLWK2trWOJre9fd510rfe39PlO3v1rXhebjjLS6pmWdUm8JddGjh/Dvh28mk+2eILZrMFA9hazu3yof4z/tn9BUt8l8q7mhe5hL/umt5vm4bnOR0x0rpP8AhDfFl1IWWGfVLNPvS2syusw9sHK/lXP+JYbnTbp7Pe+myK+2WJtyfl14r9A32Pj2mlqYvmapJctb3dg+/e3yIjdD06VveH7XQ9sUetalbrL/AM8n42/7J61h/wBoajbXbyLcxPsi8jf5PzyBv4j7/wC1Uuk6lFbs9vrVhcMj7WSe3mUP+Ge3rTfPHVC916G1rUmi2cnmQpazxn5d0Uyn5jWR4gvfs14U2LEyONqK6u+NvbFUL7UbpV+wx3LtbfMsSNCvdshT/d96qfZb6X/RZpIm+fduZ2/Sk1JoLxLK3VmsZjVNr/e2fd3KG/3vlqU3FvDIrLc73b+H5l/Wsh41h/0W6sJf3fyxO6N+7G7Jb/61WrdWurby7e1nS5+ZopUmwrY9sUlz21EnG5o+ZGm1Fd8RRbtkrr1PXCelSyagY2/1/wAip/B8u4isv7HcNsjuXaZBFubynyd393ip9P8A7PSNI9S029SJ0/1sD48v69d36UmpFXS2NOPUL6aQ3X9pOqfLv2PyqnjaDVpdQuFmRPtMrKflRE+Yfj7Vi6Kt094sl1ePHbD7m/5gzCtUWdnB+803Uot33d6PvDfNyuMA/n9Kle67Madx11falb3QLQ+Ui7mV1Rl+T+9Uy61qDf6PInzK+9fvBdvdc1SvNU/s+Ly2SyuXVv8AWy708xtvcdMCqja9NfSDy7mBdifInzbvwz2qn3sM2YdakVVjmT7v91/9ris3x18RLHwj4O1fxdfI6xadp0sq73VdzhfkX82WmR3195e2Tytjp/z2/wBZXzz/AMFJvi7ceCf2f5NPj2mXWrhVi2fe2oPu8fw521jVcYxbsVFXdj4t+CvhvUv2lv2vLeGb98up+IEhd/8AphE3mSt+IVs/9da/X6bXLWwt5ba3RobeGJfKWJP9Xjpj+7jbX55/8EhfhTdQ+Ita+LFxZvPFpVuLGJtn3bifk/jhVxX3zqViI4X+x3kr+ZmNFR/nVh6oef8AZrDDRioOTW5pUbukjobHxBp/mIrQbQ33/wCL5T6CtaHxJ4PWz8m+8PM7n/lqt4y/TA5215/o+rNJby2t1DKk8SbH2QtsyF
9cdfX0NbU2sWckfl/ZvK8rZv8An/iP8Nb8sGrolTlE6e31LQ/Ldfs/yfKyfxVLHqGn3Vw8zW2UZhu3IvzfWuZhv/3e6Hanl7V+X5T+OfvVLa30f2VpFm/cl2ifZ825zzt4+7S5I3D2jOa1s2C/tI3H2Zj5a6Do3mHaVIIu9Qz68/SvTLNdL/1yorvsb52f734VxmreA9H1HxB/wlhnuVuJLFbSVoyAk0aOzx7gwJBQyS4K4z5rbt2E2uPgrRtjSm6mjVD8wkmXJGccYX86+tjLIcwyjC0MTiJUp0VNNKnzp805STvzR6Psfnjp8W5NxHj8VgcHCvTxEqck3W9m1yUowaadOXWLe+x2+m+H42Z4ftKTRSfMvzoGj+X7uerf3q1tH+Gun3k0jNrcVmm3a0rQ7tx2+g/9CrkdF+HXwpuIF/trV/EEUuxTJ9mgidRnvyAQPqKjvvhT4L+0M2jeILu4tx1ZgFYe5GyuZZXwvJ2+vS/8EP8A+WHorPuPUk/7Jp/+FS/+UnV3XwvvkvFWxeKWKP7t186rJ6r1qzpvw/1iGTzY7ZkVtyPLFMwVfm+93Cn+GuDT4Y+G04ub+8DDHyq6ck9APlPNXYfhH4Ufme51KPBUFPPjLHPcfJSeU8L3/wB+l/4If/ywpcQcef8AQop/+FS/+Unp0fhvxNfTMzbJotgVfnR149f7tVv+FT6Duhsdc8BrsXeYre6s0wrv1dCMbc7e1cRF8FPAbJvbUtU56YuIRj2PyZ/SrcHwC+H8yGZdc1IorKGAnjJGf+2dSsq4YvZY+f8A4If/AMsNVxFx4l/yKKf/AIVr/wCUnoWl+FdP02aIaFZPbeV8uxIVdVVVx3X5v7tbFj/YNlM0d1oM8jSf61rC5aB2Yt945+7Xmlv+z18KZo+NV155AwDKLmFVP0Yw1avP2avhJtH2TXtbjcgYjlu4mzk46+QtYyyXhWUtcdP/AMEP/wCWGy4m8QbaZRT/APCtf/KT6Y+G/wAZPB+i240/XP7etVTaqJdOlyG/2jgA/nXpek698JfFlq01q3h6bd95Lyziz/ukOB1r4Wi/Zn+HU7Mkes6uCOhN1EQT6f6qp1/ZW8BCVUfWtVIIyfLu4ifb/llWP+r3DEHeOYzX/cD/AO6j/wBZfEB/8yin/wCFa/8AlJ9M/tsfCH4K337IvxV15Ph34WkvbL4ca5PZ3cOnQGSGVLCdkdGUZDBgCCOhArxj/glZ+x/+zV8YP+Ce/gDxb8SPgjoGr6lfDVRdaldWQM82zVrxF3PkE4VVUeyivNPjH8A/h98P/hJ4p8d6HqWrve6J4bvtQs47yeJopJYbd5EDBYgSpZRkAg4zgjrUP7EXwO+Hf7Rn7Pfhz4sfETUtXtdT1Y3f2u30a4jigURXc0K7RJFIV+WNSSWPJPQcV9hTy3JKfANWl9cl7J4mm3P2Oql7KolHk59U02+bm0ta2t14jz7j18RwmsrhzqlJW+tLbnhd39j3srW876H1Z4g/4JQ/sK6zJtj+CEFuX++9nqU0O1f73XFczrn/AARV/Yn1jfFZ6Jrds7P8jWusO/1wHBFc+/7C37OUTKra94vlJIykOt2TE/7Ixanmtmx/4J8fsp3CxC/8c+NrNpDgia9tDtOcAcW3WvjnlvC3XMJ/+CH/APLD3f8AWLxFj/zKKf8A4VL/AOUnI6t/wQT/AGabxdul+P8AxRZ7v7/2dwv5pXG+Iv8Ag3u+GdxJs0f4/apb7vu/atHib89hFe/2P/BLX9mvUovNs/iF4tfnJC6naZx9PstXR/wSZ/Z6b5V8c+MST93Go2n/AMi0o5bwtfTMZ/8Agh//AC0P9ZfENLXJ6X/hUv8A5QfGHjL/AIN3fiBYxmbwT+0DpN2Q/wDx73+lTQ8f74dh+leYePv+CCv7b3h2H7V4bttD8RIsRbZpeq4f/viULu/Ov0cb/gk18AR8v/CaeMA23J/4mVof/bWiH/gk/wDs7ibZd+NPGgGM/JqNpk/TNrVLLeFk/wDkYz/8EP8A+WFf6y8e2/5EtL/wq/8AuJ+N/jj/AIJ9/tefC+Ro/GXwT8UWqjO5/wCx3mTj0eLcK5LR9J8UfD/XYZNe024tijfOt1C0TY+j4NfuY/8AwSI/Z0dfOtPHHjWRCMj/AImNnn6f8etRS/8ABI79nlcFfGPjbB6Y1OyP/trV/wBm8K2s8wn/AOCH/wDLCXxLxy3/AMiWn/4Vr/5QeDf8E1v2nNHstA0+J9Sge5s7fybiCR/9dAeOn8XH5Gvt/TPiB8K5tMitvCPjCTSzBC7wWKtthjyufuZA47c14zF/wSP/AGdy7K3jPxowGMY1WzBPGen2Q04/8Ek/2bthkPjDx2o/hP8AaFodw/C0rGGU8LQ/5mM7f9eH/wDLTF5/x3J3/semv+5tf/KDn9c0vw/4m8RXPhX4Z6AyDxBqnnavf3G37Rqk27eF/wBmMH5sD6mvqj4VeC/D3gTwRbeE4biKZwm+83JxJI3Xg9h0FfNc/wDwSc/Z7R9qeM/Gqhvus97a4A9T/ovp3FNl/wCCUH7PAb/R/Hfi87FJcNqVoNx7EH7L0rT+zeFW7rMJ/wDgh/8AywdXPuPqtrZRTX/c2v8A5QfVi6H4NtbkTLpunRThs7vsyg1dkt/D16uJfsr/ADbv4fmr5Hh/4JOfAQRF7rxX40zjgxalaYz+NrSn/gk9+zyse5vGXjTcPvL/AGjacH0P+i/rT/s3hR/8zCf/AIIf/wAsMf7Z4/vf+yaf/hWv/lJ8D/8ABBexttS+Ovh6yvIleN9f1bcrLkHGjsRx9RX7R/8ACE6PDbtDb22wOo3eV1bHrnNeB/BP/gn18FfgL8QbP4keGtY1/UdTt7eVbH+0r+J4oTIhRpAIYoyW2M64YlcOTjIUj2uSS4tW8vf8xz/G1YcSYvAYqphqeEk5xpUow5nHlu1Kctruy95dTp4Ly7OMBRxtbHwjTnXryq8kZ86inCnFJy5Y3d4vptYnufA+hTMklyjbguz5EVN3+0QPSqy/DnTfKK29/dI3999vT+7RDq2sKx8ufeg/glfhvxq3D4mvF2w3Ft5e35d7f41824U30PteetFaMpL4JvI28qz8SSt/tOn+FfIH/BZ/VvE/gb9k+Pw3J4j3Q6/4gt4JYld97RxK0pXB4x8q5r7Z/wCEgWP5Wh4Xcu/ZxkV+bv8AwX6+KT3U/gb4Z2smFhtbvVJ192KxJ+m6iNOCd0duW1KixXM9opvXyTt+Nj8sPE0jbn+b7zmuR1AeW25huzXS+IpmZmU9TXM3jbt2fvb6pIxqO7uZd39z5U/36zbjy1XJH/AK0bzcv7xnyrVm3jKy7g/FLc52+fQ+GLWZWYeWn+9Wpas38KVjWu1bor/DWrZu3ynHFUYXua9jJIsgEfDL9x/7rdmr7d+HuuN4w8B6Vryvn7TYIzbf7wXnP418P2eVYbq+qf2QdcfVvhrJpfnfPpd+8Wz/AGHXeKiTbjoJNNHrNrdSMqxsmP4m2VItwqsrK9Vfmik2s/zK7U5pFj+ZfmJrK2onsbNrcfxf3fvvV6xuI5JMq
cbqw7eaSH5JH3bkqzY3m2SORfloktSr2Ostdxj2/wAXy/o3+Ffd/wACdSuPE/wL8Ma9Z6whvFsvsr2rzZZmiZo+Af8Ad9cV8F6bdCRvL37f/r8V9U/sQ+Lre++Gd7ojWySzabqXmxO1yy+Wkif3Pd1avQwTspRv5lvlcdj27+z4ZoLldQS6S5T5YvkXZx/Sks49Q0G3iWNHthJhfN85ldf9rAPX3NRNrTXUe6RLeCJt29/tjfMv0I+X2qvqmqSTPNCut7jHuVEeFWMOVwP96u1TWzMHF9Da1b4xfEqOFbNdelmjjT5l3/g7EJ3x0rxT9uDWI5/2VfE1qiTFJRZMjMRgML63yvrXr2g+HfEWvafZw2+jym5ES7/ssO5tvdsJz9FOa8W/bhurtv2cvFWny2zokAs+JF2sD9ugxkH27V9LwWoU+L8vStd16P8A6cieXnqqPJcS3/z7n/6Szs/gdd7fgL4JLalvkHhTT1OR8iqbaMKhPrWzDrWoNCzw3nmiH5VVU4X6nPzVy3wTWa9+C/g0XmmJAIfCOngSJjEoNtHtZiePu/w1uQyaT50mNJS8T+KXzmVVKr6Dha8/OJWzfEf9fJ/+lM2wSvgqf+GP5IsXmn6xcW/2281iBbaHczSru+ZR/ueleEfEBvEX7XPjKz+BvhG5ceC7DUoLzxlrb7Uhuo7eXzfsNu5GJXkdVWR0yiR7kzvbcPd7e6hj1KGKSzlihhTd87/Kq9dgx2PvVv8AtzS5rhf3PlKdu53dcbfwB/LivPjJc6beh1KCtqMt/Bq6RYw/2Xc28R+5a26InlxoF4XjjA7YFeD/APBS/QZtT/Yc+JmjWtpeX10/h83UVrGi7z5TLIWHfKhWYYr3W4s/A900s2n6xcW8aL8jPvbc3fHFYmpWPhy8t9Q0fUtup2epWslvcQSo3+rkRg/J9Q3SprSdWDiupKjZn8+/hfxLB4L+Ill4wk814dL1u11L5PvNHBdxXBYf8ARsV/QPa+PP7ctYdY0uzf7Nfolzbv8AK/ySLkMSPWvw+/aq/Ze8Y/sx/FPU/hd4um2rpKo+jXnknZqWnPxBMhP3uBscdnVsjDJn7F/4JE/8FCNNj0vTv2RviwLqa/ecp4I8QXDqwWFU3/2bIWOd42yNE5GCnyE5Xnmws4q9NvfY0qRbSfY++FuLi6XzJHlHlOdr7GwzDnqR1FS282nw7mM0TQ/xP5OWzu/z1qbUo/7StfM/tie5YYX7K0LhY8/w5zhmp+n+GdYs5UuJLl4g8W1UR13bvcH/ANBrvjO2hg46DZLWaaNbpZolX7v+1+Qrgf2jPhjdeMPhjfXlnDavregxHV/C97qMLSiz1K2XzI5tgILYCspAILozJn5q9E8SDQ47e3tf7Sv2lT/j4+6Wz3YY+9+FO0+3stUb7H51vs+5cfaoX3sh46g/7XNTJqcWieTlZwPw7+IlhrvhW88TeJr+DSr3RW8rxHYfad6wzbFkDRk/ejlRleNu6NjqrLXx74s/Y28OftIftS/8Le8QjUdHt/E2uhZLKxtkZdQt4IfPnlZ2GVmfZDC2Dt8vphlBH07D4dutA0Gy8ZQ6DZapdeDJZfD3irSPJWZdU0qzlby5fL6tPBGy3MeQchpI+rqV6b4xf2Hp9n4B+IWjTWraJD4ohX7ZpzqsK219bvbJKMfwGZ4fb5qwq0VWxEarekenq1qaxahFpbs+L/8AgtF4S8QWXwti1DRb9YdK1jWYNW1aziT/AF15YxLEXz/d+yspPHJgWvPv+CRv7P8A8D/jZ4B+Kfhf4p6JFqkMy6bA9rLu3woyu/2iIg5ikD9JBhxtX+6K+tP+CiX7PrfFP9nO716O8Z28F3Umr3FvBNuSa0MMtvdo6YOT5ErOo7lVr8+/+CYP7YNl+xr8ermbx5pjX/hzXrOPQ/Evlbd8KxXTeXejP3lQtIzD/nm2R9zaeZuNPGNyW/6o0gm6Vky7+3B+wr4g/ZNkj8Tab4nTXfCusap9n0bUn/4+bfKPILe5A43gK2HGA/fnr4z8JvAXwd+IWtS6f8Tfj8ngiaN0+xPdeG5b+G477d8ZGx93QHr2r9If+C5Pjn4J2f7P+i+Hfh95txc3iPf3Xn3kUyRzPtis3jKfe+d2lH+7mvz5/YS/Zzb9p/8AaA0rwHqetXmlWH2e4v7+/s0XzIbeHaRguCFZnaMZIPG7vg1y8qq1nGmuun9epsrwWrP0/wD2D/2fdN+Dem2Wk6P8ZtN8ZaVqul2rWerabprW63UcW755H8xvNc7tu/j0Ir6qtdNjWNJLWG1TUI2dXieFmkmQ/IVd8gYAZSvf71eJ/BfwjovwTtNB+HPhu81LVIXaTzb+6dDNJM8qyb8IFC/xDAFfcnwV/Zt03xEw8QeML90tLnf5Fqr7ZdrqpHmHA/u9O1eBmdP/AGmavbb8kfQ5W+ajG5z/AOyb8OfEEfji58TWlrBALTbFdO8O9vs7L+8VD7lVG7r8tcn+1N4X1DwHJosrJBDNLdXL/vU3Ls+UhcP977vOa+4tF8D6R4b0VLfRbNEgt7fyk2J6dGOK+Zf+Cgnh+F/h/o/ia8RPk1kwO2zd99Gxg/w4215+Xr6rmlJtabfJ6fqbY2cMTgqnL0/Q+Z7fxlHM0y3ENq1zcp/o91YQtbTRkN90GLG4HuD1rL8QeMPFHiK1Wz1zXH1GGNNvm3Vsm+PHocCse52xum2Vtjb/AJ0+8re/17VVhjvPM3X1zseSUq0T/wALf+yj0av0dKClex8S3N6DFs5t0asiAsu396nf/P8AFRcR/u/ssmxR/teu3HHt7GtVNDuJP9IjvFcH/lqjt5e0fwnj5SP1pl0ultb7mhZH+6z+dwwPG7P9DWrqRsQoPqYywTXC/K8RX762/wB35RUtqzSQ7Vh3ys3yLv8AkwP1zTr6zazjVmvFMUyboonRdvPGefSmtAtm4j+2bNi/JLs+8p96XPFAqciVblpFeO/TKu275P7392olk23CSTI+1/m27OPvf+O0K0yx5+0o0Lff3vnp/CCKkkjjMJmuHaJ5YtqIvt60c0RcsrhPHJGpt1tvk+VnbpyP4quLb2+Fa6udlwkXzwJudG79R976Vk+dJHIkcLyujN953bZxx+XtVyxurOeQXEbsgVz9/jd23Ck2h2fQsRTR3Fu8JsIk3fNu7fe5wDVe4h0u332snmxjYvzfd57cc1YR12r5M6Kfu7G/i/P/ANCFTW/h3xBqDCHT7OWZlX+GFnGD1xTvELSehVhh0dZVvmR23/N/dbj+Idahkt7S6XzZEiZv9zB/E1vSeAdW/svzG8VaJHh2b7PLNiVW7KUIytZFxYxrus7jUonfYu99m9GP1BzUqUJbMpwmtyGztbO1iTb/AHzs7PGA3bFfnr/wVW+I1j4m+M2nfDXRX2QaagWVd+Vyvzu3+zyy/wDfNfoBqV5Hoek33iK+uYmhsbOWdtjqPur93B71+XsPhnVv2kf2tJreZJXW+1tbV3T5tqb8yt+HzLXFi9LQT3N6F3qz7s/Ye+Ea/D39mPQWuIbf7VrcX9pXEW/58ytlEcDuF217
EtvJI26KFLaQLtR4kw27b93/AOyqLTdLs7O1ttHsLDZDa26W8W1+FC8d/u521ca+VZGtr7T4pvlG2dEbcrfyb+9trrjCMYKBjKScm2LHqGpeXHJcXlwN/wBxorn0479vWtRrzVLWH/SrlXBTCXHkrsZj/CayY4Pl86Ybm+8kWxlVsdMCn6PeRX1yt9fIjJ5v+jpvZkjDdVx6+uelDpR6CUrmzb33iLUJmv762faXWKL5FZfu9zj5jnjnpTswreRWdvpqbE3LsSH5ZM9WJx/Xiq9xq118xhm+8/zKn3MDp1+9+VWbG+uG+0X01thSgWXbtG3t/wB9Y70uRR0ZSld2JtSuoZrcRzW1vCItystqjJuX/GobO4WS6ht7GzuId3+qfzt/3fUdePrTI5VuP3JTajfKmx87sdGz1qWO8treZJJLZXTa26L/AGj/ABflUuHM7FKRat9WmupF+0Xlxuh3b28naqr/AHuOGz70t5qH+medot40p3/8tUZdw9xj/gXFNXVlkV9ttErfd2q+xfven+NNkZfM3eT8+5t3ybmb/ZpqFhc5oWci3f8ApU2pWtszLt8p38sKwVc467vxrT0H7Re3L6et+8IR1fzZUwrMeCpcKR/hXP295PqFwzRp/rP9hdy5/hx7it/TTskHk6q1mjKGSJuNzjqoB5bNLkKjK7NfVNHEMw+z63YbfKZtqTMenHXjj5eada31vJa7badn2L83zrsz6cVmtJptxMtxlXuWl+d/lX86m3Qs22N9q7tjRRdJM9GoSsi3KJ0dvdalb2rXQm81GX97Ajq23/gZ+9Vqz1C1upjDHqeVC/uvkXOD68ndXP2N863CSLeOu77jbM7R0C8VrQ3kLebu+dkXau/avTjg47U+VvYpT0N21XyVit7d+X3f61Cir65NTjVmhZrZU3t/so33R/EM1kQ3UyqtrHeYT7n3/l/3iOeK2/Do/slvK1Kzs7tJnbazzNvh/wBpHQ9vcVnJtK7RpH3mjC+P98Z/2ZviKz+WceBNWBRztIJtJeQe9dX/AMEfPBvhjU/2FPAGsT3MIvC2qrIjOoJxqd4ACNwJ4x2rZ/aS8H/C6L9j/wCJWq6BrmmS3Z+G+tO0bXQ8zeLGZgAOpPpkV5j/AME0bPwov7BfgPU3uE0/Uom1Mm8hc73/AOJpdgF15DYGB06AV9PCp7Xw0rcrt/tlP/0zVPMceXimGv8Ay4n/AOnIH25q0nh/w2z3H9jxG4VP47b72PfH9a8l+JnxEt76+T7LbPaSr/zy3L/Kub8RePvG2pWrWs2vebDu2+bZX+FkA9eR/wB84FcxHrULK3+jXCKj/PvmR91fHUKElaUlf53PeqVVayOms/H2qWZElvcxS/P8jsmWVfqOa6TSfjt4ot18uTUtm5vvxTP/ACNeVNqzQs6LbSgK5fYibTu9uyj/AGfWpdP1ayaYyXD3G1Nu9UhGW+blev8AWtpYem9XEwVRrZnvnh/44atcfu28STl0/hltlcbfrwf5133h/wCKmk30KLqF/bw4/j2OE/MjFeB+FdU+Gt5G0d54Sv2EX+tvLLUmYxsfVD6fjXY+Gdc+Gmis8fhv4otDMn3LLWcrE3rl8VxTpxXwp6dzdcrWrPeNO8QMsf2ixmWaF/7j5Vs+h9akXxhps1wYfOQFfvpL8v6V4ZqXxe8TaXCsnh3R9NaEf624sLlJlXtuwMf99YrE1z40NqV4k3iCzleZUGy4tbNFO0fwk5+bP6URo1ZbImUYR1bPp+HWlm/fR+U6bN3yc1OmpRttjr5n0f40eSqroOpXUyL/AMfFve/I656L689mBru/C/7RHhmX93rUN1Dt+9K83nL/AENJ06sN0ZWg9meutcWckQim3/8AfFNVdNaTzG2M23b8yfM1Ymg+LvC/ii1a40PVYrnZ/Au5W3f3cHmtGaQfNIyfIPmX8am4JWLccel/O/yc8Oie1OW201tp3pt2/wAVUdq7fMjn+b5mpsn3t7P/AHfk+nrQLlLzaLZzE+Ym3/aXhv0qOTQI2j2Lctjn5W2tSQ7t26OZv9v/AGVpVW83NHHNx97/AGqAvK5C3heHaI2uX4++nymo5NDmWMKs0TnbtffuzirM02ox/KiJxu+/1po1KbH+kL82z+dA+aRBHpd9DJhbnKcbYP4c+1fjD/wWS+LEfxB/a/17T7ebfbeHoIdIi/3o1zJ/4+//AI7X7N6zrFnoek3evXG2GOygkuJXb7qhFZj/AOg1/O1+0J44u/iD8Ttd8ZXTs8uq6tc3jM3/AE0lZx+jLVLZs9DCNwoVKj62X36/ojy3WJg3ytWBecL833l/hrX1aZgzNs+9WDqEnyqzfLU+ZhKWpnX0jZ2h6zL6Y7v4G/3Ks3V1tY7o/wDZWsu5ulaPadq/3Kn7RhKR8Rjd9oXb/frZs4WHzbKybnUVuPL8mNE2vW5pcjPGKLXRjexehjk2qrJXtn7HfiGbTfGV/wCHZH+XUbDeif7cTZ/k1eQWu1vnNdd8Jdcbwz8RNH1jfsVL1I5f9x/kP/oVJb2Ha7Pru6kAKTN95vvf7wqd5oZI0kjTIf79Zl1N5cyQyP13fJ/s0+O68mRIo/m2fNWa90V7GnHI0kzLv3Ls+lSbmDfK/wA61VjuVjuN0mzay7qe1wqyRzK/31oldvQZu6XqXlyQySPj+Gvo7/gnz4q0u1+I2veFdVtvN+36a7WsWxW/eo6yDGfu8M3zV8v2jeY0SL08/wDnXXfDX4wa98F/H0fxI8O6Va6jdafYTT29hef6m4ZUZCj8js+7qOVWtqMuWepUJcrufoRqiyWcz3EwieA7mlgeFSsOPTnCgdyaxZG0+bUJLqSawcSxI1h/pibdvQ4Gev8ASvkf9hv9tj9oP9r79sLRPgV401jwfpHhvxba6pa39l9jaVIVW0eSNvtPyxr86rlOcjdz8tYnwJ8bSaPr2peG/HGg2upah4a1SfS7/wCy3ieVJLBK0e+M+WweP5cq3cba7Y1IRqck97X/ABsN80kmlufevh/XPHngeb+3PDv22wZ4tjXSoxikU/w5T734V5B+3J44m1D9mXxRpl5paXE97cWrz30kYSSJ1vYDxu+ZgwGMds5rPi/bo0PwfYv8PtB0q8bUliR/3WyKG3Zl+4QjruKnuB0968n/AGjvG3jL4m/DfXfEMOv6pcafbpbnV3v5GIlm+0Qqsa5Z8qu5T1Hb+7X0vBcoz40y5pf8v6Ov/cSJ5ueqX9h4nX/l3P8A9JZ9H/APw7/wkfwL8ICwv4mdPC1huhUgPxbR5BHqD0rTvtD1jTdlrJo8rsU3NLA7Jx/e5ryL9l39p7wO2maT8IPEH/Eh1Ow0bTbeK5vV3x3Qa0heNkZfuMwZSA1ex3+saxDdbdUv2Q/Kyo6MrSL06muPPYTjm1dtaOc//SmaZfODwdNJfZjf7kaWn3Emk2MF5Y2FxqNxMh+1faE3bV/u845p+n2ureMJXtdH8K2pvijNbxb1WRkO75euMfjWPfNrmqRie1jlaNUCo3VZAOuc/pUf9jxyTf2xrVy6OkSrEyvs8vHAbjn615TlI7U
oaXWhfvPgn8UGM/8Ab3g+/sEhT5UW2YxZ68kZ4NZN1a6x4DuEOraPZ6rDI4Z4ndijf7LhCGUirekeKPjFpch0/QfiLqMljuLP/pju2P7u854rI1b/AISa6ke81C882Z9zs2/72f4jjjIrKDrXu7fIuoqG0b/M8P8A21v2S/Cf7X3g57nUraKy8S6PFI3hXUd/MLNz9lkD8vA5ChgTx8rjBVWr8g/Fnh/xl8PfFVy2qWF54c1TSr9orqz3tDd2NzG3Z0wVcH5ldDyNrg4YV+8kmjx6lC8lukqbGVG3ws3PTqfX72ea+cf2+v8Agly37V274s/D3VYNH8V6XYfZ3a4RfsGtKjZjiuHGWR1+YLImdgZshhxSlCS1RkkcV/wT7/4KdaX8UPC1h8K/i94rls/GNjZmCLW79EhttYjVuGDg7VuAGXchxvO4qMZx9g6H8QobG8TUvEVna3S+Uf3Tb4/OBXhsp93HUYr8Lfi/8Gvif8DfETeGPj58LtS8NzSPtgi1SzU21wRz+6lG6Gf2w2e+Frvfgn+3X+018D7SG28J/Fme50m34/4RrXoVvLVlP8ID4lXjjhwB6VcKzXxEOLP2Q1Txlc3GoN/YvhiKwtplRUbzmfy89fnIyxNR3GrS2Nmn9oTS+XF/zwTn/e9fwr4x/ZV/4K2P8WPiFp/wz+LXw3s9JvtYv4bPTr/QdS3WqyNwiTpKFdSx+VTHv564FfXl14i1a1mCtom/Y7Lt87c/umDW0JqUboSWuiMeHVLzwb8W1vI9V8228Z2uz/U/6vULZM7frJbdG/6YVmWul+HdDbW/2d/Gltcf8Iz4q0m5fwv9iT57NJObm0QnOwxu3nQY5CMwAxFVvxpE3jTw/c6XpcyWmpQSi60O4bcTHeRtvjfj34I4yjMD96ud17xFefFz4fw6x4d059P1i1n+0We/du0nU4GZHiOOwfzI2/vxyt2an7TTb/hg5dbl7wX8XNQ8beA/F/wh+IH2UeLtH0GfS/ENrs+S+hmt3S31KJON0M4XdkZCSLNHnKGvzu/bX/YnbQvEPhv44fCHyrTwp46utJg1KC1Ty10O8u/IgdwP+eLuWcf3JNw+YNx91ah4aHx+0PQPid4N3+GfGOlJJFpt/s837G7NsuLG7jBAntmdNrxE9VV1KsqOvA+Cfj9p3hv4If8ACFfF/wAINoWrWem3lhpd7qNsDpurXMEzQL9nuMtGpabaqQysk3t8pauTES9yUn0V18jfDUuevGn3aX4o+Jf21vh7Db+O7Dwv4T8SRTabpujQrb6XvZbe3UO6RIiA4RljTJ4/5a4r6h/4JX618JfDvja/s4f2f7fwrqd94Xis01y11h7u0vpomYmIiTlJnDbyR98r6LXyh8XPHifED4lahr1jYeTCjJClv8m6MQRLEclOGJkWRmPOS2am8OfGbxlodnBp+i+JL2zhtp/tUUVvMqqs5XHmgY64+WvlKeZ4uhUTep+7T4E4dzHCXpS5Jtd7pO3brqfr74bvhdfEjwz9qRvKj1lN7pD8u75cKX6ZP930r9M9Ft4bfyprqa3Xzbj5d6feCjnHthl+av5q/D//AAUW/aO0fw9LoLeOftcjz+emqXFsnnq6riNuAFbZ/C2OvXNd14d/4LLft96DGiw/tM63Ls+ZXntrR9rdP+eY7VhmOZRrTbhTbueRgPDnNqdFx9pBWel29fPY/pTms9+np9uuUzt+RV+X5ezf7VeI/t2eEY5P2Ybr7PbebJY6tZz/AD/Kyjfg8++6vxT0f/gv1/wUP0iGOKL49ecIU2xfatEt5WX8eK9U8Kf8FR/+CiX7SHgGTRPiX8XfD9h4Y1JEWV7jwwi3N1Grbw0YRxsGf4j19K5aWJc5xkqTurdns15nm5jwTjsrwzqYrEU4w829dNrKOrPadSt1aKZbWz3IEZWVkVduV5bPt94VkWk1va2qSW81wskWd8rpvfPfHUbcV8qfFj/grD8B/grcT+HtP1zW/H2tw7o7q30R7W1s4ZBu+WS4YFcg/LsQuR6VzXh//gof+0B4+8A23xT8N/DX4eeEtDvb2SCyn8Q+Kb26nuHRsHYkdqOMq3zkDPbivtKeY80b8jt8v8z8xrYaFOo1GSl6Xt+Nj7Us9UtUs2t/9S29mWVfvSHtn6fyouprrVnfzNSd3MrK6K6xj/ez/FXxrof7Z37ZPiLUra00z4XfDq8hlZFs9Wl8Ry29neSsWAgieWESPJ6jy8D+/VnU/wBuz9qDQfB+reO/Fn7MGjW9l4fiNxqkUXiFHvbWFHZDcG2G4+TlW+bOcc4rX6/DsZezPrm6W4mmZprN1xEqvs2ZYlf4yOMkU2azsWt4bqOFPkbcsSP37rgj8/0r5i0X9ub4xa5p+m31x+zGu7WbKK606wl8VWltc3Eb/cbyLiRHXPYGnaN+3p8RtZ8Qax4Xj/Y/8YSX/hvZF4ggtb+xX+zXdVdFkd5hHlkZWwGJw1UsdSff5i9nY+nl8N6TrEMt5b3cVpIz/Lv3Z45246f8CqhJp7RR/Z7LUvNEibkbYuxvl9+9fPfjH/gpP4M+GGnwt8VP2fPiD4Y/tPLaNLf3li9vM6L+8ZPKmffx71i6X/wVo/ZljU/adN8Rwl/vP/Zquq/TDEtxR9dp36/d/kDhGx9Pabp9xHCI5Ztsxw0Tu/yf98VoWOpXmj+bIuq+VH9//SrNHTdtwFP9eK+ctH/4Kqfsi6zcNFceLNR0qN/v3Go6PNFEv1cg7B+len6V+0h8E/FPhtPFmgeOYNb0NVZpbzSf9Kjh7/P5GXTnqzge5pyx1CEW5OyNaGDr4mqoUlzSfRHam8uNWVmms4Gdf+eXyblPoOnNX9D1zxh4dvnXQdae2dH8rzbW82rt9uze7V5zof7S37ON9KJIfixpccLvuR9jK3P8XPar0nxm+B+oXRktfiRpciNhXf7Z/rPwPv8Aw1zrNcBLT2kfm0erLhnPqbvLC1F/26/8jt/EHjLx1q0f9m+ItYg1RonbY91td19dj4B61jW80iSeTcb4jx9373/6qybf4kfC91ENv480NZX2sjfaU+X6Y4pzeJ/B9vC19J48sH2vv3reI6bR13g+3StaeNwiWlRfectTJc1jpKjL5p/5HmX7Z3xq0n4e/Ce/8O3muWsep38S+VavcqrtCOeD7n5fevC/+CVXw/uJvihd/Fq6nguYdLtZUbzU3D7TP99n7ZxyP96uQ/4KKXHgn43ePrS48J+J90yslvvtX3x+VGrfOU6bi7cd8LVr9jH9m/8AbE8I3nh/4ieAfiFo0HhO815P7e0u6vHT7ZZo+x38rYd+5NwXDIQdv3h8tctPHU8XieWDu1+R6uZcJ5zkuXwxOIp2hK2vVN9Gu5+gN9cWtxcRR2s2yJ2+dHfaGX+76rTVupnmENtN874X/dIb7v8A7Nup/lWcduyNbecHb/lkiuVH93fn86Ib3S7XUkmmhl4+/E1s25v9nP8AKvZ9o2tT5X2Y6a+vly1urqxX/lluwvbaP9k1bsbPUNQ0/c6W+Im/e7HVdr9uP73zfeqvuklb7VGjRoz7k23LMy
rt9T97jvVX+0pvtwvFeAr5Sq3yLv8AXk1SqC9nZXNr7DMrOrXkCKi/PEj/AHvl7Y96YyzRx7pnRkL/ADrvZvm/CoIdWuNWtRqVvZu7Kn7pnRUaRV6em734+tSW941wr+dMhMK7tivs8st1Uj9KPadQ5NblmC4aba0fyfd3Om3bx0xVqNnaF1urNxui2q7/AHpMfTvmqtu27yVt3d43+be6Y2uevTtViNo7edIfJaX5Btd/uxt/epe0GqdhYrW5uNq7Gx/z16bT3yK0Y7OG3ZIZrlmK/ciZF79VP1qlcLc+cn2dF2zfLKnnbX3bfvAfzpYbfUPL+1TOu/d80W9emPvE+1Nzd9CeSxqWrw2shuLVFDGXdvl5LL07/wCRU6teSfvmRZnlwrM+3PruzVS1WRY4pl3vjLfM67uOvGe/rWtot9JJb7Y9HaSL+OVE4Vm56VKqalcthq3UMLIy7IX+7LvTcrN02gf1p8EkMeyFZ3yJduz+6p9MfeqvNdWcO1o0iLO+5ItmOf8ACkuL66kUyMlrs/uxJtP+9VKok9BcjNddQsDGzTIuxf4U3dR/nmrEOtSXHlRxfMiZZEdNyrn+L0rmTdL9jFvFCgdn+/v+bHv/AI1Zt9Smhj8qN1j2pu2b/mZf8/w1cZxDlZ1mk3kLSJdSTPHK2f4+W/pn61v6XDa/bIrWPxJbssqll+0WzRLG3+2eRivPrHWpo2WORFZS33f4d34cqf8Aaq2uqSbfJV3SZNrNEz7No7sPrRNprQqF10PQf2hPhlqF/wDsx/EjX7PxVpM8Vv4D1aaWJL5JGISylYhdvfivM/8Agn3aeKdJ/Y08E6r9keOyuxqBtrpMkHGpXSsCV+6dwPB6jmm/GPxbNafATxxZ2+oMi3HgrVIykef3ga0lDBvpmk/4JzfHLxX4L/ZI8JeHdPvFNpbfbz5VxbLIilr+5bgHnPOfQV9dQdSHhzW0Uv8Aa6f/AKZqnluUJcTwvp+5l/6XA9b/AOEq1KG+8n7Hsl5/frCq/wAvvfSor7xNeTMq31narKqN/pHk7ZNx6Lx/hTPGHxabxpJFqWoaVZCRG+eW1TyT/wADx97HrVK+8WXGsMk32OJ1Chkd+u4dcdDXyUHFtPlsexKy0T0FW8a4ZbppsBf7u7c3/wCqprh5mh3f6Pvm+5vdhuz/ABHmsq+8RLNdNcRzbVG5du/euzHC84NLDqRkkEy7Dhdu7Yv6fWumNt2YSdjVsZ7wbVW5aHdu8qW3m/8AHeP60yWa8hn8yN9+/wC+yfezu71Ua4kikHkpOfk+95P+H3VqX7c0yrNHCqv91l6/p61rGEW7siUpGkt9cXEflwzSr/Dvi4+qmlWRppgrXMvzJ833s7v72Ky45vKudzbtvzL+Jb7w5xUq3k0bbWhcnZt37F7n1/8AZqtUo3E6jtqWLe4urPVF1aN1luLfK2ru7Y2Hqkn94Hvn6jla6vRfEF94mUNpLfZZU2JdWqTLvjY87TvOGHoe9cTb3l1DHLHcJKkbJtfcm4LhuOaS8uFhjbXNNs1ivLfbt3uyxTIf4HPoexxwef71KeHTV0tQjUlc9os9N+KGnss+hJf3CK+5Li1tuY/9nKEiu68D+PPj1qlqF0W5S9Vfllt73yi649sh1/KvA/BvxA16OxbxB4O8Qy22xlS6t4r/AGTW7/8APKSPIOf59RW/Z/GTxgt4dSmv0uLhF2pPcWyNIo+vFebPBzqJ2Sf33OuNena8kz6R0/4kfEDS4/M8UeAJXVf4rOFuvf7hIrd0X4laPripbtol1bO67tl0jKq+uDXzbp/7SHj6zZZLfxLej5B8u9XT8Miu08M/tVeILeNTr2j/AG+J/wDl4SFU/M9P0rjq4GvBX5b/ADNY1qLdke+R3EML7o3l2v8AfRX3Kp//AFVaW5jQLCs27zfu/J8u4f4VwPh340eHdTs4tSuNKv7QTru817ZnTd/dJTO38a6/TdcsdQtVfT5opIn/AI/4q4rNbo0a5lc0IWkabyftOX/g82n3Ekcq+W00Qfeu9N/FQRX1uqmFdqqz7vv/AJ1FcfZ49zBEy/8AH1/KgVrs8q/4KBfEG8+F/wCxt488QiZYbiTRHsrX/fnbyhj3+avwD8YXm68l2/MqfLX7E/8ABbXx9B4d/ZU07we02Jtc8TReVtfO5IUaQsR9dtfjB4q1BPOfd1qn8B3u9PBQj3bf5L9GYGsXjLvVk21zWoTNt27M7qu6xqMbMdz/AP2VYd1dPJ/G/wAtScTldle+maNmb+Gsy6bd8u/gfwfxVPeXTc/3WrNuJvl3K+cURSb0Mmz4xh/1ZkPWt7RZumXrMt7XdpszKnzKm6tDRY5tq/JStYR02n7mrWhWQKrKPmH3H9+1ZWn748bjWxbsjL8v/j9J73Jb1PqPR/EkfiPw3pmtWzur3FqjsyfNubbg1ehv/wB8d3DD/wAezXmPwh8TRt4DGmzTMktndMqN/DtbkVv/APCUbmPmXO3/AOKrGbsxPc7K88SRwwptk37H+b/eHarDa5DNCH87b8u6vNb7xdHb7287fl1Zvnqrd/EC8+ymG2mRcfxvUp6DTPUk8ZR2qtJcTfc+bb9KXVPiLY28MMyujxJKGdH/AIomHK/Q96898B+GPiN8VtU/sfwX4eutRlPytKqMIYc/xSSdE/n7V9XfA/8AYR8G+GfI1P4kTf29exPFLEro6W8LoyuMDd8+D0Y1nPFUcO1KTv5FKMpa3Pqz4R/sBfsG+EdU8NfFj4WfBy306/tEt9U0O8s/EN3+5keH7wQvswQ7AjGCGYHiuph/Yb/ZbtfFWpeNLX4ez2d/rU7T38tnrdwEkduSwjztT2AFW/hnqVydBih3oTE23/gPbtha7KHUpF+Xzt27+69af2jUq6/1ZmEnKL0Z53H+wv8Ass2+tTa9D4M1F7qZNksv9vXTbh+dcf8AtjfBr4b/AA0/ZC8Vw+DNCltlR7Bl866aQhvtsClstz0ZhXvC3ysV8u5+Zf8AbryL9vW5839k3xYBOxz9h4Lf9P8Ab19HwPjKkuNMsi3viKP/AKcieZnTl/Y2J1/5dz/9JZS+FH7G/wCzd8Sfht4N+I3jbwRNfaxe+B9JivJ01qeNJFW0XbmJGC8AlSRyflzXtkPgfwbZ2trp9jpX2e3sLdILWJJnO1F6cuSW/E1yn7O9zMPgB4GCv93wfpn/AKSx12TTTbfmd2/8eb8a8zP8yr/2viYOTsqk/l7zOjAq2EptdYx/JHEfGf4vfB/9nvS9PvPHEl2q6hcullFbOhb5ACzAyMqgDKjG7J3cAgEjz3Uv2xv2bptDh8U6h4d8Sx6Zf3c1rbaifIWC4ngWJ5Ykk8/azos8DMoJKiaMkAOuV/av8F+Gfid8YPg74E8Z6Y2qWl/4muo4fD7yvB/b90Ika20jz0ZTa/bbhYrL7VkLb/afOb5YzXCeIPhf468X/Dv4QeFP2pP2cf8AhEddufFvxHv7Twr/AMIhL4a/4Sq/t/D2iTaZB9hgWBR9ru4LbT9lilu0+MRkXUjzt4TxeLnfllp6en+Z+VcR8S8W4fPMV
QwVaMadNwjFOnzayjRb5pdNaummqT6r3uxs/wBuj9k2xVY7ew1hFXoUurYH/wBKajP7bX7IpXb9l10LnJQXlrg/X/SK8wh+D15a674P+JXxz+CFponxhW71a5034P3/AINh0SDxq9i2ktpQk0vyY4o0uDc6kjRwxRpqQ0Y2cKi+uHne38e/Bvxju/EXg/Up/wBhPwlY+MZv2Y9YvfHWgSfD46JDoscGoa3azeIpLKJreKC9FnBDIkkqGPzpYjDFv+zKq+s43+f8P6+48F8W8dqEm8TG6tp7G7s+Xezdp+98G+72s37z4d8eeF9a+HDfGDwr+zt8WLzwhDaXF03ibT9Dkk0xIIC4nlNykhiCRmOQO27ClGyRtOIX+NPgSX4Un42TfBj4nt4GaPZ/wln9kgaQo+0eRxdCTyc+d+6zu/1ny9eK4Tw34a13xg2halB+zR4h1TwxP8BdP0rUf2lrKzbZ4atG8OxRavIWtLVNO1C1sLJtQ097SeOXU5HgaFL9J0t44ee8J+GdO1Lwnqf7Y3jX4NfEPRLTTf2eZvDR8Qxi3vfC08jeEh4dsr2PUolzJNNcS21m+lJG0lrPJLNNdqtrPbJbxOKf2/w/H+vvO98VcXWVsSut37JJJK153cbWW9k3Fp3VTR26qH4vfso/tJzx/BfTvhB4n8cy6vDNGnhS0sLfUXvVCNJJtt0ldn2ojOcA4VSTwDXzr8YP+CS37Hnxg+I+o/C74JfBH42eB/HWlWpvdV8H6B4bW/e1idYyssumzuzW8f76EhlVMiZef3grS/Zm+E/7QHgvTPG/w40fwj4h07xd8YvgLLc/DvSLBZEvfEGm/wBtWV1ciNIzuCTafpeqMsb7TcxKFjWT7RCsvQ2snx7+DegeBfhBaW+k6L4msPh5qn/CyZfiJ4UtZ7Xwr4am8QfarfTdU0++tZ1Ty7uAanFK0Ju7htctLeGOQi1SbNYnFOK5pfh+H6nBh+NeMXRjOvW83alHukoK7+KzU7t6RafK1qeAfCT/AIJKeJ/2OvjboXx3/a90nxrYeDNG1dX0mW58B3Oivf6h5UhhiaS6uDGmAry7EZmbys5wGr7Nn+Pn7OMzEjSPGwB5Ixbfe9f9bXk37aXwd8XS/s/+EvjxpXju717wbHd28VvrF7ZXe/XdS1mO71G41G4u3H2e81YyWk8F9HC0oskt9NtGu72WGZ4vvH7azN81X9axzfLGdkvI+mybG8b8QYutTp46NFU1Brmoxqc3OpapqcEknFrrfyPmFfjh+zYCzf2P4z3c7GMNqduf+2tc3pPxK+C/hv4m3muaGfFp0DXLJpdatriK2aeLUk2olxCN5UrJF8sikrgxRkbiWr7Akutrblmfd/d31k+MdF0fxx4X1HwT4qs2utL1Wye1vLfkbo2XBwRypH3gw6Hml9YzBaup+B9Csm49vb+16f8A4Sr/AOXHyx4Z+IvwE8JePdVvdJh8WDw/rCRXlxbG1tfPi1FWCuyjzNrRyRhS2SCHTIB3krzOteK/hv8A8Knm8A6To17fTyeK9V1Ax6vaQfZpLSa4uJ7eNwpYly0kXmDG0bDjdxn3/QV1r4leE9b/AGZfihrl0PFOhwQtZa4iKst9Cr79P1qLjHmLIi+YuMCaJuNrrn4e/wCCinxe8faV4Jv/AAH8RdKs4NT16/u3uv7NuWeG3aW+We4SKQAb0mS1jkPHyeeqc/erDE1sZCi1Kelu3pY+j4X4T8Rs1zqnSo5zTi073eEUkreXt1f7z5oh/wCCcvxTsyG0nxzo9n+5CskV7OyFiMNwYehyfzrrfCf/AATfF54a+0eOP2hbrTdZErBLTSvB8V9aiMHjdNJdQSMSMEhUQDGPm+9Xm/wR03SdX+Inh7TPEro1nqPiKygvd5+9C9zEJPzDNn23V+87apdK3lrCqwom1f8AcVcD8MVzYZ1qsG+bbyufbcb8PeKXDSopZ/Sk6nM/dwKha1v+oiV7t+Wx+JfiH/gnn8RLIMfCHxb0W/I/1Y1HQ7izDfXZcz4/DNUdF/YR+OjzFfEGv+F4I9oAew1W5ds59HtV7e+fev1Z8ef8FKv2K/hb4uuPBfjj48aXaXlo+26aKEyQxv8A3TIgIyO9bHgX9u79jT4qXSab4R/aH8OXM0v3Inv1hLf994rSpSrSV9Ld7HxOGzTxbw0k457Cy74VP/3MflPF+wX4+8kmX4g6MHPGw2czgjOeW3Lk9ugGO1ehfEr4E/Fzxj8MbnwZ4f8AiHp2mXt1BHBLcrby7RGOHVWVgy5XgGv1c8K/ET4Z+NJhZ+DfHuk6hMv/ACys79Hfj2DZrdWS3kk+zpMhZf8Ab/iqabqUneLX3HPmlfxQzmMVis6pyUb2/wBkS3/7jH8/d1/wS2+OCECz8d+D5VUYRJDdxgewwjACpdM/4Jz/ALVmiR+TovxE8OWa5zstPFepRL+SWwr+gKO4hb5Ydp3bv0qT7U0fy7K3+uYpv4l9x4v9kcef9Den/wCEq/8Alx+BMP7CH7Z9vJ5sXxk0oOQAZD411QsQPu8tbHbj2ro/AP7A/wC394l8QHw34L+NWlfb9QjmMgfxldBrsLEWcSSS2JL5jQ8MxzjHXAr91P7QZV++67X/AL9eLeOLtv8AhufwZOrsCvhqUA7jkfJfU/rdeEk209V0PLzPEcbcPVcJWr5hTrQqV6NKUfq/JpUmotqSqys0nppufmJ4R/4J3/8ABVXxhoNt4u8G/ErQdat4WeC1v4vGVoZY2jO0ohnsRIu3p7dqi0//AIJv/wDBYzwLp2paToPhi31GHVr83+qQP420+Zry5KqPOd5AsjPhVH3sY4xX7YjUJW/vNR5nfYjN/uLXWsWou6ij9P8AaO2p+DfxR/4Jw/8ABWz4tala6x8VP2f/ABNqc2m2ptbDyNe0d4rWHKnZGkdwoUHauWOSdvJauL1D/glT/wAFCNNLNN+yp43XH/PKGyl/9Auq/oX/ANHHDWcW7/riv+FJvgX71tB/35WmsZ5E+0Z/OPrX7AH7cnhuN5rz9lf4nbY/m/0Xwq0zf+QpGP5VS8K/AD9sX4b6/D4z8M/BT4paDqsDqYtRg8A6pbzcFTgskBDqdvKPvU9wa/pEMlmpwthFn/Y+X+tHk2My7WtsL97Zvb/GtHi6bjblHGrKLutz8Kr3R/iZ8Q/AMvxG1z4Zaz4d8T6ZeRw+JrD/AIRLULODUlk3bL6GNoBtkLf61FyP4v7u7iLzWte0qbydSdrd/wC5dW0sJ/KRFr+g/wCyWu5mWHdu+X5nb/GvLv2jv2NfgT+1Rp+j6b8XNE1F10S9luLCXS9Ve2fc67HUkfeBH8NeTXwdCtNyirH6vwz4p4zLMPDC4xOcY/a3duit19bn4hf2/rka72uYlX/fqrN8Vo7Nf3mvWrZ3L8l4p6euCfyr9qbD/gmT/wAE+7O9bUbn9krwnqNxI257rW7Rr92b/aM5avl3/gsN+xX+yr4X8DeDfH/g3wBpPg68s2vLW6s/DmjxQR3ljHatIjOiADKT+Sof0lYfxVlPAYWMeZs+hy3xTx2bZjHB0qCXM3Zv
V7H57+FfH3h261pmutb05XZG+z/bJlWNX2tsyf4Ru25r9d/2f/hPo/iL4e6LD8JZtL1vQYNGh+x3Wkalb3LRqyKSTsfOc7uor8fdD+F+sa5qFvZ+F/C91fXb2rz/AGLS9NluZVROXbZEjNhe5xgd6++P+CSfi34O/Fayuf2b/iBoNvpuv20T6j4I8W6C62GqW7bleWGO7iw7c7Zdj5SQbg4I3VpgcTRwdbmgvi0N+P8AC51mmRyjNp+xam4re1rX+Suz6v8A+FJ/EhbyWxm8K3iWclwj7IIUC7jFh+d2cA1avvgf4oupGjXwlqiRruZWnTef4flIB/lXe/CPx9440fxbc/Af436la33iG1s3vfD/AIjghW3TxJpasqGUxDiK6gLKlwifId0ciBQ+xPTV2jG1K95ZjVT1ivmfz65O58ux/C34hQ6gGh8H6lCI1SL/AI9mUSKXwVHXbx1arC/CvxtDJD9s8JSujq+9/JcHduztNfTO+XO1Y9tOWHcT5nNaLMp/yk858tt8PPFUF0jf8Ifq9oY2fZshdkZdueODtJ+77VVh0fWLNPJ1DwTq63D3Ero32bKeSPuKDjLOP4s49q+rvL/hWZl/4HS7W/imb/vtqTxvNpb8So1lF6o+VrWxvFmlj0/Sr2He+5kls3Dsn5YqO41BY5tt890VG3f5tsybv9rOK+rJLfzF4uWpktqz/K0+f95Kr6//AHQ9qup8rR61DPDGyzokS72R268en96oodQj1CQzTXkWwsf3u9UaRfzr6nk0+Hd/x7RfL/fhX/CmR6PpW8NJptkXH/Tmm7b+VP6/rsUpRPmFpLezbdsSSFXO/wD0lXXH97rmp7XxNM3/AB43MW1fueVz5a9OOf0xzur6Y/svRVb5tEsPm+//AKGn+FMg8N+Clbc3gzSG/wBr+zYv/iaFj4dUDaloj5whkjtVmmmmQIv8D9fvY+i4rRvriP7EkNrMibEZot7q276Yr6E/sHwKwWO48E6SVX/pwT/4mmt4H+EtxJuuPhvozbf7lmo6/TFH9owvs/kV7ttz5zluN8afY5nTe/yb4c/KOq5HektdQaPY0KeUW/jd2fp6gjPNfRbfC/4Kzqqt8NNL+RNqLscbR7YNJ/wpv4Gt93wHFD/E3kXkyH8w9NZjh/Mqy6HiNnbM1qfJhiM2xdzI/wB30/PpV28t5rW4S4a2SVzEzfO427enOK9pX4Q/BNoVt18KsgX+5qUw6/8AA6jm+DHwcuMeXo97HsXanlaxKu2rWZ4e2tw9m+jPmv46aokvwa8WafcGZAPDOoMWYglj9lk2gj/erlv2NZLa2/Zv8NTyRzMT9s5jOCD9sn6Hv9DX0H+0t8FfhfYfs9/EDWtPhvVuLfwXqk0O++3DelnKRkEcjIFcJ/wT6+EHgzxT+x/4P8Q6lPfx3cv9oBjDdbV41C5UYGOOAK+6o5hh/wDiGlab2+t0l/5RqnhTpy/1ngv+nMv/AEuBdt9YVl3NDtRn2uu/52UcbQB92rentM21vtOVG5ki+bft/vdq9Ek+APhO1+a11i9X59yb3Q/+y1TvPgrYZbfrF07cbGfa20+1fELM8N3PclSklY4O8uZrh1mjf+A/7BbHt6Gr2k6tcQ+WyPasoVVTdN83zNgZroLr4WrHD5C63dRn/nr5KsaoXHw1t0nby9Sb5kVX3W3P861hmWH6yM5QJl8cXUka28u18J96L5unWoI/ExkYulhKqojMzbF3bat2fgnT7GRZLebbtTaybPvfrWhceH7e8kjkXYrpg79n3sVtDMsM38WhDjFlGx8XaHND5FwiJn5d+z7vbcMCpLfxJps11tt3iwr/AD+bMw69Ov3c+tP/AOEXmW4EkjwPGP8AlkiY/Wi48L3N0yN/osWzds/ct92toZlhWviM3TiTQ6lYtChuLl4VV/mdXVlYduf4uOvvUtv4is7W4C29zAkMm5N1xMjqvzfeKOM4rJuvBeseYqw39vjdu/u/N/e+7Wpo/hPxxYr/AGnpP2WVXfynRrmL7o9Q/wB3JatljaM9pJ/gVGnd2RB4ksVj1BdY8M6xazai21ZXimzHcD/nlIMDZ/eVuoP/AHzWrot9o+qWbXFvD50sTbLq1Z2jntZe6uOjezdD1FVL618UNeNda94b2Qo/8KIdoPqY+wosrPw7a+IbfUobnUpbiD5UiiRB5kR/5ZODwyenOQeRWjrqMb3bfl1NHTvO2nzNLdNIvnLt2Ln5JU+apoZriO6/fpux95ouF/3eK67RfDPwf+IFrNf/AA/1nWbbU7D/AJCOjalbL5sLewx8wPYg4NWfB/hf4c6pfPp99DqlpMMrvlueN/0PKmp+v0ZQ5kvw1Xkw+pzTWpd8B/Hrxd4Yt4rHS7ZHt0+/FO+8ehweD+tekeHf2ovDt5bq3iDRLqwdvl82JN6M3t3ry3SfAjahrFyvhvWLB5oX2SwX9suyQHpgpxn6gVu+C9L0X+0ms/Enga8RZVZdsTq8UjBvvITkL/u5rjxEMG7tX+R0QjVjo9j2bQ/jF8NdaZIbTxJArv8A8spUZGX8DXS2eqaTqS7dNv4pV2t92ZW/rXkdl+zv4b1KQyaf4o1ezeT5ooriFPlP93j734Grfhf4H+LPDviBL9fFVm67l/geMN9cZ2/rXBKFG14S/A25Zpanwh/wXu+Kij4meEvhzb3ny6Poct5dRf3ZLiXA/wDHEr8xfEusedIzRvu3fer6P/4KrfFa68fftj+Ob2+v/NWz1f7Bbr528Kluix7QRxjf5lfJ+qax8zN8nzVM1sn2OjGTUZ+y/lSX4a/jcrahfb52VXrLkvPvKr/7lLeTN94v8rVSkuP3ZbuazWx55FeXDVW8nzpEtYeWmcJ/vZbn9KdcTJIzMtP0ONptbiaP/l3iZ23epXArSmrysRKWh81XHg+/1BVW6mitov8Anlbp/U1oW+h2tvGFjh+6vy/7VbNw0JYxrMrVVb7p/ffdrOSkmO7ZV+zrH8zVYjZVCs3ZqN21f/sKVfK7VA0mtzrfAeqTact1pa8rIm9P+A8VpXGvSMu2TvXK6PcNDeQrC7jd8u/616L8KfgH8VPjReJpfhHw9K6Bv9Iv7pGS3h+r4+Y/7I5rCrZK7diWpS0OdjkuNWkSzs0ld5mCRJEjF2J6KAOWNfSHwB/YN17xd9n8R/Fp3sbB13RaNBuF1Jnp5hHEQ/2eTXsf7N/7Ifg/4L2Npq2uWFrqXiURFbjVHTckO7+GIH7o9T1r27RfD8aTIyw/Mq/KiJXj4jMm/cpfeawhGKvfUy/APwz8PeBdHt/D/hvRIrCzhXbFa2qYX/eOPvH613OkWUysqq77v4vStzwd8PbfzBqmtWaoifMtq/8AF/tH/CsK6ht4/EGqWMJZPs14yqq/L5YZVcKPbDVzU8NUratjfvOx3/gu4msVWFfmVm/gruLWRZYxI03/AACvB1jkjbdHdSqfTzmqVbq6kO3+0rhf+3lv8a7IOVONrESpXe575hdysz15F+3lNE37J/iuNG6fYf8A0vt6ybea8VTt1Kf7/wDz8tXB/tVTXr/s/a+ralP
Ig+y70aclT/pcPUV9bwJKT43yv/sIof8Ap2J5OeUmslxLv/y7n/6Sz6E/Z4lx8BPA4Ybv+KP0zj/t1irsFkVm8vY39a+dPg5d6hH8JPC4TUrkAeHbLCrcEAD7OnArpv7S1KP5o9Vuv73/AB8t96vLz3XO8Uv+nk//AEpnVgaX+xUtfsr8kcz+358Nh8XNX+Hngb+2v7P+36zPEbv7N5vl72t487dy5xvz1HT3rhV/4JMMxP8AxftsDv8A8Ip/91V0Hxa1a+ufG3g2aW/ldodV3IzSElD5kHI9On6V36+KvEHmKf7buuPuf6S3evGhTUqkk/L8j8ywnDOTZ/xfnDx9PndOdFR96SsnQpt/C1fXuebeAv8AgmKPAXjvRfHI+LGl60NF1a2vv7I1/wADfaLC+8mVZPIuYvtY82F9ux0yNyswyM10Px2/YQ134867o+oX3xN8PeHNM8OeHbfQ/Dfhvwt4Ea3stNsomkk2L5t9JNK8k81xPJNNJJLJLPIzOQQB16+NPFS7UXxDdcfd/fU5vHni5MeXr06lf9ut1Rjy2Ppo+H/CkaDorD2i9Wuepr/5Ne3ltdJ7pHi3/DpklQ0fx+Bz/wBSt/8AdVKf+CTMYOD+0EB/3Kv/AN1V6/J8RvGsbceIZz97+OqM3xe8WaaD5/ie4/vfvXz+VHsaK6GP/EN+D/8AoG/8nqf/ACZ5Z/w6djHLftAgD1/4RX/7qpR/wSbQnH/DQA/8JX/7qro/Fn7Xl14ZuI2uPiRBAibvPil2MWXbxj+7zXA+IP8Agp5pOisW/wCFkNMqr/yytkaqWGg9kL/iG/B63w//AJPU/wDkzbH/AASciyA37QOM/wDUqf8A3VX2OzK0yTLeNtVdrxfLsb/aP0r8/pv+Ct2lwu23xPO6/wCxbRVY0/8A4KzafM2P7eulUuq+a+mq6/8AjlbRw7hsrHs5Pwzk+Q8/1Gnyc9ub3pO9r2+Jvu9j78zDu+WRcn/YpNqscNtr4m07/gpZaaw22L4nabCdhZkntvJ24/36+jra9+Mdnp9nJ4i8eaJbXMaNc+LYJdkT+H7PyfMSZ/Mwkpy0e5ARhGzltvJWpulR9pU0V7f19x7bjZ2I/wBqz4f6PrHwr1X4hWt/a6b4g8O6Rc/2HqU949slw9wqx/2bLLH86Q3D+WpKco6xyAZRa/IP9rz4qL4/1/RNC06HUd2g6XJFqUF+rtNa3LSqPKJI/ehYYI8TD5JA2QSK+lv2k/8Agpr4H+P/AIy034NeFPHMWq+F9A1uWW61TTtNl8nVpmRow5k4EscA8xgwGwllILGvkbxhZ+PfFnxI1rxZNeXiaJeNJAk87rvjtEZhbqS+CwCKvy9tzCvOqKeI0aaXS61PvuDcbhsmhLG1Jat2t1S7jfg/8SNa8Irdw+HbazWe9sJbO4uL+wS5T7NJ/rEw/Ck/Lz1FetePP+Ci37RDfBmw+A9v8S5XsLC1Nrcaorr9uvIegiln6sFHy5GHI6lu/wA0at4q0Ox0ltF0/wCaZ5R5t0j/AMA/hSuT1TxReW0jW9rCip+fB9feu/C4OFPWWpXF/Fks/aowiuSPdavTv03N/W9aaa8SOH5l3fwvj8q1te0PxF4Nt9P1qa8ikh1KJntfnV+nByP4SK4FbhpLVpF80svzf7tZreKL23tX0+a5f5JVlt/n43d/zr0nyNWsfB8p7V4V+LPjTQWivtH1W6srq2ffa3VrcvHLGR/ECmDX0D8A/wDgsB8fPhTcW2g/EKztfEGl21q8Cy2sLQ321myZpHd2FxJ6k7PpWZ/wTT/Zb+G/7SPw11rx18TtYuANM1SW1iit5tjLhFO4/wB7O6vPv2xv2X7j4F+KobzQ9SS+0293y2d1v+bA/hI/9m71wVvq/tvZX1NHTaXM1ofpb+yj/wAFDPBPx2uDHo+uKibkS4nvNqGNiuQjl8DP+yg+tfV+n6hHeWqXMN4kqOnyMr7lb8a/Cf8AZluvFO3zvhrrb2ExZf7XRUR1kHXo4PB9q+/P2Yfj78cvCqr4R1SwstR0+aJp7XzbzdNHjqo+QBR7Zryqv7qpyFzwzlHnifbskif89uK8Y8Z4b9tPwhtm6+Hpfm9PkvatW/7QGsfZV+1eEot3/Xyyn8eK4HXPiVcX37SXh7xdPonlG10l4hbRTbtylbkZzj/b/SonGpZXXVH59x7SksLgP+wzC/8Ap2J9NrNcRt8tzupzahdK23y1K15pD8apGP8AyB/lP3Pnq3D8ZLHrcaVcDd/cdWqnSxD6H3fspI9AXxAF6w/dqRdehmbbsavPV+LWnuzeZptxtP3fu0jfFjStrSf2bdH/AGE25/WjkxPYTpSZ6Mt9GzYWbbuqT7Qrn7//AI/XmP8AwuTw+rfNZ38S/wC3Cv8ARqif48+EbX5ZJr3/AMBt1Up147xD2MrHqxmZfmX/ANDpftkyt8qZryL/AIaO8Ixt8xutv9/yau6f+0Z4DupNv9rTxkfwy2xqlVqPSw1QnfY9QXVpFbbJbP8A98V+fX/Bar4jaXqUekeC4/NN3LcR2dv8/wAnlo32i5yPdljHv9K+wrz9oX4d6Ppk2uXXiSLybOB55d0LfdRc/rtr8mf2kfF3jL4xfF3WfFyvLqcWgac2qapAtyr/AGX7TK8rvg/8s96rGMZP7rpiubGKs6aSulu/S3+dj9B8O8pdfPqeKqSUIwa30u30XmfaH/BET4QpoHg3xn8dtQs/3+qX8Ph/SZZUX/UWytJcMD1w00rI3qIl/wCBfN37cnwZ1/8AYR/bOh8bfDFP7O0fWb1vEHg2eL7tu/m/6Tadc4jkfdj/AJ5z4HCmvvH9kPVPCPwF/Zl8GfC/U7xYbyw0ZJ9U29GvJ/3s7E/xHe9cv/wUA034PftQ/s76j4VtfGenQ+JNGc6p4Uluptm26jVswk/3Jk3RH/eq40rYVRa1S/E7ocS1o8dV8TPWhVk4NdORPli/wTfk2up6P4f8cW/7ZX7N/h/44/CKaC38W6JONU0RGf8A49dXhRknspDg7Y50aSFuPuS5HKg17B4B8baf8QPBumeN9LtpbaHVLJLj7LcfLLasV+eFwPuuj7kPutflZ/wSj/bCb4M/FJfhn4qv2g8PeKpRGz3D7fsN4v3HOfu/3Gr7l+Gvjq4+HP7Q2v8AgfxN8SNG03wlrCNdeDdGidpY/tcku+XNw/PnTuzbYEHlx7WJLb129WEq+3p2+0v6/rzPmOMOG6uRZrOEF+6l70H5dvk/wPof7QzfxrmmtPIzbV+WsT+3o5GHl38H/AJlqZdU87G2RD/wNa6FJ3Pj3GXY1d0v/PVKVZGXP3DWZ9ruf+eNP+1XjfL5L/8AfFPmkCTW5o/av4W20LdKh+U8VnxzXsvzLbNtqWOO6b7ts5/4BV8xBba4ViPno81W+ZRVbyL5fma2l2/7jULMynYKaqX6XKvZFlmX7vWljVWf2qv5jLwRTvOb+B9tUHMTtCn8PWm7W3fNspnmH+/+lKsysN2/gfxUWT6Duh7bt22N85o/edP/AGekWRmPyvuDUrMze9Hs03qWpNDd027/AF3/AH
xSvcSL96lX5v40agx7x8yUeyGpanC/tLXU7/s5fEBC4A/4QnVcg9f+POWuM/4J0XMkf7G/g5F3YH9oYx/2Ebmu1/aXCn9nP4gDfyPBOq/+kctcV/wTpRT+xt4OI6/8TD/043Nfc06P/Grq6/6jKX/piseJOX/GTw/68y/9Lge3fbpNwUbqbNJJKuz+GmNa7mxTtvv8tfn3sX1Pf9oQtvf7ybqhmjhk+9DVlod3zVHJa5aodGa2DmuUJI4f7lQOu1m21oSWbt/BUMtj6JWbjURLjFlCS4ZfvpTRqkK/e+UVPLYyKvy1UmsbhuaycqyVwUYtA2rW+35Zk3f7VJc3lrdRqvnLu/gdKqzW80fQvVVt+fm6Vn9YrRdxchsWIt2j279r/wAaI9PaFVbdG/8As1z/AJ0issizuppv9rXS8vM4rWOZVYqzJ9k+jOjOqXFrcJcb382Jdiyq+HVP7oPXFOuvGFzqEi/bb+4lZf8Anq+Tx79a5aXVLxT8158tUrq88xmka63H/ZrSOZyjsJ0523O703xxeaPcC6sbx4n+75v3t351o2vxo8bWNwk1nr0sTp9xkhC7h7/3vxryiTWrhVOLxj/v1j3/AI08QWTFo9+xacs3lu0NKrHRM+ntN/bE+Klrarb3H9m3Cqm397YfNw33uDVjUv24vHGkaTeapqeiaM0NpZTXEr+S42rGjOeh/wBmvkmT4neJo92yfDfw/PmuS+IHxIvPEGi3vg/xY8txYajavBLbxPs8wN1XePu1VLNYNpNFxVbmTufAXxh+KGoeOvGWo+MNYn/0nV72S8lRP70rtIf/AEKuGvtQjuOK9K+P37MPjb4e+f440N/7U8OtLu+0K++SzX/pqnG0f7Y4rxSTVGZvMV/kLsu9Pu7q9eNeFd80WXVc5TcpO7ZrXF0rKAvb5ag8w7drfxVlzX0it9/av8VK19ukZfRKtO6JiWZrhvl2/dV/krW8Gw/aPtd1ncZpURH9lXn9WrnmvDuMkn3FX+Guw8D2bW+m20kn8Sb2X+67tXRh1eoZVXZH1C37GnwpmkeZvgz4e+dt2z7AnWqk37Dnwrm+98E9BdT/AHbNa+3VtNNb5f7Hsh/27LTmsdJf5V0e1Vh/0xWpq1MLLVNlx9otD4df9gn4R31wklx8ENGdlf5/9D4ZfcVpW/8AwTZ+BN6VFx8H9LjVv7ttsb9DX2c2n6Vu3f2bar/2xpfs9hnb/ZsH12VxPkezNeeT6HyVpP8AwTJ/ZgjvIby4+G6q0UEsXlJeS+U25fvumcM6/eUnlO1evfD34G+Ffh34btvBvhfw9a2NraLti8hNomPdj/tn7xr1mNrWFSraVa/3fuUklwqwsq2duG/3PlxWFSjTmvebfqJyk9DkbTwJZw/vJLZBitjTdH0nT8NZ2yl/+er9fwqtqV9dXV15d1NuVPuKv90+1V9W8TaP4Z09tS1O88tF+VU7yN/dArxcTiMLh7yklGMe44Kc2klc6C4vrXTbN7y8uUiRFLSyyvtVQPWvnfxZ8dprrxxeX3gnRILywl2r9tnmZBMU43Aenoak+MvjrxJ8QNHe3hma2063cO9rE/8ArFHXef4sDt0rktU021jWC/tU2C4twdq/3u7V8NmfGMpzUcI7Jdf+B2PYw+XNLmqfcdPb/GTxRN80nh6zXP8AAty9XI/iprknzf2Da/7f75/8K4e1t1DCteztY+NvSvEnxPmkdVPX5f5Gv1KnzbHWQ/FTXFjC/wBiW+f+uzVyP7QvxB1TW/g7q+mXGkRxJN9nzIspJGLiNv6Vow2/+xmuZ+OkUafCzVGC4J8j/wBHx19n4Z8S5pX8R8mpSnpLF4ZPbZ1oLseRxFg6UMgxbS2pVP8A0hna/DP4natZfDnQNPh0WNlg0W1jVhKcsFhUZP5Vr3HxU16R/l8PIFX/AKeW/wAK4nwDY2z+BNFcr10m2L/9+lrQuIYI12r/AOzV5HEnFWa0+I8bFT0VWotl/O/I3y/BU5ZfSb/lj+SGeM/GGo6jrujahNo6wmzuvMiQyEmQ7ozg8cfdH51tS/GLxdCGa38JWbt/ce8Zf6VxeuI39oWYJIzL3bpytWZbfaNzyt/33XNV4kzClhqNSMtZpt7dJNduyPi+GMLB8ZZ8rbVKH/qNTOgk+P3iS1X/AE7wNaht235L9m6/hVS4/aX1C3X994GVv73+n/8A2Nch4iuNPsbOa+1K8WGCGJnlllfakajncT7V8c/H79r3VvHWtD4d/BWG6Npcz/Z4rq1Rjd6k5/giA5VP1I5O0fNXq5Rmue5xV5KTsursrI+3q0aNHc+mfjD/AMFJtJ8DzPpOm+G7W4vAm14k1Ld5Z9yFxXzN42/bQ/aK+OviB/DPg22v7mUvtXSfDkLytGx6LI4+7/wMivRP2a/+CVfirxpFD44/aIvLizhlTzYvCWnP+9x1/wBJnH3fdUIH3hl6+iPD2sfCn4IWt38LtP8AhLf+Gvs2IrBfD/2VoWj7OZEPmK/sQB/vV9/SVHDU/wB7PmffocTi+azXyPjLT/2P/wBrrxtD9u8WPZaDHN83/E51XdNz6xRcf+PV1uh/8Ev9Uuls4/EfxsujNdf8e66XoiATMFydhl37uK+kbP4t+P7XQX8OzQ6Xf6bImy1utX0qL7c23q0jxYDP6N1FZ7eJPFV5MtxceIZ0xt2LFtAX/a78/wC1XBieI8Bhna7fotDSOGqz6HiMn/BKnULi8SHSfH+r3jF2/cS6bF5smFz8hTHOPauE8Tf8E+fjL4T1j+w9P1LzrxojcRWsthLFK0IZQ7gfxbdy5xX1Xb3GpNMky67fq6fNEy3LAq3tj7tWdSk8UapcadqEnjPV2m06/Se3l+3tuhPQ4Pv0K9D3FcceL8C5JNP8C/qdQ+ZPhf8A8E5f20PiFYt4v03w9Zp4SsdbWy17xDdeJLe2/s2NVV57p0uAw8uOP5yTkZ4wwr2f/gox4++Iuip4I/Zh0PxynjBPFvgu0l8UJFsnDPaukcF7BPgMv2qP5ZEk3xyJAuAm1t9r47/tsRfCfw5rnwt1aLVPEV/4q0ZLK/tdGeGKdbfzvMPmbysb713IcjOxsDivln4d3vxh+PfxzTx74s0K9s7G0s7awtftvzTLa2+4QQ5GNzAM2e2d3avoIYhZhTp1YJqC113ej/C9mvQ5+SUKjXU9a8UfsX6hpNvY67pNzptsJoEW/bTbZYJmUL9yWLkMfV0PPoteHftQw6hHcf8ACFw6xmGLZtgi3qsaj+Eg/eNfSP7QHxd8QeF/D8fh/wAN3N5byRwfOiXKoip/tuR8v4V8+eLr7UviFapq0dgiJCg2v97c/dsnlq551fa1FLojtjS5Y2PEbWztdLvHaaHzBAp2bv4nPSs6+t2W6imb780rfI3tXUappf2bVnXVXddl0PN2/wAX+TVTUtL0mx1C1kvLpSrP+9T+6vevWw7UoanFUjyzsfWX/BLXwT4F1i+1nVPiNbaXcRJtW1tZ0Qysm3lsHt2964X/AIKgfsaeCfg3qUfxk+Dt/F/wjut3/kXWlrtxp87LkeX/ANM2P8PYt
xxXkPijxl4js9Y0PxF4B1Kw0h7izleK3sncT2PlPs2XEnTLj5wAMY/vfeqv8RP2lviD8U/hSvw/8Xar9pitrpZ0fZ950fIbPtXLUVeVdTi7Lqar2fs7M63/AIJ/fthXn7Ouuap4H1zTfteh+I5Ua4VH2vb3CpsEoz94EKqlf9mpfi98fviB40+L7Rx+IbOGD7alva3F1bedFbxl+GIz0A6+tfOsNvcPIkkKOp/vpXQ6RHqmvalDa3VzKk8u1Fn37d3+yauWHg6vtbdLERm+Xl6HvcOvap8HPFj+OLfxVa6po+oXht7LxDo1g1tBdTIiSSKIJCSm0sy7T12s44Za94+GP7V3jHR/iVoNvdanavpmsbH068VNg+9iSKT+6Ru+8P71eQ/DX4f6T44+F9n8B/EV+kd9rGuSta3Uu5zbzmFvLfA9XVV4qn+xx8N/Eni+9RtVWWGfwpqiXDWVx0juQ7288X4bW/Fc15uMVGlTniKj+H+l+JtTnJ+4up+lUPxa8A3MaLN4wsFLfwfaVytZlz4k0G4+J+m6vba1bvax2rK1xHMCqnEvBPY8j8688h8O3Ct8yRf/AFqmGni1/wBGaFcHnb2P+cV8nDiV1pOPKvdTl/4DqfF+IGEtg8vffG4Vf+VonuFv468Iq3y+KtNXc/yf6Yn/AMVVu38beE2bcviewx/1+J/jXz5d6faqvzWFvj/cX/4mqkml6XN97SrfB/6YrV0+LVLX2f4n3ksE46H0xD4s8OTLtXxJZfN9xPtKbf51Muvaay/6Pqtq+3+5co39a+WZvDuhSr5cug2pUfweStRr4Z0GHcIdEiQ/7CYrqhxTS6wIeFmkfTt7rllN8q3MTH/fX/GsW+uEkk2pMn/fdfO7aHpcf3bCVD/fV2/xqGS1hhX5Z7of9tn/AMa0/wBY6VT7I1hZWPfZmZpPv063kkh/1iV85X1rHcMrSXl6GT+7cuP5GoF0dmXcmq6krf3v7Sm/+Lq4cQ4aKu4msKErnsPx08daT4Z8Lwxa1bPNZ3N0r39qjqjTWcP7yVAT/fVVj/7a182/s7abqnj7xdqUYlaxsPFXje2sl0mCzaOGO0t3lu3wJEEiqIU8ssGeF+owWKUnxU8VXXh/wXrzLcu82pyt4csL/Ut1wtqxhW4ubiNJMjjdChboPm71o6HJq1nrWiaXa+J5Zl8PeH2liutLuWELPdysR5R6qgjTdsyceb1Yba76uYQeE+sSWjsreVz7rD4WeBoUbq3JCVZv+81aCt5St959rX2pWqlmkdB/8TXmfxKurGRnNvsPWvGf+Er8aRt8vjLWT/vX+f5iorvWPFV9CyzeMNU/3mmQ/wBKzefYOa+E+HSqp8zZ498VLO6+Efxqs/H3h1ESGXVIb+3+T5I7uKVZNuP7rbefUNIP4q+gtU/bu+HN5vbxxpVxbTXV/BeeHtN0a286RmaZBIqE48raHZR6naleZeNPh1Y+NNFubfXPEd6riJmilndNsbjkMeBxXkmlfDXVPH3gSTx14b1O11HX/ADLr0GjtN/yErazmSS5WIj7pQKrjJ/edAuaeBxMauJcqL917n6NWzPC5/wd7DEf7xRsovXVXul+nqfc3xZ/a+8TeHfilrbfC+1TxD4F8FapeWXjlfsey/023KRS2mppz80ADzQSxkbxPA2AyfNXFfEL/gqZ8G/Asz2tnrEuqXafL5Gm7n2n3fgLXzP8E/2svjF42+OPiL4ieCdS0jY8V3fyy+IblrXTbWGaJUke7cAFY0TzH+cE53EVH4L/AOCR/wAeZPEGj6P8QviF4L0PStXsPtHh/WbPWH1C31a2VMo1vIERVQptVS5J9Rnr9JFqzdj8kkpKXL1Oy8b/APBZr4vXG9Phz4Yt7BPmVbjUr+WZ8f7iEBfzry7Xv+Cmn7cXjJmh0/496zaFX3eV4ehVP+A42SH9a9K+DP7GPwXuNQurXWNb0Zr7TLopeW+vXjNcxsvG4W4H3D2cda9V+HvgPwH4Z1S+0fxB4MvIba1cLpt/oNhF5F0h6Z8z50cH2x71hVxWGpr3ml8xqnKSvY+R7z49ft6+P4PI1T4s/FrUkd/4dYvbbd/s/ujDVdNR/bfjHmR638Wvk/6nLU1Zf/Jqvu7w99n0nxRe/wBoeHn1LRniT+zka8W2ubdh1R9g2OD97OQRW/JrXhmbb/xbd0X/AGPEL5X9K4ZZzlkXZ1I/IfsJWvY+A9M/aA/b++Hiu2k/Fr4w6bu+ZnbXr666e07zD9K7Twn/AMFfP+Cgvw322t5+0PqN6rMv+j+MtBt5On8IIjgf9TX2Dcf8Ivew+VJomr27fx+Vc284/JwDXJeJvCfg2+tVhvtEivYJfvpf6JtKr74zV0szwE9Yyj9/+YSotLY574Y/8F/PjhbzQw/FTwNpuowt9+68POsLN/tCOUkf+PV9RfBr/grl8FfjJNb6LZ/EKz0fVJvuabrkP2SWRv7oLny3/A18b+LP2Nf2dfHUjyaf4SisLlvmll0S5aB/94hMbvxFeSfET/gnv480WOS4+GfjO11mHyizaXrKeVKy/wB3zEBRvxX8a741KNRaMj2dj9mdM+NniKZUmW/t5on+ZXRFZcexrct/i1rVwu39wx/3Oa/Dv4W/taftTfsheILfwvrFzqNrb/dXw94jRpLS4X/phJkjp/zzY4HZa+6PgD/wUy+DXxUs4NP1mw1HSNb2L5thsWSKRv8ApnJxu/3TzVT5YRcnokQop6WPue3+ImqPHj7Naqq/7Df41bi+IGpNjdZ27bv9hq+fNP8A2ofhvHt+1JrMI/i36U7fyrbsf2mPhHMylvEk6f8AXXTZl/pXnrNsrbt7aP3mv1apb4T27/hPL5JV/wBDt2RfvJ83zfjUieOryZfM+wRIPvf65st+lePQftFfCW88yGLxh8yfK/8Ao0o/mK2bX44fC+4xt8YWo3f39y1f9p5ZzJKrD70HsKnWJu/tGeLku/2fvHlvLYR75PBmqKrCXOM2koz0rk/+Ce3iaDTf2P8AwjayWjsU/tDDKw5zqFyf61S+OnxL8A6h8C/GNpZ+NNNllm8K6jHDEl2C7u1tIAAO5JNc9+wv4z8IWP7MnhfRr/xVZQXUf23zLaa4CMub2cjP1BB/Gv0GjisJLwwrSU1b65SW639hWPnpUn/rRBW/5cy/9LgfSzeMLE7f9Dl3f7609fFFg27zElH+x8tcbDr3h+4b/R/EVk/+7cp/jVhb+xZtsepQEt/cmX/Gvh1Ww8tVJHvOk7bHWN4q09V3R7nAf5tifdqZda02Rvlmdf8AgFcg1xCv3ryL/vtd1SxyeY37udG2/wBytV7NvcORPY6mPWtJMat9p2j/AG0bdUsd9p8xbbeRbV/vvhvyrlN10p/i5/3qeq+Z80n3qdqa3FyJPQ6k/YWbatzA3/A6ZJZ2rN8s0X/fa1zMz3C/NG7YH3qRbhtzSD5d1TyUW9EK1jeuNPh+7+6P/A1qnJpMM5aNU3bPv7H+7VWOZiv381Ism37qbaylRovWxfKVrrwy5LbRisq68K6oWbyXf/vitwTMjfN8p2bqjku5
lXbv2r/sVzywdDqNQ1OPvNB163/5dnYfxVl3FreQsZGtn5+/XaXmoXEY3LNXMeJNTm8tm87G3/brlqYKl0Y+TzMeRpNvEL1SvJ2Zdrfw1l32q3zTMqXkv/fbbapT319/FePj+P568+VDllZMFRb1Y3WJFX94sfymuX16GHUoza3Sbv8Ad6r9K2nur66b/XOF3tUMelXOoXkemiZFMsuxXdN23P8AFxTpYabd0HLZ3uef6hoElrbzabJDFf2U0W2WKWHcrKeqkHhq+Wv2oP2W/wDhH7qXxh8KdBlfTpfnvbVdxa1bb/yzT/nn6dxX2r4q+Hvjbw/qhtVubB4ygbf5L/Nn+Ic9P9qsO+8J6wrCX7Tap/f2Wz/Nn8a6KUquGmXfSzd0fl+t438T7l3/AOf/AK9TrdM0jSfdr60/aC/YVuvG15c+Mvhj5EWsOrPPo33EvnHXyj0R8dFPBP8Ad+9XyVqFjqGi3k2lapC9vcQStFcQSpsdXHVSD90ivbo1o1Foc86bi7rYWNbnUJFso0y88qon4tXqOn28duVhVP8AZV19q4f4e2K3HiK3kk+5CrO/yZ2t0FekMLGOZNt4mxF27k969bCQsmzlqSvofp+3ia1X5vscv/fa0v8AwlVnjd9jlz/wGuR/sxWHzB6d9l27m39fvV/OS8Q83vqkfTf2bROqk8XWa/8ALtcbv7rJSf8ACXWOd7W1xn/rjXJrZyIu5XahbWRflWTNNeImbt7L7hf2dRbOrbxlpa4/0a6/781X1Lx1oMdqZpvNijX5pXlhwsYHJYmub8m4ST5ZnqtqlhJfabNY3HzCVCvzVo/EPNXul93/AAQeXU+hJ4k+J2h3l59q0FJXbytr+amxcj/0IYrgvE19qGrX32q+d33r97f93/ZxT4YfL3Kycr8tQ3kas23fXkZjnmOzWX71q3ZaI68Nh6VF3itSjbwsMqycfdb+LdWbfRq+jpbqmGtbp4vn/ujpW/bqrKo67aoeIGWOF2k+VHi3/wDAlbn/AMdavKV5PQ75XtqYtvHsZdvetO1RlAVqzbfUtNfH+mRf+g1etdX01fvXiK1bbqxhs/I1beONvmrlfj3GF+FOq4GMeRx/23jrpLfVtJztW/i/77rmfjtqFhcfCfVY4byN2PkYCtn/AJbx19v4W05f8RNyN/8AUZhv/T0DxOJJX4exn/Xqp/6QzY8Cssfw/wBCIdf+QPa5/wC/S1PJ80mNny1neCLuBvAuiRm4iwukWwILcg+UtabTR7NvnJurw+KIv/WjHX/5/Vf/AEuR05c1/Z9H/BH8kY+vqBqVgB3m/qtO1JVg3Mz4VU3Pv/hUUzxDKi6hYMJgdsxJI7crXzb/AMFCv2krrwrpKfAzwLfytq+t26tq0tk/7y3tnbCQpjnzJj8ox23d8Z68Nl1bM44TD0lq1LXsud3f3f5HwnDtaNLjHiCX/TzD/wDqNTPNP2nP2hvFX7Q3jpPgZ8FYZbzS2uvI/wBHfH9qTK3zuT/DbJ/ePXr/AHVP0F+yb+y34d/Zt0eL4ieNPDd/qupTRNHqXiGyhiK2o/5424kcbU9XGSe+6qv7CP7Kfh34J+DX+JHxQtlTUbm3X7f5XysqffSyiP8ACP77j/4kDpfiX8SNc+KOsJbwSfZtItPksNNt+I8Dpx6Dt+dfqK+pZJg40aS0X3yfd+R9U+erO7ev5eRteOviVD4j8SPefDNNU0W3fC3F1FqsyPeYXHzxh9nHbjNYVj4fVWaZi5Z33MzPuZm9z/FT9D0t0ULs3Guhs9P2x52V8Vmma1sRO0np26HdSoxgroLPQ5LzwpcyLv8A9Dv4m/4BKrJ/NVqqunsrei12HgXTZNQOreH44d73+hz+Qn/TaLbPH/6A1ZK2O5Qyp8p+b8+leJiK8pRi+6/I3tZtGdHYqvFWVhZVSNf4pUq3Hp7f3Km+xssa4T/lqn/oVc1OTdTQq1j8yviT8afFfh39pbxD4k1bTWt71teuEigvIXBjRH8tMZHzAhVIPT5q9z8JftmaHp+myWVj4burbVbizdLe9urN7dVf/pmHA/77r7Qj0HS7qbzLnR7WV/l+eW2Rj+ZFfLH7fUepL4+0e88OzWUV9pdqrWFnPbbUmErf655MbFEWxm2YORLmv1DI8/ebTjhoUeVRjq77WstrHmV6DoLnb1Z534muvFWoaHJ4m+LF5FoqSW/nxWDPvuJE/wCe0mfu57Z/KuP8C+MtHvNSuNN0mz3Ru8KrFF8rXDM33nPVqh8J2/iz4/6tL4f8bX/2lo712uvnVmvJkXEagj7yKfm44+7XFeJPBvxE+A+k23iy8SXdqUVvdNKqcRuJeFB9itfRLDxTdyPrErJIZ8XPFmnwzXOpWuzzXv1ZIl4+VDgr/wChUz4VyWbakvxQ8VeDE1nSrXUo0n02f/UyKedhPrivOPFGqTat/pkj7hNKzN/wJs19afDbwTptn/wTP0v4hWNtEJrnxHdLqjRQ7pZG+1NHHn6Iqr9K6qaVGGv9XMJNylc4D9oH4W+DfEEkfxI+DPiH/iSaluaXTX3I1mR1Qj+HFePa1ot1penx2Vumd+1U/Gu7t7PxNdK8Ph+2uHSb5Xt4kY/kPWvUvhP+xH8XPij4i0GHXPDz2dhdzwyuzdViPI/3Sdv4VcUovV6CV5aI+c7rT7zQVjXULZ0LJu2Om2t3wLfaDaatE3iq2ZtOm+WX7P8AM6/7QHt96vuf/gph+w+ul/DO2+KHgXRH83RbWG1v7W1hz+6LY83juO9fAFhDqk1ymn2Ns8hZ9vyJk8/xCkpqSdhSg07M+x/BP7NPxc0PUPDfxU8K63BrWhfbNPvNI1vTvnW6tvtUTh8fwkbWDDqKt+FPjB4L/Zx/bC+MvgLxfYX6Wuo+NJrywS1tnneOWbZcbcL91GM7MOwr1v8A4I5+Ktc0XwX4i+CfxESWOCfUkuvD9le7cKzrmXy8/d3P8xX15rm/2x/AHgjVv26tL1/VoZftHiDQRb3EEWxY5LmzmeP945+6WSVfrtrwcco1VUw9ZNxcXtvpZ/odNJ2tKO56r4N8Uah400/+3F0r7HZyf8esTOrSMP7xxx/wEVduA63i567P8a0dF0+Sz02K3MNvCqIqpFB0VRVTUsLqSf7n+Nfl2H5XiqvKrLknb7j5vxDv9Ry6/wD0G4T/ANPRKV1ukbbVZreRG3MflrQK7nNSQ2fnYVa4absj9ClC5lpbyN8qpSNayr96t9NNXbuqGa2WJvl6V0QmuhjyGL5MzR7fvbarTWcbNv2fLW20fy96g+z87q6YCasjBuLWNNrbP++6z9QuLHS7ObUr4YhtonllduBtVcmumm09punesDx5o1rNa2Hhu+mSKDVtSSK9dk4WzjVri5/8gRSLn1Za6aS9pUjTW7djty/CPGYynR/ma/4P4HhnieM+MviV4N+Ft9YPNHYWr3Grokzobe+vWa9nYyDhCn+jhd5TPQHO1W9F+Gtnca9pNx4yukRpfEF497uiRlDQ/LHEwB5XMaK3P96vLPAmta94q1bxJ8QLHUntLnxPqRtLWJXVpVnnlyIpEIy
nlwMuMEZESkhhtr6X8O6HZ6XZW+m28KpDbRJFEv8AdCrgV9BntdYelToLpufWZ9ibYao4r+JNRX+Gnv8AJyaa84sxLfwrcTfdT5azviNq/gz4S+FZfGXxA1+Cws4/li3/ADPM56RRp1dz2UV03xg+Mnw/+APgGXxx42ud3zeVp1hFt86+uCuRFGP/AB4t0A3E8LX55fFP4nfFL9qj4kf2pq7eZM2U07S7V2+zafCf4E9v70p5c/3RhBjkuVVczl7SV401979P8z8+xNeFFWWrNH46ftSeNvjFfP4b8Owy6do5l2Qabb/NLcenmFPvn/YHA/2q6b9nP/hZHwpWKbXrPFo8qM0EsPnJCD0WcYIxn+H8M12H7O/7J8MJTVnvE3h/KuNR8ne8kpXPk26fTq3Qdy3Svqjwn+z/AKDb+G30PxJZo9ncRbZ7KJNpuG/vzuOXcbuB0Hy4+7X1uNxWXZVhlB2j2S3fnbqVk+LxFDGKe8Xo9en+fVHlXwh/ZPutN+Klv+0B4F8K6dL4Z1W8+3vo1vvms9nmuLjTbgSBeA6M2MH7y43IuD7wuq+IIfB9n4Dj1i4/sqw1K5vLK3ZEVofOOfKBRAFRRt+UAZK5rzv9nPxFcfBXxx4i/Z58ffar63a3ll8ORRJ8t55nMcpwctI+yOLcFJDxKeBXrmk28PiTR49Xhs5bcyr+9tbhNssMg++jj+F1Pyla+ez/ADPF08LTnh5tQlu1/XY+hzXKoRrPExScX1S01Wj+a387nLWum28cj3C20Syu2532fM31PWnzRs0bL/s7tn0bNdHN4eG7dVW40GbdtVOqba+HeJqVKqc5XPGlTgtEUljj89l39HNaFnZ28jBZH4qCHTZmjSbsyq1WLe1mVvmf5QlZ1ObnaM7aF1dDsWtZZt+7ETt+S1Zs/BNg1nDtd9y26f8AoNVpobgaTO2/70TL+fFdHp8c25Y2/hqG6sKV/P8AQSUWzm774a6LfNuuEQuv8ff8xzVG++GOqRjdpeqrMqJ8kF/835SDlf1r0SPR5JCsi1BfaTNeXCaOv3XXfdbf4YQ33fqx+X6bq68Fm2YYWS9nN2XRu6FOjTqbqx4t4m8K+GfGWkyeD/iZ4Dt7myu12rZ6vbI6yf7SHo3tyDXzl8YP+Ce/ivwqsvjT9m7VpdTtkbe3hm/u8XMYHJ+zTv8AfI7QykH0f5QK+/8AVfC1rq1m9jqlnFcQN9+Jk+X8P7tcN4i+HfiDwgr6voPm3+nqn723dN80I/8AaqD8xX3eS8XYfFTVLEe7J/c/K/c46+DklzQ1PkH9l79ty4sL+PwH8ZLy4lt4rr7O95PCyXOnuvBSdDhuO4I3ivtfRfBtn4i02DXdB1iK8s7uJZYLqB96SIejAivnH9pr9lnw3+0JAnjLwveW+jeMILf91qjpuh1BF6RXBHLoPuiT/WR/7S5B8t/ZF/bE8dfsq+OJvg/8YtGvYdKtrryNW0aX55tPdv8Al4g7OhHPycSJyOcrWfFHC6x9GWKy/Sot49Jenn/W48Ji3SkoVNV+R93x/DPUsbVmzU0Pw51Ldtrs/DesaP4o0W08R+HdSivtOvYFntbq1fck0bLkMDWlHbtndsr8WlWxEZuM201ume8lBrQ8l+JXgLULP4da9dyKp8rRLp2ITsIWNY/7Ovg6+1T4O6PqEUEZST7RgsuelxKP6V6r8YYJE+EXigt28OX3/pO9YP7I0O/9nzw82zOPtf8A6VzV+s4atW/4gXin1/tKh/6i4g+XqwX+udO//QPP/wBOUxF8A6sP9Xp8RVakj8F6xGd32Tbj+5xXpUdvu+8lTQ28art2V+TqtiUr3Z9Koq55g/hnxFGv7lJU/wBtJmH9axtS+HvirVAI49b1m2Kfx2usXET/AJo4r237KmKljs49u4JXRSzTHYd3jNpkzoQmeM6HoPxI8MwiOz8Z+JZtv8V5r1xM3/j5NX/+Ek+OVrtjh8c6vs/6+e/4g16tJYwHbiHb/DUcum27fwL/AHa6o55mqfO6rv6sy+qUX0PLofiF8frXcF8eap/wN4n/AJpUi/F79oyBfk8c3T/9dbOE/wBK9Jl0u1/54pVaaxs1y32ZPm/2K0p8S5vB29q/vZH1Ol2OFt/2gv2hrY/vtet3Zf8AnrpUTfyxVgftSfHKz+aZ9Lf/AHtN/wAHrfvLazVdzWy/98VzeuT2O4osKVvDizNlL+K/vJ+pU7aIvQ/tffFaNvn0HRJF2fxwyp/I1Lb/ALZHj4Tv9r8E6NLDs/gvJkZW/I1wN9bwzSFtlU5NNh/OuyPGWbL/AJeP5q5lLBQPTZv2vtauF/0j4e22f+mWqt/VKytS/aYk1AHzvBMqb/7l+rfzFcF/Z8Kr8r0xtPTPyvTXGmavRz/Bf5C+pRZ08nx602PH2rwrfhm3f6p4n/qKlb4yeHblVkbR9SiDf34V/oa4i602JuGrPutPWFtyvW8OMMZ1S+YfU0j0qT4meHvs6XU0N0kTbvnltmG3FX/APxF8H+JPFFvY6XqvnTJE8vleSylQPqP9qvFLyGZmb989VtJ1DV/DevW+taTeeVPby7om2bgrdD+nymvVwnGVaE1zR00v3Oepg7rfU+1Jrex8Saalrqibf4kb73l5/wDQl9Vqpdfs+6hdQrcaTfwPG/zL6Vy/wV+K2m/EbQYpN6wX0e5bqw3/ADrtbG8D+4ezV6r4L8Sf2HfPHM7vBMy+an3tpH8Q9/Wv0/B1cFmMI1VrF/1b1R5E+eldLoeX+IP2f/iFCytb6alxt+VXgmXdXzX+1l/wT/1jx9a3vjbTfCWo2niNV815VhZo7hVX7sgAPP8At9RX6Swx291GNQj2vC6/w87s1ait7hl27HCfwJ/Ft96+hp5HRupwk/kYLHOLs0fhf4D+G+teE5L+LxdpkunXMUq272t7C8ZVlXO4bwN2d3Wt21t9Jm0tJGv7f5037X+Xbmv2rvvDtjquG1DR7eYo25Xns0dlP97JFULjwXosjYbw3Zf7TJpsX3v7x4r2qOXwjDluczrwcm0j51aH/b/KmtG5+6a0GtZc/Mn3qja1bcVxX8Q+xbPvOaxS8v8Ah560jQj73SrjWci7fkprWLfw0lSdyedt6FFo1Vl/iqtJIrXCxsdoH+x96taTT2K71qCbTVZvM2PxVumlqgu+p594mtVsdcn8tNqy4f8AOsa4krrvidYpa/ZtT+71if8AmK89utUa6m22abU/v/3q6aK5omtOexo2t4F3LH81ZO2SHxxHNccpeWuxf9lh1/8AZav6f+7j2rTNahVrVLj/AJ4yh/8AgPeuimuWZ0P3tTGkgWO8eNv4XZWq1b28J+8iNRqVrt1SZV6F9y/jU8K+WoLVfNYwa1Gtb2+3aEztrk/jNbRr8NdSlCrkeTjH/XZK6yZt022uY+NJA+GOpKP+mP8A6OSvtvC2UpeJuSP/AKjMN/6egeFxH/yT+M/69VP/AEhmr4DtIW8FaOx286Xbk/8Afta3FsbdV+596sn4f7v+EI0bP3f7Kt//AEUtbTM+/wD2q8biht8S47/r9V/9Lkb5c7ZfR/
wR/JHGfFvxJpXw+8L3fjbUUY22j2FzfXSpHljHDH5jY9eFIA9q+SP2Ifhn4g+P3xm1L9p/4lWCTW1hqjyxQS8pNfsv3R/sQR7UHvu7rX05+0peNq/w28U+FtLTz9SXwleOLZQSWM0EqxjA/vNG2PUYrI+Gug2/7P8A+zroXgexT/TY7JYtn8Vxdy8ySn3JbmvvuFKcKOTxq299pq/aPM5M/O8g143z7sqlD7/q1M1fiN4uuvElwvheyuf+Jfp+5Lh0+VZHLZ2/+zGue03RY2utyJxVmG1/s+zisQMtF/rX7s56t/31VzTbdtytXiZjjp4zEN9Ft6H6JSpxgrPdmrpulqqox6qv6Vt2el/Lv3/LVfS7faq/J8xrXt4xbsP4Y2/8davn8TJS33OmPu6l/wAEzQ+H/GWlatv4hv493+4zbH/RqztY0H+ydUudHKf8ed1JB/wFHYD9NtXZrPcuF+9s/wDHq1/iDDHeeJDrVunyapZW16v+9JEu/wD8fVq5m+bDtdn+f/BsVZOehya6TH/fcL/v09tGt/JCtu++i/65h8u6r8cLL8tOulWO2Ld96f8AoS1NGbVRXG9hjaTbxQOqvP8AMu1Nszbtx4GK+Pv2wvFHiLRfil4k1VvGyPoNrbrdaXYRIH+ztbxeRuJcfxlW5Bx9Cua+vte8XaH4B09/Fniq/itrDS4pr+6uJ32JGkETSBif4R5nl1+dvxRktfEljcjUb+C5uby9Et0mnWb20M0h/eS3BRySgkmaRynfr/E1fpnA2F5MHUxD+07L5f8ABbPLx8r1FDsav7AMllpc1/4w1rSvPmt4mni818o1xNK2xAPQffLV9CftJeDfhz4o+BY1DWLmyURvK0Vu0yAyM+4HA9vvV8o/DvxV/Zcaafp9yqLNdJvVE2mQhsDgfkF6V7V+1dofgFv2d01K78PWEl5aWCrZXEsO+WS5l6sCfuhRX21RXhc44O0rnwrqH2e3szpaz+ZsuD8/+yGr7w/4Jgw6L8S/2afEXwd8WN5lhL4jmiiib+GWSJZUYf8AfVfCk2n6XHvWW52sn8Neqfsq/tIX3wLuLi3hf91c69Y3Wz/ZX93J/wCONU14udJpbl07Kep+lH7M/wCzJ4N8H+ODZ6lpsVzst/kd0UhSO9fR3h3wLoul6os2mWCRRI29U2fdP96vjr4Z/t6eC9Hkn8Zt/pKb1gZH9vSvsn4Y/FbwR8Vvh9p3j7wjdbob9fo0bhsbSP4a5aDdWNpbo3l7r5o7Fv4iaTb694Xm027kxFIn79dilZB0OQa/KL9pa1+Gvwj8SamfA9ja21zDqMkF6kSKAvpj3PtX6v8Aiy4kt7XbDN5off5rf3W2/dr81v8AgoR+yDrnjzxjffEr4W6lEktxFt1HRrh1jaSROk0R6ZKcMD/dXGO/TTlGnVd+phO8o6Hz34P/AGhtc8P+IrXxNH4iuNN+xyq8U8UzZj2tkN+Fe6ftleL/ABpcfEn4bftHXlt9j/4WH4GsPEcGmtuZbG8RohdxHOPlbfC+MfxSZrw34cfs46hql5beHviIXsbF1VdR8rE0zJu/eKgQ9SON3brX11/wUo8QeA/FXwh+F0ehpaxQxfarO1tU/wBZbwiJAjcfdUbVBHQ/XbWOKhCTg1rd2+9CinFXR6poXiDxFfaJaahHZ2f+kRK+3Yy7cr93r2qWWfVppxJLbxCfGEUZwR/nNfOf7Kv7Sni3wV4psf2c/jtbSxXE8UX/AAjWqXe0P5bqxjimJP3GCt5cx/u7H5UF/pcyLNqsTL08s7f1r8vxGWYjLcZVhUjo4zs+jVv60Pm+Pa1OtgMta/6DcJf/AMHRKPma8v3bC14/22o/tjxBb/8AMHtWH/XZv8K2WmTy9vyiqUzbm+WvnoXS1P0pvoZk3jLxAo2/8I9bvt/6eW/wqKTxV4gmXnw397/p5/8ArVqraxtgtViO3jXG6OuiNSCVmiOS5hR6r4gn+ZvDyf8AALn/AOtQur+JoZHb/hE94/gRLxVZvzGK6WPyV+UJtpH8tvlVK39orbCcFbU56PxFrSyfvvAt1s/vpcxM1eW/tKePJP8AhH9Vb7G9q9tYJpdvbz7WLTXr5lcbM/MltE3Tn5sV7kFhVPMmdEVF3O7vhVUdea+XPih4ktPGHjjwrp0xSS21h38QXuyJ3aNLtvLtmJQEq62sHHGA865+9XtZFRVfGc8lpH+kfS8OUJqdXExV3Ti7er0RofAHS7Nrzw0sNg8ttFZPrd40Fz5x891aCJc/88/mmIU4wf4Er1/xp8SvC/w38K33jjxYl1aabp1v5s8vk7m9lA/iJPygetZnwH0eS+s9W8aTTS3H9q6o0VrLOiiSS3t/3QZ9gA3F1kbgAfNxXyp+3v8AtBn4n+Pv+FU+FL520Dw7clJ/I+ZdQ1AcO3H3kiPyL/003d0WuqWGlnufSpx+CO78l+r2POz/ABDw3JSe9OKXrJ+9L58zafojzz4ufE74iftW/Fj+1JLZxv3waNpe/wCTT7UNk5I4yflaR+52gZ2ivpT9nH9nGz+H+g23irVvAGqa3p9zlX+wPFDJqE68bCXOVhHcpkgdOd1c9+yX+zL4ifQ7nWLHR7ea4jWKfWZ72ZkhhQt8lvvHOT/dHOefSvtfwbpPiD+xbWbxd9gS9it9kVrptt5NtYw9oYkJJwB1JOSf++a+szbMcNkeCioJdopf1t38z5CjRnial2Y2i33hmzuIte1TSktb+WySD7PZ6awSzhX7kKYHRf7x61rf8Jd4fb7tzLtH962df6Vq32j+dsurd9lwnyrLs/8AHT/eFMs42mV/OTypU/1sX3ivo3+0D2Nfl+Mxc8bWdad2/P8ATyPYpQjSXKeN/tTeGPD/AIw8Iw+OPDc3na34czPBbpC4e8tvlMlvnGd3yq6/7a1s/Bf4tf8ACRWq/ELULy1XSdUis7XWb37Z97VyrD7SY/8Alglx8qgAlN69ctXqCWEb/Ma+d7jwb4V+B/7RkOh+MLNn8E+LZ3uLBdmRZ3XdR86qphdvOXOcJuCp8te1lNanj8LLAVuusfU+wyPF0KtKeFrptcrWiu+XeyXdP3k+1z6Cl8Q+F2bb/bdru/36j/tnwy82V161OP8ApstM8Gapb+KrO7hvNM+zXen3TwSxOn+si3fupkyM7JU2uufoeVar9xoNmzHfbJ/wNFr5WtSeGrulUVmnY8HGYOphMRKlPdP/AIZ+jWpnafNpP2ERrqVv8krr/rl6B2xRI1irBVvYG/3XWk/4RuxkkuYfJXif+5/eVTVSbwhZq24In/fFVUa520ec4pqxdupoFtfLW5T53Rfv/wC2tdHYzQtJuDp9/wDv1yS+F9PaW1jZF+e6Rfuf7xrobHwRpFxt3WsTLVTlalHS+5hy66M6e1vLextHurp/kiRmbb1b/ZHufu1paHoN5HAbq+g23Ny3mzp/DGegQeyjj67jXK6b8OdD1TWPs6237ix/1ux2/eTHovX+FefrtreX4VaNt2x/aF/3byVf61g5Rpxst2XGLW7NtdLbd/qfm/3KSOxO7bs6Vmr8N1WPbb6lf
qP9i/l/xqtH4J1C6m26b4h1QQo/zXH29trEfwp/e92rnTTk9LeZavfQ5H4pfDOazvH8TeBbB3eLM+qWECfKuOfNj/6aeqDqOfr4J+0x+yroP7Tvgu38VeCb+1s/Gej2/wDxK7pvkjuImbP2eUj/AJYu3AfrDJtP3WYN9aR+A9ct222Pi3VIU3s37q5XqfqK8c+Kcug/BH4kaba2/i1Jb7WXleXTZdrvGx6tIifcjk6cgc9K/QeFeIJ1JrB1G3b4ZPsuj/Q4sTQ0dRfM+cP+CbP7YmpfB7x4/wCzt8ZJmsNHvNUezRL/AOSXQ9V37DC4PCpI7bT2D7SMhwa/SiGzXdt2V+Z3/BRH4C2upwH9qrwDZupt4ooPGmnKnzzW3yxJe5H3jB8scjf88fLcn5BX0n/wTl/aV8eftCfCF/D+peOVfX/CqxWt6l1YI8lxaFcQXBPBYnbsZv76tXJxxkNLl/tOgtHpNLv0f5X+XW5WX4lxl7Gb9D37402pX4OeLmHQeGL8j/wHkrnf2NLUy/s5+HGMWQftnP8A2+T1Y+LsfxGi+Efik3uuWUkB8OX3nRjTdpKfZ3zgg8HHesD9ke6+IFv8AdA/sWXTfsn+leWtzauzD/S5s5KsM85rvwrh/wAQMxOn/Myof+o2IPNrOT4yp/8AYPP/ANOUz2uGy3cMKnj02NtqVzMOt/E63/1mm6NL/wB/U/xpw8UfFKNvm8K6NKn+xeTJ/wCyGvyt1aK8j6VczOqXT41+XbxTmt4x8tcw/jjxtG26bwTat/1y1X/FKguPiV4kj/13gb7v93UkP9K5XUoylc0akjp2jjH8a1FI0Pmfc+WuMvfjFNA2258GXQ/3blDWfcfG6zX5ZNBvELfw/If61rajNXMuaUdDt7iaELlvlrG1DWLePLK/51yF58WrO6X/AI870f7Gxazbjx9ptx/rjdL/ALHk1m4U29A5pG9q2sec7LHXP30ckjfN92of+Eu0Rm+Z5x/27NTG8TaO6nbeP/wOFqlU4xFzyIZrFQ2771QSWq/d31Zk17Ryu5b9P/Hqgk1bTG/5f4qUoxY3IqSW+379RuvcfjU8uoae33b+Jh/v1XkvrHPy3kX/AH2tZcjfqSrFe5jZvu1RuoQzf3a0ZJLWbbJFNE3+49V7hYz/ALVbRXKrlXT0ZhXVv90N8tZepp9ltZLhfnKRM2xfvNiumuLdW+VaoXGmtdTw2a/J511Gj/Tdk/otb0qnvJEuOnMZ2l+IvE3g3xxo9x4Pm8i5sYlluElThkPBQ+zfN/OvrHwH8SPDvjewj1DS7nyrll/f2Er/ADxt3/3h6GvmjSdDaSa51i4+Z724L/P95Yv4F/AV03hubUNH1OHVNLm8qWH5kevu8kzyplFl8UHv36arzPJxNBVpaaM+s/DN7NqVrZw2l5tSxv5n2I7fffaNpHTj7w/3q6WHUtc/tKXTby8uIniRWSVJtyyMPT6187+Efj94g0GPybjw3ZXK797sjsjM38q0L79tCz0+48mbwHcB0+Vk+2L+Pav0Glxjk6pczrOPrc5vqk5L4dT3yHWPETRlvt+dq/c+0/xbvu0y41jXLdoLhftHkzt96KZ3C4X7p+v6V896n+25o8gf+z/AF5v/AOWX2i/TH44/pXPap+2Zrt0rrDoP2RP7lvc01xxkSVliPwf+QLAT35Dp/wCw1X7u/wCb/bakbQ/R3/77athovm+tN2/N8wfiv50+rRZ73M0tTF/sORW3Lc3AP/XZqRtFm6Le3C/9tmrb2so3YpNv91P+A0/q0egkzB/sa8X/AJiU+4/9NmpjabfKnzXk/wD3+roJEf7zVEY23Yaq9i1sO9zz74naPdT6HBdPeSyxxT/OrP6rxXn8u5X2kV7R4s0v7Zo9za7PmMW5fqK8VvN0NxtZP9//AHq2pprRjpPUuWat95uTVmSPz4Xhb+JdtQ6b88eV61OzfZ18xm4WrS9650XZl6lHNHeK0iYbaN34VE0zFdsj1leMvF0n9n/2ppTvC0WpRWcu5A27c+zd+BqhDfeIWk/eakpG/wD54rXU6E3G/wDXQwlJX0Ojt/mb5nrnfjShX4YamM5x5P8A6PjrSs5NW+VmvFZj/sLWP8Y1v/8AhWWpNPcKVxDlRHjP75O9fbeF9O3iZkn/AGGYb/09A8LiO/8Aq/jP+vVT/wBIZ0ngGDPgbRDv66Tb/wDopa1mhk/hfdurH8AHVT4I0QxXUW0aVb7QYskDylqz4i1bUtD0G91y5uYESyspJ2byfu7EY/8AsteHxRTb4oxqW/tqn/pbOzANLLqLf8kfyR4X8PPFieP/ANo74oX0d75kNo2maXbRhiVSOD7UpI+sjS/ljtXU6pdf8JB8UPJ8mKaz8O2v+qlTKtNIMc+4HzZrwv8A4J7y3F5beNfiZqNvh9S1C2aT1Yoss7/+RLiT869q+GMN9qfhe88VbF36vqUs/wA/XYvyJ+i1+gZg45RlsqcXrFRivWyv+TPzjhNOrxpnl/8An5Qf/lvTNO60db5XutG3OqfNLas+6SMf3sfxj3HTvVjS7XdtC1QaTVbe4E1vsDo25XV2Vlat7RZrzUl3NbQJcs/9/CSZ/i/2T+lfGc0akdHqfpdrO5oafH+8HNblnbx48uT51K/Mn+zWXZ2OsRttmtogy/eV9wq9b3WrRts+wW7f9tm/wryq6kpmsOVpotxxyRsLWR9wb/VO/wDEP7v1Fa+qWs914f0u+k6Q+dZ/7S7W8wfo7YrDa81CaLy5tNTaf4/O5Vh0Ycdq19L1y4uvCepadcWG6SxltrxF3/6wFmidh+DLTpL2sZJdvyKbSWpnfZe+PlqO+h/0Vv8AfT/0NafJrlwrbf7Ef/v8teOftBftOeG9F0mbwboN5nUrmX7PL5E3+pJ68j+P6fpXRleXYrMMVGnSjfu+i82Z1qkKcb3PPf22PjpqDeKP+Fd+GXV7S3t1W9ldN6yXAlWQoB0bbtXPvtFfLHi7Vrj/AEqG7ffK777iXf8AOzn+I/y29q9O8bakPEHiC7vrXYkapsiZvuwoF5x7nue5avPNS0FrzN81s8UMvzRb/wDlof75r9qwGCp5fg4YdfZX3vq/vPEnKVWbfcxtHkPh1rC8t5EEiSh/nrofj98Vr74gaHYaLC7JHbQCCK1XpgLy31J5qzqvw7mXw/b3kCfvJomlX/pnXnzfarPxII9QTds+Xb7iu2M1NNMmUbO55vrEc63j7vlxVT7Q0ijL4K/d2V23jLRYWut3k486VVWsLXPCcmi6fbXm/c0zP8v+62KrqFro9i/ZZ8B+IvG2kvdTb2tbe6/j4HqWc1+hHwO8SXXw98F2/hvTXYhkXytvyr/vV8BfsR/ES10XXG8K69vlivpR9gglm2QrIP4n/rX358LfEnh/xZrjyWNzbyww28UUSxbdrY++34142K9rTrtp6M9HD8s6Vj2W8+LWmeH/AArBe+LLxLeF9qy3GxnTn+HjP/fVeJ/ELwnovxY8TPJNqW/QkdZ3WJ2WS4cLkJn+FfWu7t9NtYfD974J8QfvrNkLWsrddnUZ9xXHWNrD
ZxqtjMrxQxTMuz/lo7/In5CuWWMrWsbww1J7mx4R+H/wv8J6DZ614T8MWVuL9F3S+SvmN/slzXyv/wAFBljsfi1ZalZpEkMVhEyIif6vc2dwHTBZfzr6B+LXjq3+HPgKzfzv3FjcQxSp/s7WzXxx8TvHGpfHb4jareNM8sNhay/Z93Vogudta4dT5/bSehzYjlilBHonwX8afDL9uzT4v2WvG3wmawtNOtbi6g+LjXKLdafqQTCNOjjZLbsn7toSTxyEQKDXoP7L2veLfhv8Jbuw/aA165t4/D/iSTSLHVdVtpYEntS0MNvLHJMA00EksmY5TnKOoJYqWPzH4k8JeM/2TfGei+PdFubjWfBepXFvf3thE6484p0kQcM69UP3H24OCoz9YfBH4gWnx/8AhfdeKf2r9Tu/iT4Wm1RmtLXRgUaXTY/Llgt2jiUNHGk+4zIxf5BISdjbF6MbDD5jg4xTupPlv110Z+c8eOVPDZe7arGYX/07E9b3NIo2vw3+RUS28m47v4a8F8H/ABi1j9mv4qH4F/Gu2ig8NaxO1/8ADzxNYaw2o6c2lTsptlguygNxbfMqiY8oWVHCjY7/AEDDqGjzLuj1K3c/7Fyh/rX5fmmUYvKMT7Opquj6P/g9z9YoVqdeF0RBQo3N8tSNNtX5UpWaz+8tyh/4GtV5pI2Yqr52/wC3XlqEmdEXFk6yfw92pdwX5Y+WWqcbMw/jq1DG25fkfmtVCUXdjb1KniyzvNW8J3uj2OpLaTX1q9vFdf8APHeuC/4DcfrXxX4q8TyeOvitrfi7wejWqXl0tnon2CZ4v3A229uoxgqGCqSv+13r6I/as+I0fhnQJtN07Uri01LT9Oa9tbq1dd0NzM32e3Qg8MGDzMc9NucZ2mvG/wBlDwPH4p+LOh2T22+20921G4V/m+SEYiU/8DZf++a+qwUY4XLPaW1d2/Tp+r+Z+q8AUZ5bluKzHEL92o3V+tr3v5PRI9e/aa+IkX7KX7L0Wm+HLhY9YlsotG8Pq3UTsn7yf/gCeY5/3a+Ov2W/hfP408WRa9LYS3aQXSw2EH33urt/4jn72N2Se7tk/drqv+CiXxcl+K/7Q/8AwhGgXiXen+Eol0+1WL7kl/Ntec/h+7jz2/eV7v8Asg/CHxN4J0eLxV4VSyRNKVbCDUb9N5+3zLl5Y4xw7gMx54G7+L7tfUZBhI5dlftamk5+832XRfdr8z8RzPF1cfjZTk7tt/NvVv7z274V+E/FngHR08D68+lw2lhKZWg05Gc3F4esssr/AH9o2gADA/2q9As7xdwSQ1zH9n3lmu6z1Jy38aXXzrIx6tnqpJ//AFVPZ+IPs8nl6tbvD/tffj/P+H8a+CzbFyzTFSqJ6bJdl/Wp20KaowUTr412MNvQ1He2P2hlmt38uaL/AFUq/wDoJH8QP92o9L1CG5hVo3QqyfI6/MrVooq7f71eG3KnI6Ermda3DSboJE8uVP8AWxf3f9of3gf71cl8dvhPa/GL4c3nhXEUd6n+kaNdOn+pu0+430P3G9QzCuyvrFrplkhm8qaP/VSqn3f9kj+IeootbhpmaOZPKmT/AFsX3v8AgQ/vA/3q3pVZUpqrT3Wpth6s8PWjODs0eBfs4/Gzxx4q1jd4m8B2elQ+DdDGla89q/72Ozim2B5w+Cy20jY3J8iJO31r6FvI33bdmNteA/HrSYfgL8XNN/aMsdKiu9A1W8t7Lx5pcqbobiFz5cm8BGLCSFmTp/rIof7xavT/AIK/ELw340sb7w3pOvS366JKq2F1dJsuLrTpPntJpELF1fy/kbPJeJjgbq9PO6VPGYaGPpu7ekl2a6/ofY5rRWcZesfh6XLGmop27ba31unp6NdjeXcuqTL/AH4kf8twpJYdzfK+7dVu6hVdSj2/Kr27r+TKf/ZqY0bP8q189OeifkfGSgZ0jPDqlkrD70rt+SN/8VW2upT2tr/o6K88rbLdX/ic9PwH3j7VmSWu3WLeNvmIgmf82QVa8Plb6+bVvvRRIYrX/aH8b/j90ey/7Vbt3jFvZL9Tja3Ov8NxrptvHaxvv2/elbrIx5LH6murtZo47f7RM6IiJuZ2+WuQs7yC1jV5n+98qInzFj/dArpNBsJr50uNURMJ/qrXqI2/vH+8f0HasElOXNJjSe6LKw3GtL+83w2bfcX7rzfX+6P9nqavx2qxrtVNq7dq7P4atRxbvrUixjdUTmuhST6mbrGk3WqWRsLDW57Bnb57q1RfNVe6pvBCk+uOKxvD3wZ+G/hXQ9R8P6L4Vgjh1Xf/AGvcS7pbm8LcF5Z3zI7+jE8dq6e5uLPTY/tF9cpCm/77vjd9P734VUa+1TUFxpNh5Kf8/F+jL+UfVvxxTjUqw+FtLydi7K2p81atoNx4P1bU/hn4ghS6gLPbypOisLq3lRgjEHjDxtsb3/3a+Nv2dPGd5+wL+3Gvh+/v5f7Bs7pLN2d8LcaDeMvlSvzhjBtXJPP7iQ/x1+gX7UXw9mbSLHx/Hf3E1xbXAs9Sl37R5Mjfu2AHChZP/QuTXw3/AMFDfA9nq3g7w98aobNPP0+9/sjXtyf6y1uWwM+u2dVA9BO1fsGU4qjneVctXVTTjL1tv8zxcTTdCrePTVH6a/GxI3+CfjCaORHRvC2oMr9mH2aTDD61zP7FpI/Zr8NkjI/0zj/t8nrzT9lT40Xnxt/4JsS6zrV49xq+ieEtU0PWZZDhpLi0gkiEpH+3GquPZq9H/Yzl2fs1eGx/1+f+lk9cE8NLBeDOMw894ZlRX3YbEa/Pc4uf2nF1KXfDz/8ATlM9XRo+o2Co5riNfl+Sqslx2V+lUb64bbhmr8ZqO59dB8quTalqkar5exK57UtUjYsuxakvpJG+89ZE0bMx3VjJXsTKTlqiteNHMxyiVQksLOT/AJdlLf7lXrmNv4flqtMjK3+r4qktDFt3KraVp+3/AFKLVeTTdPbc2xauSN3FVpj/AA/3qOULtlWTT7VVXanzVXmsbNW2hP4KtSNu+XfVaYsW8z71Ky6gm3sVmsbfd9zrUMlha/dZEqzIzLu+eoGbzJN3zYrGTv1LSuVpNMt5Pl2JtqvcaTa7V/coy/7lXpPlUbZKhmb5f9qlFO4Pczm0uzC/LCq7aguLGDdleKvPJ5lV5t27/arTlDqZ09pt+XfinaTCtqbzUm3y/Z7f5Ed+Pn42/XH/AKFUs33WZelNjuI49Fl/56XeqNAi/wB5k+Qf+g1vhqalK6CbtE37OaK+s4rq3TaCn3XTG2tSxtfu/wC5VC1tZLOOPyX+VPl21t6btuGXa+1v7rV6cvcjY5Iu5PbwxrGzMn3Pv1yOtLDfapNcLDs3P92uw1xlt7GG1R2/ev8AvWX5dqLz/OuSuF8xjI3+TXk42s+XlTOvD0+pj3EOPlqlcWu5v4K2JrdW+8nNV5oR/wAB/uV5/Pp5mzifTix7V+aomXb8v/fT1S/4SqNi3/EnuP8Ax3/GkPiiH7smm3vzf7C/413udO+jOQu/+g0LGn8Oaot4p0/5VWzuv+/NC+KtLVvmhul/7dmoU6K6lJS
Lki/Lt39aZGrfcqq/irQ+jyTj/t2amt4u8Oo3zXMqj/ri1XzUW9w5WWLq186IrlfuV4X4809tN8U3Vl/t7k/Hmva18Z+G1Yt9v3fe/wCWLZ/lXkPxs17QbrW7a70nzTcTWrebFLCyFfm4Y0RSnUVmJOUZGLp99HbvtZ939zb1qeZZrj95M/H8CVh6XMyzeZcNuJrdW4jkjVVf71bSi1LQ6VJW1OF+IEdvDHqOkfd83yb1fn/iV1J/LbV6zWMMy8NmpfiZp8bWsN020MyPFu2buHX7v51S0G486zgmZ/vxD/0Gui7nRTRztpPQ3tN+9tZKxvjYv/FrNUb2g/8AR8dbdiv8LPWP8bEH/Cq9Vcbv+WHX/rvHX2nhgmvEzJE/+gvDf+noHj8Rv/jH8Z/16qf+kM3fh4rjwNoZH/QItv8A0Utcp+1t4gbwr+zL4/1yH78HhS78r/fZMD/0Kuw+H0f/ABQWhOvX+yLb/wBFLXmv7ejSL+yL47jV/wDXaSE/3d0qjdXn5rTjV43rxls8RL/04zXCyayqk1/JH/0lHi37IP8AxKv2XtZ1nkPIL+4Zh/siRRj8EFe9fD3SBpfw10TT9/zJpcTP/vFcmvDv2cFaP9jy9CxYP9jagR78TV9F6PCyeHbCBoNm2wiXZv8Au/ItfR8Xc86SjH+f8kfBcHu3GOff46H/AKj0zCvIf9I4TrWlocDbh8lLcadJI2T83+7VrS7d4ZBu3rj+/XxnLUirWP0tWOisbhZLfybhNzBP3Uv8S/7P+0Ke1rJDsZ02K6bkqGw2+X81Xbe4aNDbyfPC/wB+J/8A0IH+E1nL39Jb9ykrK6IVjXp1FTQ3Wi6bHPceINSisrb7HMrXVw+xFJXKZ9i6qpWq3iTVNJ8K6Hc+Jta1JYdNs4mluLiXjy0C559/518yfEz9prWNW8Py+PNa0r7No4Zm8OeH9n+kXTHhLi4z90fxBfyr1MgyPF5hjLrSEd338vUxr14Uob3udN8cv2lP7P0O5tYLO6s4R8ir8vmXTfhyifrXx38R/iXrXjjxZb6rNMgtrdliiitflCgtyuf4j6tWf8YPjhrXjKYzXO+Ny+x06fMeqgD7vvXM2WoPDpPmfKC7fe/u4r9hyvLsNl1JQow5V+L87njVKsqjuz3TwnPp+valY6fsWW0m81vKT7zJ0Gf7v3a63xd8K1h8Nw3Gn7XYS7G2feVeqKB/D6183eFfidqXg3VEns5kfPyvu+b5T1/Svrn4K/Efwn8R/DM0LzQLceak6pv28x7cr9cVvXi3K6KpyVrM801bRWt9BRriZzNasE8rZ97K/wBK8w8ReF45Llmktv3zN8rfxZHP8mr6c17w9pc2tXt18yQzWCzpE6fdQtkN/wCymvIPHGgqrS3C23W4Rd+/+Lbw349K54txepo0uXU+ffEnmQ6okNxz5Nx8sv8Adx0qLxVq1ncNpX8UYgdZf94nNdB4v8NXizOrQ/6uV/l/vLurz7WJlkZEXhRu2V06MwV+pdt4Wk02O4s7zypo7p0idX2nmvRv2aP2ovE3wR+KVtrOr39xc6VKkVvqVqj5Kxq2N6D1HX3rya1v5IVe1+7vfzE+oqNpo5rj53/3/wDepSjGas0UpOLTTP2D0X4laF488Fp4p8P6nFfWs0BaKWJ/9Yp9Pf1rzX4W+KLrS/FdxoepO7Wc372zZv8Almw6r9K+fv8Agnz8QNejuJfA9xcvLZXefKi/hVu7D+tfTWteFdLbT23fu5BuXcnDqR/EK8CvhXTr+6tD06WITp3Z55+154ssV+H9/pTXO55LpWi/vbq8j/Zb8N6bq3iqeSRFf7fZzRMj9GI53Uz9o64W11CXRbjXvOlb7iPxuz/Fz91v9muk/Y98LrNpdtrkE3763vGWeL/PqKrHSVDLpXOVP2lc9/0/4W+G/FXwxg8GeJNOt7mA2H2eVJU+8nTmvKLr4fn9jv8AZr8a/Y9R1JoB4nhvrS60q5EN7bxyvZQeYrn5XkjKlgGGH2hWBy2fo2z/ANGt/lRVFYvj/wAL6H4/8MX3hLxDbiSyvrcxXCMuQRnIP54NfG5TmFahiWpP3Pia9NT47xHpRlgsutv9cwq/8qxMf4N+IP2Zf2q/Afg/4f8AxO8HRa74dsdcuNZ0awsP9HvNavJUYTqEyBZ2rO7POhwjv1Chir+XNrmpfsg/HS5+An7RXwntfDfhXWLqW6+H+pfalvrS3s2fCQpdhcTwqWVSeDEWVCNmxz4TdeDviZ+xr8UNSvvB0OpXnh7VrX7PqN1oO37ZDBvylxAedskbtuHBB+YP6103xK/4KAfCvx58HbP9m3xN8PdL1ywHi3+29X8RxXht/MlKYeXT4HObKSfcxniykZfzOvmlq/R6tDB5zg/Zzektn2fc+rdSph58yWqPsh/B/huSPzI9Ktdrr8m1Fwy/hVOb4f8AhuTPmaan/oO6vBP2T/iv4s8A6lpnwo1K/vPGfgjV4mfwN4006zlnEMY629yUB8op9xlfGw+zYT6WmkhgRrq4dI4k+Z5ZXVQq/U1+Y5nlGNyjGOjV17Po13R7WHxFOtR54/Mw7b4W+F2bzI7Bldvvuszhv0NWF+Gei7lWF7pW+78l/KrfzrgvFv7dv7LHgKdLK8+K9rqVw6P+40GF787l42nygQjHtk81jN/wUa/Zw1a3lsPDerawdSmsnawt5fD9wgafDYiLEYDD7xJ496iGWZrK0vZyt6GlKpTqVVTg7tuy+Zh/Hj9nfxZ4i+y+IvCfjzQ9Uj1iVJ/7Llv2W5jZGaJLff8AMspB3MFOD83SuP0Xwz8evhncax4j0DwwtjZ+H7WZvFcus6k1httrdWeSD7hfJ3bwwABG3nDA1598P/EnjjRdc/4SDwz4zv8AS7xLj7W9xZOq/wCk7s+bggjfnocZrp/iB+3ve/Er9l74jfDXx58UL3VfF+par5FhFqMKZuIJ3ignlR40G7ZDBgq+MBVx/EzfbRw9GtTShG8lZWtdbpX+67Pv+I6+ecOZAsvqzjKjU0TStLTW2mlloeI/B15vGfxLPjLxA7XMryz6tdNK+52dizjef7w34/4DX6XeEfCepeH/AAF4e8EabcvaXOmaat7qktvt3NfXK73U5z91Plr4U/YZ8D2/i74mada3k/lx3+uW1rK7dPJVvPkz7YXbX3vZ/GH4ULq2sW+pfEfQbfUP7ZKXlncarEk0Ltt8qJ0JypKbcCq4jr1KWAlCkm27LRff+B+UYGMZV+aT2K82l+Ok+74quH/3kRv6VFHZ+Po5j5niHd/vWyVuW/jz4d30iLZ+O9Dl3u6Lt1WFtxX76jnt3pZde8J3UazWvi3S3RovN3pfxEMnTf16V+bQpV+sfwPbvT6FHS9B8afbFuLPxIsLfxqlmiq31HSt9W+K1q37u8sriH/YsPnX8N43fhVW11bT7SYwnWLXejKrJ9pTcrN0Xr37V0ej+INPkkEH2+JizsNiTKWyOq9eoonGvfWN/VCUqZj2+qfEa8ZvsupaTMR9+J7B0dfqM5pl1J8UJpEkWHRhJE+6KVUlVl/X5lPda6yePQ
9UhW4klgkCpuSXzl3Ko6kODnA/vVXazvLdC1rqsVzGuPluplDrnp+8Hr2yOfWpUakZXirPzQoyprc4Px9p/jD4keFdV+HuveHtJe01K1eC9RJpVlWNv4046jqPevPvAPjD4g6bYjwbpvhe81DxJ8FrCZ7qWL7PBb6l4Sk/hjiAEksiPtmLckbWA4r3S6msbhkhuLn7BdCXbA8u3O/+6DnD57p3FeMftEQ+KvBF9B+0B8N9autL1vwt8usvpeyRrzR5GxLs35DhH3cdQksifxV7uVtOMsPWjaM/6/Ox9Pw3mcMNilh5tOE9LSva7VrO3Rq6+Z6NZ/ErVdct7DXrPwxBNbXMRltbi31JXSZHTjBx39atr461QyfvPB9wG2fwXiGvKtF+IPw5+BOtTfDWb4h2ur+HriCDWfBGo26MrSJJ88+lFOiXMD+Yvl7jxt+ld54b+OXwV8TWq6hp/i2IW58nyri4hZFkMjYCpn7xB+VvQ14+MynGUXy+ybSbV0tP61POzvC0cuzKdCMrx0ce/LJJxuujs1e5evfHFxfap9jj8N3sLPZ7WZHRmVC/OPc/dFbOneKljYQw+HrpMRfJEm35VH4/dqto8Ml1q1/NHCgmPkq2/lLVdrEIR/fw24r69a63RdHs4f3caZZvmZ25LH+8a4cTGNGKptbLbz/rQ8Re/Lcr6J4qtYZPtF1omqPcbNu9LPdtH91Oa6vSfiJpNvhrjStWi/7hrt/Kq2m3mjx3BtbWFrq5i+/BapvK+m89E/E10tvoeoahGjX1ytjE3/LCzfMjfWU/d/4APxrhk6jd5Fcq6Ff/AIXJ4FhmFvcXl/HMU/1D6VNvYfQA1W/4XJ4Zvm2/2q+mw/e3XFhKZ2/DZtX8cmt+18N6PYr5dnZqmfvt95m+pPLVI+k2Y/gqJVI20iUo+Zi2fxA+FtncJdHxVA9y3/Le83tL+ZHy/hir/wDws34fsu5fG2ln/euVH86sNodjK3zQ8YqCXwro8i7pLKJv+ALWLqTk7tGqUUjJ8ba34F8aeD9S8Mt4t0t/t9hJEi/b0zvK5RuvY7a+L/jB4dHxO+CvibwLqjoLjUtBl2oqf6u428MPo6K1fa1x4H8OySL5mm2p/wC3Zf8ACvl7x/ocnhn4ga14fbYkcGpXMEWzp5bKsicf8Cr73gjFSbq0fSS+Ts/0POx6Vk0/I8d/4JMfFWa++C/xm+GGoyLGL7wZ/b9okkuD532WW2nRVOOiRQsf96vt/wDY6kx+zj4cUOuR9ryC3/T5PX5h/sZIvhP9pvxT4Q1GSNoz4e8W2iRucgOsZuIcfRY+K/RL9l/wVo2q/ArQtRurcGSX7VuYEgnF1KP5Cv0DjGgsP4VYuol8eYYd/wDlriI/ofOYKSnxXTTe1Cf/AKcpnubM23avWqtxub/9iuGuvhrpqyLNC7IV+VHSZ8qv93rVebwrfW5b7Pr10mf7t4/+Nfzi5OWrR9veNrHY3W6s26+RsKdtcjNoevL80fiS9/8AAlqqXGleLNy+X4kvf/Amqe5k9zqbuRfMX593WqU0m3+OuZ+w+MLddqeJLo/9dZlf+YpjR+Mvu/227f76LTSSdmQ07uxvSSMzYWq8zszbaxVXxlG3/IVz/vwrVWSbxnGx3arEf+3NaE4t72J5ZJXN6Rtq1C0m1dq/NWHNfeMGVd00Dbfvf6N/9emLfeJlb51tXH+4w+b86htNWTKia7ZYdab/ALn41mR6lry8y20B/wB3dTm1TVlb5rCL/vtq55aM0juXWVm+XrUMgZ48l+lVW1q8V/m03/yN/wDWpraxN/FYOP8AddaSXK9wumOmjCtzUMi7fmFK2rx/xWUv3f8AZqtda5bqvy20tNyb2NYwSd2JMyrt/wBn+lJof2OO60nTbg/8upuld04aaRmPX1xu+WqFxrkN0z2NrZ3DTTROsW1OMleMn+Gutht7e7sY4flki8pE+Xoyhcbq9PBpxg2zlqtc9kaMcPyjL/ep0lv8ysr7cf3KjsWaGP7PNNvCt8jt978T/Wriw+Zt3b9tbczMeVLQzdYm8U+db/Z7q1NsEPnxS2zF5G/gw+flUd1xWTcXGoK3/HtF/wCPV0OuMsapCr4/i/3qxrhvm+/Xg4+pF1lGPQ76MWqd31M24vrpV3NZpn/Zeqs+oPuG6z2/8DrRmXd/urWfcKueu2uZJvRlp2Ppn+y4vSj+z4fvNV1ov7z4pu0bPmevddCK1OBO5Sk02EfhTH0qNflatDy/4s/8Aprx+ifjUOlT6mnMZv8AZMLf8s/vUx9HhP8ABWp5K4x3pPJCrUewXRBzGS2ir8rfLXnfx48NrHptpqlvAuIpSkv4rxXq7Lzt9a5/4k6L/bXgvUbNU3OIN8X+8vNNRVJqS6Ccup84zXXkjcvatLRdQ850+ese827tq/xVJpkklu27fhf4q9FU7xuaJ3ZpfEKGG80FlndNob+VcLovjjwjo8kWg3WsLFcIu/ytjlmQt94fL0rT+J11qGvaWnh+xm8qGZm81v4pMdF/2VPeqVr4dkutN0eW0SJPsVw63C/3ojFkKPoa78LRpRoPnfX9DCpKTnZHTaT4y8OyfL/aXT+/C3+FZ/xl17Sbv4Yala21+rPJ5O1BnnE0Z/xq7p1nbxt8ybqzfi/HEnw01Lb1/c/+jo6+s8NVQ/4iZknL/wBBeG/9PQPJ4h51w9jL/wDPqp/6Qze8B+NfCVr4I0i0ufEtpFJFpVusiSPtKsIlBFcJ+214g8M6v+yj470+z1u1mmk8Pu8Vuky75GVlPyD+Ku38C2ts3grR2khUk6Xb5Oz/AKZrWR8b/B9r4s+D/inQVs4me58PXaRfIu7PlMRj/vmvKzedGhxniKnbETf3VGdGEu8qpL+5H/0lHgn7M2opd/sj30IX5YNKvVz6/JIP6V9GaPqmm3Wh2Xl6lbszWcbOu9dy/IvavlX9irVW1/8AZj8R6TGcvDbXKD5f+ekUjD+dfSug2ul+P/gXp1r4Htrrw1e3WlwpFra3631zasvBYJJGImJ29wcV9PxRQhON5yUbS6+a8vQ/P+EJP/XDPWl9uh/6j0zUnW15ZbmLH++tWLN7df8AV3KFf99aPDvg/wDsfQ7bSdW1WfVZ4YsS6jfpEs1w3q4jRV/ICpLnw3pob/jzVV/3K+Bc+WTjF3S6n6bFOSVy/a+TJj50/wB5Hp18LLT7WbVLrXns7eFTLPO9yuyNF6sd/wB0Viahpfh3R9Pm1fWPItba2iMtxPK+EjQclia+Sfjl8dLX4veJIfCOjXLWHhQXWxokfY+oY53yn+FD/DH+J/ur7GT5Tis2r8tPSK3drr+uxlXqworudB8YP2lrr4pak914eR7vw3bXvkaDZ3UOItWu15+0S8jfCm3Kp0+XJ527fCvjJ8RdW1a+TzLl7mWO4+eVvvTTFcFuOw+6F6DtWZ8ZPih/xWTaL4bufLs7WyW1tWt/lEafx7P7ufzrmrnxBHeWMVnDulkjdW83/ar9awOCoYGjGnTVkvx8/U8idSU3dmDrGj3ljeJJffNh/lT3NS3Ey/2Mkaup2ZZvxrY8a
Tf6HbM1zuKL871yUmoL81sz7Vr04SvGxjr0L1lb32rRpDa229v4FiTc1dZ4P1vWPBtxbqrsjCVdyb/4e9YXgPWNNs2lWS8aFj/H93d/s13ei+GdJ14qsM20SbW83f8AMoHX86wnzRdzSD1PWvD/AMZtN1rQdH1LVr//AE2xl/s663fN50IbIb8Q3NZ/jd4ZtWNrazfubmDY6b9w9Y2/CvG9chuPDX2yxsblpYHT5Nn8Mob5G/DvTNN+KmufZbe5kfzZbP8Ad3G7+JQ3Fc7jz6xNeZx0ZrePJluFFwyfO6tv/wB8cV5vr3huNtIi1CNOSxV/++q7bWPFVhr2oP5Tp+9ZZUT+7nrUOk6bZ3kdxYyPnLboqpNxjZkNLdHlWsafJa2aXXcS/wDfVY1tqDW999o2Kdr/AHP7y16z4w8DxzaWbfT498iN/D6f/Wryu80O8sdQ+zzQuvz1pCSaJatI9i/Z5+Mlr8N/iBpusQvusGvYpWibrG3R/wDx2vsXxh8ePCuteMLyz0nUkCXkUUtrL1RmZM1+ctra3Wn3EMmxtm/cn+9XoXhnxvrWny282/zfKlX90/8AEn93/CpnCEhqcos9K+PTazrmqNLqVn8hdtrbPu+mDXs/7AOkNJ4R1G+meXzftXlbG/3f55q18Mfhv4b+P3gdVS52G9tf9Hlb78ci/wAJ+h613/7O/gfXvhtotz4furC3E6S/6R9794Q2NwP0r5viGp7LLJR9Dqw6Uq6aPTbe1urqaGxsUV5pflSLeq/N9SQPzridQ+KemaP4E1rxvc6D9qhs5pbbybxfkkLlYkmA64HmK+CB8yntgnr5F1K7he3m02Ao+5W+f+E/8Bry3xH8JvF+nfBzxP4R0dZr3UNR1T7TbrMF3vGJId2DnGdkbFTnjj0r5fIFhZTcqjs9N+19T43xJ9p9Ty+231zC/wDp2J5avirT/Gmn28eu3koubWVbfTb3TnRZmZm++VGNoX+9zndg1i+MtZ+K3wXvbW/8S+DPDOtWdztltbrVPDySs3p+8GOce1UdN8Kt4b8STRfErwZqkMIt3ZIIpvsvmS+8h4x9OtXvFnxC8Lt4X/4R/SfGGqfZZdrPperp9pjjx1w55x9DX3LoVVZ4ea5ex+g5ZjqGFrp4impwe91t5nrvw1/4KjfGIeFW+FuraD4VsfDd3E0V1Boegql2oPHnQO52pIv3hlTnpWB8aP2MvjZ8dvgXdeMvC/xsbxdp+l34uLDwvb3Kj+1rMLv87HG2cFtoiclN6snyHBX5gsNTtbmH+2NIf/R2c/8AAfm4Yexr3/8AZL/aq1z4N+Io45pmn0q5lVryz2bmU95ox/Ef7yfxj/a2s2cako4mLxOvLs30v/kfr1XJspz7h+cMqjGE3Z6Le3S/T+rnz9a/DO60f4pW3wP8SXOkeF7wzrFdXt/cq8Fnui8wee6cKdi42g9WUZ+atzWvhr8P/BaWPiLwX8b7DxXcT+elxYWulNby2IRmTfJl24Yr8i8ZHzjcK+s/2qPhr+yBrXizR/jrqmpXFr/wlqH+0dL8PTBEvJQqn7c7gZVCNqFgRztD8rXzd8WtR+Bc2pah4F+EvgzVraOG4gew1x9SeXzJQv7xJEkJLJ8y4YZGa0rVuas6Lk1aN9ErP57q5+f5LkuY4PNqMquHl8VtU+jtf5PUo+HlutP0WS9sdKnvXRHle3tbZ5naJOrbEBOB3rkPiV8ULHxD8F9F+FKeFLMDS9cfUtL8QW6LvktWRw9oD1aPzH3jk4C49DXaab8TNS+FnjLRNW8NJvvrKdZYon3eVIg++smwg7D3Xvt/2a4H9orWrDXvFNjq1v8ADJ/DepSxO3iMWsy/YNSvHZXF1BECfK3Ju3evy9fmNZ5ZB83M46Xvf7z3fFLMFWzOnhYPSnFfe9z3T/gnHpK3HizTrhof+PdL66Rv7xCbP/Zq+s9e+A3wO8cWEL+O/hT4f1WZsSyy3mmo7tJ6kkZzXzT/AME04Y/7VVvvMmg3bbP7u6VK+mYfHei29ijap4ks4pUQLK+/y1yOuAa8fimpiIcjotp36XvsfnmAgpQd1cyV/ZD/AGWfs0ViPgn4Y8mKWR44pNFhkEZf7+Mj/wAd6DtTJ/2MP2WJ4Rb/APCifBbgQeUv/EiCPt/vZRh83+11rL8VftXfA3wfNLa6l8TrCa4h+V7Ww3XMm7+7iMGuC8Rf8FEPCtruh8H+A9X1FtvyT3rpax/kcyf+O183Q/t+r8MpW820exTyvFV17tJ29D0e8/Yt/ZfuLn7a/wCzv4V8zfGytaq0O0x/cwP4fu8+tRWv7HH7Mljq0erW/wAEbWOeK6kuFaCZ2XfIuHbCP3FeC6t+398btaby/DuieHNI3fdRoZr6Xn6sg/SuQ8SftAftBeJi39rfG/V4fN/5ddLtorRf/HAX/WvUp4PNmr1Ktv8At5v8j06PCmY11dwUfN2PrGz/AGNv2NtB05xrXwxs9PtRp0lnK95f3EKLBIcuMlwOTXP+Ofhj/wAE4PAGn3Sa9psEiTJbNPa6Tf317IwgbMeEgZjwa+aPD/wI+L3xS8mZvDOt6rDK+5LrxHqsvkt/tf6RIS34RmvT/Df7AfjttJf7V8QrLRHWJ2isNBSXDPtyimQlUwT1IWtHShTmva4iX3mlTh7A4Wm3WxEL9lq/wOd8f/tB/wDBPrw5/o3hH9mrxBqzx6mdRgl1O/uNPhW527N/76QMox6LXCt+0Z4R+I15Jovh/wCCekaVYzWT286wa9dXM1wS29FLyYDHO75ute1fB/8AYF+Metae998cPifFpluz7Us9L23lzt+U/wCtlXylH3hgo575riP2pPh34P8AgJ4+TSvB3gT7fZ3GnQ3VnJqyRSx3F5G/z/vYj5qoBt3ZA5b5cjOPVwFbLniVShOUn3bbX3vQ8evRdOPLh6fNLT3lfydzL+AeseC/iVJZ/DPWXuIdNfxDFqOh3W9le3kjZRPahwOrR+YDjnPXlq9N8VfsjfH34HfCvUfi14O0238aaH4evHk0aw0523x2bys5vriKc+dceQiqP3O8l2+5t+ceR+PfiZ4L034J+D7fS/DE9l4khurnUbWW1Rkt9DQzebIh+f55J3bed4JQRKPXOB8QP29f2k9c1mPWPhZ8R7jw5CllNYvYWaKzrFKipK3myoQzNt4cKHjXcAcM272qNFQruD1g9V5eR7HEdanm+SUMZUXLiafuTTVnKK+GXnba/Y+rvgD+0V428PwtoPxG8E6lE9xpaXq2EsMVtJpaB8SNLG58zYqOrlhvIHOGr6T0e3l1iGG8vtY8+3lVXiisH2QyKeQ2R87j8cV8e/sO2vhH4qaP4AvtU8cwN450O/uVvLXxHZy3kOoRbXEc0VzvzazYbrcZQncFPzV9TfD+STwXrOt/D3VEazi0q/R9Ot7pGUwwTp5nlemxZPMVSDjG3HC18DxRl8YYl1KMWnZNtLztt0drfifIYSo3G0j1Dw+sNrHHb2sCRxj7sUSKq/kK6uzb5VJ/hri9D1TT5FSRb+Bs/wBx9278q6aHUJvL/wBGsJWX+Bpf
3Q/Xn9K+G9lV5rtP5ncpRRq+d/DSM/y/fxWZ52qTf6y8ihX+5Em8/mf8Kia3tWbzLqRrj/ru+79OlP2UFuxqZfm17T45PJW53v8A3Itzn9Kim1K+kT/RdN2D+/dPj9Bk1XbUo7ePy43RB/dT5RVK+1iNfuv96pUqalpEty92xNN/aDNuvNY2q3/LK1TZ+pya+ZP2hY4dP+L+rw28zL9pltJWeWZmVWaFQWJPb5a+grjWsNjfXzt8ep/7T+KGpSen2VP++Ys/+zV9ZwdOf9pz005H080cWL/hfNHxR4VmOjft3ajDBKQDrWqpgdCJPD85P6tX6Z/ssaiLf4A6DGOo+1f+lU1fmPpIbVP279QaF8hddv3/AO+NAlzX6Rfs0zMvwR0RVPT7T/6Uy1+r8c6+D02/+g6h/wCo+KPl8F/yV0f+vE//AE5TPTbrVnb5S/y1nXGqfvvLZ+WqJpG24D00yc8Pur+Z09bH21rg1x9395ULSN/EaVpA+3aPlqOb5m2s9S5e8TyvoRM3QU3bItPfb9f9h6ryFmbbv+Woc7lxhrchuJpG/wBZVaZ13bt/y1LN96opP73/AHzWUqhag5OzIWbzW/u01o23fN0qRo/9v71H/LSuZVdSlGysRr93c1N27jtNSsu/atMZfm9qmVW+o1CzuQvbx4+5Uc0PzZWp2z97fiq8szD5Y6ak3oirW1K8qwj5dlUJ4Y/M/wBStXJRtG7+9VScM3/Aq3ilzabg2N09bS3M8yunmpAWRMevH86m8BQzafp7STTO0NxL5qRb/wDVr04+vUrVJoZLiN7NX2tdXCQI/wDsj53rpre3ihCxxx7UVNqp/sivYoPlpW7nHON2aCr5iq0Mm4f7FXrO4XhZvl/2KzrdpLNd0fzJ/GtWLm3jvLPcrS8/deL7y1a5VfmMmtStq80kl0VkdSy8VnTb2Yr5dWZtqyeWr5C/ceqsjNubivla8uau5eZ6UFyxTKlxt/i9apzL83+s61duF2/N/dqrJs3cpiqUmmRbmdz6oFu23caY8KsWWsZ9L1JVCrqV0v8A22am/Zda24XWLr/vuvfdeT0scXKbTbV/ipe3zdKwVh8Qbf3esXDY/gwv+FG3xR95dYl3f7cK1Lmr6xY+Q2pF7Kflpv3lrAz4w2sv9sN/35Wol1DxZBuM14r/APbstEasWr6/MfLY6ORf9iopId33kyh+8ntWA2ueLGX5Xt2/7dv/AK9N/wCEi8U+YI2W1/78t/jV3g9GKTPn7xxocnhvxdqXh+X7ltdOqP8Ad/dHkN+TVhtcLNN5ap8i/wDfTf8A1q6L46f21qvxAluPEE0SNbom77LuRJsL8jPn0/u965WRvKb5fvV6FBr2cW2XFtxTY7UrlZo42j/gapLGb+zmltfO372WWD/ZRuP03NVWRV+zt+8531peT/xLYdWhh3uiFGf+Laf4fzrTnadlsUk3G6HW87b/AE21nfFlnPw11Ecbf3PXr/rkq/C1vHJ8s0Xzfd+dfmrO+K+B8ONRI7+T/wCjkr7Hwzi14lZLp/zF4b/09A8TiK/+r2Mv/wA+qn/pDOh8CszeC9HU9tMt/wD0Wtaska3CvayfMsqsjfQ8GsrwFGzeDdHPzf8AIMg6f9c1rX8tlb7lfPcTp/6zY3/r9U/9LkdWAj/wm0f8MfyR8kfsV+GLrwx4l+KHwrvoNv8AZutRwBPRGNwiL/37RD/wKvf/ANnFZIfh62h3HD6fezRfP/d3VzFn4Ik8BfH34geNI4kisdf03StQVwMYnRbqOXP4JGfqTXb/AAtvI/DHjzV9Pt5tv9oolxa70Urtbrsz905619hmVs0ymVXvGMvmkr/qfnnCScONc9iv+flBf+W9M7mx0PVLxf8AQ7B5Pk+/swv5nAqSTQ7G3b/iba/awhfmdLfdMVx67OF/E066muLjc1xcs+7+/Xy1+39+0Q3h6GL4E+DdY8m/vbcT+IZbd9r2toekWR90v/Jf92vk8nwCzLGRw1JN33b2S7/11P0mtU9jTcmzgP26P2lIfHHiL/hVfw91KdfDtndf6VdPtV9QmTq2EOPJU9FzyVyeOvzdceKZJNv+mNEkTsy7f6fWq2qaislw7R7wiRNsT/Z7fpXN3V9Ju2r0Wv2zA4HD5dhVRoqyX4vueHOrKpLmZoXTNq2vLK78fL8lWtPmt47hoP4N537f4jVDTWuP7QRJEwZeU/3a34dNX7KJrP7+/avp/k1rNWjccddDd0/wfqXjBUElmqInzPs5Zh2X/Zrg/iN4G1jw9dTNs2pG21m/4DXqfwb+J2n+H753vkRo/K2XqfxKTwG/A1H8UH0nxJZHULG8V4ZYE3f7THcPzrD2rTszRwXLoeBabqzQ3Hl3D8b69U8A+KfIkt5p7n549vlbk3BvrXn8nhu3sdSSG6h+81aWnXCyQyQq7xJD93+9W/PdGdranuPiCTw34w8L7oZoobyWL9+ny7t3QYH8Irya+8O3Wk302n797Jnbs/i+WsvTPFWoW9xGq3m1gjLuf+Krj+JGvL5LrUH24/2/mbNZciT0Gm2VbFZbeaO4kj2Mu75f9mp9N8VX2i6snmSI8S1Z1ZrW9td0L5XbtVF/hri9aurqFlj9H2/hSSbeo3ses+FfFVnfak8kjq6ybty9+a6jTfglo3xC0+a+tdrOVLLt6qRXz/peqala30V5aP8AN/Fsrvvhz8avE3gnVPOY/wCiSvudP7rf3hSdO+sWHMupctfg/f6z5+n6e6CazlZXRvaoZvB+pWesRR3UPlSB9sq7NvzD/Guw+FPxGs9Q+Ik2rsFImuNtxB/z0DfxD3zX0Yv7ONn8Uo01vTUzt2sv8Lsn90+9ZznKCvIqCUnZGX+x61xZwtpsLtHLbXAeJOgZTX1ZeW9vcSC4CbW/v/3q8u+Bn7P+o+D7qK5mhxLYvs+dNyzQlsjP0/SvYNUs7eG+kjs/mj/uP1U/3a+S4pr/AOyJdDtwkbVDPihjVvmqo8aJr0CquQYycfg1aPkhW2mqNyHj8QwbuCIj/wCzV8TgJe/P/DL8j5LxGTeDy7/sNwn/AKeiP1bQtG1m3ax1jSoLuF02vFdQq42/jXzn+2t8C/hn4T+AXij4heGtElsb6wsN0EVk6hN7OqBtr/dHzc459K+lbi4Xb8/NeK/t4Wbaz+yv4sCQwP8AYoLe9b7Vu+VYZlkJTH8eF+XPGetepk2Kr0sZTiptJyV/vR9nXguRux8cW/gfxFeXWm+HfBmmpM9vFE106Ov+jwFl8yWQZ6KNucZPzdKk8RaXdeEfEVzpazbmtJf3u3+HPRq7bwD4X0fxB4gk15vE9rp82iWtnqlvEzsi3EJZopEz/G53KFXp+lc5+1h4x8H3WqaLrXw309Q0dxLa6peRWzCa8nP+rRx0fAVlXHJ6V+lV6cK7cbar/I7+HeIMVkmIUov3OqOu+F/xO0PVNNHw9+Ijyy6Dc3onleCFZZrGXvdWwP8AGV4aPo/fnOeZm8G3l94wubzw7Z3t/bQvK1usVsxk+zozbH2IDtATazdkqX4W/s/fFXx1p9t4it/B+o2KXOq
Jp1lb3kPkNNctF5p2CQqdix7WLYx82M7lK17FH+zb8dPBKvcLb6lpsUsT28t7BubzI2X51zF94HbtI7/LXj1sLWhZ9H/X3H7ll/HGQ41RjWklLu+/m+55r8KPir8CvhGNX8d/EOS6vfEtwpstL0qDTWkMNnwXcOfkVpD/ABE9FUV4h8aPiDY/Erxg9x4P8JXVhpFvdSvpdm/72aGKTaTESmQyB13Kv8AbA4rtvFjabpeuTWvirRHF1bvt23qYKoPb0qC38SWaqPscMSKfubE+X8K0w01hZuaTbatr5Hy2acJ4TN8dVxE8TG03c9K/YD+JVh4N8dadpPih/wCz7WSwuYJ7q6TYi5ZSMk+tcx41j1bVvFmsfbrme9tv7Wu2ge6v2li8szMU2DJGNm3FY9rf3V8y29v87u21YoodxYluFx/Ea7DS/gT8cNai/wBF+GPiEr/01tkh6+0jqVqMZiuezmkrdz1cjyLh/IVKbxCldW1aRy1noMkai2s/ssA5+WJM1b/4RHbcJ/aF9FDu27/N2j5T/FjNegeH/wBkv47SN5l14DtY0Lq+zUtbWI7l6f6sN+Xeuoh/Yx+IOvXUV94l1LwzYyq/mJss5rx4XO0fISVC/dXtXFDHZfBfvKqv5MxzXPsHh6zhhnBx77/grnlOjXXgnSYZJtSSw1ExyqjJcarLHErbmAVxGO+37p612Wk+LtN01hY6LZ6TbS/Z/P8As+m+D7h9qFd+N5cIx/HJ9K9U0H9jfRdPm+1at4/v5GP3orCwt4EZj/EfkLZ/2s12ei/s6/C/T1VZtHurwDB/0/UpXXcP4sZA/SuWrnGUPRXkfK4viJVFyuSa8ldfjY8P0/4nXDeWuq6lrieYkqvPa2EVosLrt2YR9z4JbnJ6L070zSfE3jbXJnt9P8N+I5o3Zgtxca9LuXCv82LYfMjHy+4IHvX1D4R8E+AtJk8vwt4G0ssj/PLFZosS/wDbQg7j9M12CaWtxGkd8/nbfuWsCeVCv4D734/lWEs9wlNe5RXzep87PM3KfNDTrokl+CufCWpeD/2kfEk1x/Yfhvxhe6e1w6wNLf3DIyBsDHmTAsv1FcF440jxJ8OtRew8Z3Nho175Syz2t1fxLNGD93zcv8jeik9K/TWxs/ts21vlhX5WZf8A0EV86+LPhl4Y1z9r22u7r9l3SdX/ALY0Z9R1vxvqMO6K3mjRooLWIEFGkASPd0JDf7NdeT4mGOrTjKCVlfTT0PbxXHuaU6ChShBLRbavzd+p8QeObbULy1s9Rvtbguba/VntUimV9yD+P5DjB7NXO29p5knl7Nq7f4K+x/jN+xLN8WtaaHwDZ6Xo+sxXCMjwab5cMdifk/0goRvClW2kYPzYxisPRv8AglX4qfVDpviT46WEBKbv+JXoLsdm372ZZD3+XpX0ksyy/BrknUSaV7bv5nwePxmOzOs6tV6v7ix/wTR1TTbOHxfcXevLp39n6dDPcTyzKsK2gd97Pv4XB7/7WK+w/hFqHjzxFo6a3rTyx6deXTy6dpdwjJMtpt2RMc8xOw+fZjA3YIzmvJP2Xf2D/Bv7P+tP4o1Dx/qPiG/ltWilgntoobNcsp3eWFy+Nq48wnHWvpPTbtfvfJn+L/4oV+fZ7m0K+KnPCSbva9/Lp96ReHpOEPfRKvgnxVNG134d1u1vo03N9in0pFuoV/vYQjeP9uP8QKzrqT4iW8O3SfEkVsy/xxWzFfyckVvnU2jZJoZtjI+5XT5WVv7wP8NXR4m0rXW2+KkaKY/L/alrCpkz/elj4Eo/2hh/rXz/ALSWI6qMu1tPve33/cdV0t1oclD40+KenqI7y4028+T772zI36VHN8UPG0K/vtEsHx/cmcVv6zpN1pMS3jeVc2MrssGpWr74JP8AZz/Cf9h8Gsa4aHqy/e/grGUasJWqRX3D5l0MO4+MHj5bx45fBlg8P8LJqThvxylV5vi9r8ijzvB6r/u3+7+lbE0du3yiFaqf2XazE+ZClaR9ha7ivkZSm72RmL8VLySTy7jwxOGb+5MrV5H8QvEC6x4ivNc8vyvtF1I6p/Eqomzn/vmvZdY0vR7HR7rUry2XZbQO79ui18t/GzxtD4J8J6xqzJ8unaQ/zu/3pX52/WvruFKEZ1KtWK6KK+bucuJlGyVzwr9k6VfFf7VviTxHNC8sMejeJLvdtztbyxbRMfwav0C/Z/8AGWi6T8JNIsLw3HmR/aNxSAsBm4kPX8a+NP8Agl98PJr3wZ8U/ijqce/7H4fXSrZv+mxikuJm/J4a+yvgBodnd/CPSbqZclvPz/4ESCvueMMTGv4U4qC+xmGHX/lriH+tjwsJG3FdN96E/wD05TO0k+InhVfma8uP/AZ/8Kib4jeEWb/kK7f9+Fx/So7nw/p+FwlV28P2e393X88xp02r2PsnIt/8J54RmbauvRD/AHty/wBKcPGHhuZl269a/wDf6sxvD9ru64xTf+Edted3eplGklsFN3NK48WeHVbdN4ktUVf71yoqNfEmg3DbLfxDZuf7iXKM386zH8P6f/y0iRv95Kh/4RfRVl85NNt1bZt3+SoNc0vZW1NrO+htyX1u3zLMrbv7jrUfmKzD5/vViyeE9JkkWZrOJin3W2crmo7fwvYxs/7nb83yfO33a5pezvuO8rm6zMz/AOzTj5mPm/SsT/hHLP8AhjfaP9tqP7FjjXbDcyqP+uzVnan3Gk0tTYdv4dmKRm2vnNYbaXPu+XUrpT/12ao2tdQjbC6xdf8Af7NT7KMnox3bWqNmSRmbl/lqBjt3DH3ayJF1ZW+XW7jGf9k1FNLrn8OsSt/wBa0jS6XJczUmZm5qnIzbtqv1/grPe48QYx/aO7/tiKo3moa8s0Nqt/Ej3EuyJ/s275q3hSeyFfS50Hhv/Tr77YqfJEj7P95m+99cLXS28dcx4H1DT2WbS7d/3ts/71H/AIk6I3/jtdXbx/L1r05WTSRzdSVF2VU8Qf2tHapb6Pf+Qbl2W4l2ZKoFY7h2U5+Wr6rtYfPUGuSLDa+Xv+Vvmqak1TpuVug4K9Q5ae01xvm/4SGX/vharXEOtL8sety8/wB6tiRv7lU5G3Bq8BS5n8J2T2Ma6h8SN93Xtr/3f4agkj1v5Fm1tv8AtlWncfe561Cy7epxXTFq2xlZrVH1tJCrKWb5VpN0Mm2P+Kr0kb+WI6iWNmWvoHBLY4eYpKu1vm+WpHj7mp2j/wBjNMeFSd1LkT1HHTqV5IY2P8NRfZVZW4q1j5sNTfLXnb81TypK6Q+Z9Sm1jGvy7M1XuLCNJAM7q0W+T8OlQR/vN1wersPl/u01TTYr6niX7TXhdbe+sPElvDxcK0Fx/vLyP0rydod53SDbj7lfSvxu8Ot4g8B3tvHDvkttt1E/unVfxG6vnWaNW+6/H+3Tot6wfRm0HpYzpd0hfc/8O6m6lNqi6DbtDMi2r3G2X+8zH7jfnVfWppLqB0tvuD7zJ/F/9ati1hN94Tks1T5mgZU3/wB8dP1rrT9nZ2Kvf4Tk7n4d+ELjxBf61NpsTyXkqXGx+VVWiUbcfwjKt8
o71V8ZeENC0jwbc3dhYpG6BNu1iQuZFHHPvVnTdYhuriG7a8bzJbDb5Wz+EPn9GbbS+Pb0SeC7uI9T5f8A6MWvuvDmpX/4iTkqb/5i8N/6egeLxFCH+r2La/59VP8A0hmt4S8J6Zd+HdPuikqu1lEzOlxIvJQHsa3IfB+m/eWa4B/2b+X/ABrL8G36x+HNOXOMWMI/8cFb9rfM3zK/3q+b4mqVP9ZMbr/y+qf+ls7MvhH+z6P+CP5I4v492Ufhv4L+K9Xt72ZZF8L6g3nXFyXCbLd2B5Py4JJNc78LfH3/AAsT4T6D8TNLvGS8tIvKv/K4ePPB/wB0q9emeKdO0/xLZ/8ACP6vH5lpeRvb3SMcho3AVh+IJr5h/ZlupvgT8bPEP7MvjWZ2s7iU/wBnXEv/AC2Xbww/3o9r/wC+slfXcNzhiMqVCXx2cvldp/dofnORe5xxnrW3tKH3/VqZ9D/Fj4iWvwh+EN/8WL7xnqj21rZ77W1+0ozXFw3EcKb16s/y1+bWveLte8Ua9qHibxJefatT1Wcz39075LO39PlwPZVr3T9t74kahqfiSH4KreOth4auDPOiv8s07p8jH2WNunq1fPkklvbp5i9ZG3f7q19Fw1kscrpTqS+Kb08o9F69T7LFV/bzUVsircTKqy7uP71Y0twsKtPcJu3fNs/2qvTTrNI0zSfeYtt9qwtQummZj5nG+voJSMUtS5Z6ldXEn9oXE2P4dv8AdX+6K7rT5rrUvDf2VditEhlVu/tXmq3EZk/ebwgroNI8VTWtu8cD7I9n71mrnqc0o2NYpJ6mfp/iC60vWLxVf77bXT+982a6Tw34gjhjRdSffbQvv8r+97fjXC3kjTak94qfKX3f71b/AIZaSa6it7h4vv7tjcr+NJxVrsNehpavatfSf2lN8rzZaKJf4U/vf4Uy3tYYbr7Kr7A8S+bv+9z/ABD3r0PS/g1cNpq61JfeZPLKPkb73PTA/hGK4H4kabceHfEk0MqeSq/cSs4VYydkNxaSb6nOalujuGWP32/7tRwxzTRbm+VV2/Wr6bbyYLs+8nyf7NT6ros0K+Su7cibvk/WtiOUoWevSWdwkMj9P5VfXQYdau0eHY+9t2xP4q564tl8xdr4avaP2VfhXH8RPEEFrdTffuNu1fvN8ueKeijdi1ZL+zv8Abjx344h0e+s9sbo3lb0+VmH8Ne76h/wT18vS7z+0Ewiystu8XXB5Df419afAX9l/wAD+EbEata2v72TY+/Z37sP7vNejeLNBs7iGKxWFGRv9j+IVxYitJR906KVJSlqfnt4F/4J4w6PdrfR63KjpLu+595a+v8A4CeDZvDukw6LqCb5rf5ftH/PRf8AGu3t/Ddhptv5rQqqr/eSsPUvin4F8GzvcalqUECxON392uaDrzfvM3kqUdkd7qlnpvh/Sn1T7vyfeRM/pXl+tePPDcdw7XGqyo33nZ7OX/Ct+3+LnhP4naX9o8G68tykT/vfK+9/ukfwkVnTafHI25uN1fG8TVoRrqjJbeZvhI3945xvid4E8xoZPGFqjD/nqjg/qKb/AMJLoF3epq1vrls9rGpSS5L4RSc8Enp94fmK6KPQ4Q3zVnXmkwHxDBYlAUkhJIZcg/e/wr53BexU52T+GXXy9D4vxFT+p5dr/wAxuE/9PRKcnirwh5nlt4n03d97/j8VapeJrfwf4w8P3/hvUPENh9m1K1ktbjbcxPtSRdh4PH8VdDJ4Ns2Xa1nFj+55Kmq1x8N9BuPlm0eydW+8r2yf4VnSrUU7xuffSi1ufAXg668T/B/xdJpthb3VzrHgW6udOvLVXiebUNMl+QSjOUy0flyDqAVwa9i/ZV8VeB5JjrGraJapcTOyxebsd8B8BwP+emNuW7/Niu8/aO/ZP1DW7eL4m/BHTLKz8W6RE7fY4rZUTXIQvFpIcgLzyrnofZq+cvBIew8fahaWng+KHVdMnhHiLwbe3K210s43O4SUZK5DZBTKP64r9PwOOw2c4N2+K1pK9n/w3+djyKlOdCdnsfR37Q/wf8c+Nfhv4t8TfCbxhdR69LZJLo0Vvc/LMgdPtMUB2nyLh4VYIRjnup+avMP2Dof24vhgNWupLbVk0fd+60vxVePNDeS7mzsjldmTj+IEfN2as+z+MmuLqEFj4k0dHhj/ANempX7Wjxy7uIgRgOdm3GT89dF4D1Txt8Kde8R6prnxMsNS0z7ekVhZN4qiMUfmJ5oZCSf3YDbCwGAeMd69HD/uaUaVV9LL0VieZ8/NFnr/AIs8c/B34lQW9j+0P+z9/ZOsQyq0V+kKtGzD+JJcf+OPweleceJv2UfCPiHUdQvfhjqWl3I1DRNQsotLv4VgMk06fu2EqcRFdu4OI8jsfvVmeFf2zPFuvXDWeteA/tNldbvstu7qrqB/FJv4VfvfjtrvvAaeEfi1p11deH/DeqaS9tsW6WWwlhibeG+aIuAs8Z/vxkiqqwo01dtRv3N4Y7FKPKpO3Y+OfBnwD+PXwD8bx2fxF8B6jp8ibHt9RZPMtpJoWV0cSpkYJ9cH2r9Nr6ZvGPgPwr8ULezSNdc0lPtSL0W4VeVrzO6174peFbP7D9stdX01VVPs97bK/HQKRUdr8bLrTfCsvhM6Pf6Ja+b5qRaM6PDHIP4xFKCE+iEZrgzPAPMcG4Rtfo0OnieWWsn8z0S10J5mVmTatXovDMP/AC0+9Xiel/H7UlaTyfjtasxfbFFqmgxW7/rhG+oNeg6bpfxY8TaHZ6pZ/E7TbyOaDd+/8PfuZst97CSDcO3UivzvF5JjMH71VpLyTf6HbDEQnokdILG3mkaHTLb7UV++6fLGv1f/AAzRb+BWumzqk3nf9MFTZEv4dW/Gsu3t/j3atlvFvheWNPlSL/hHpYgv5TVI2qfHCFWz/wAIvI3+zDcJ/U15jioO0Jr8TZNNao6az8Oxxqse/wCVP4E+7UkkazSHT7H5Ik+We4Tr/uD39W7fWuO/4Sr40yLNDcaJ4fdVX5Wt7mZDu3fdyQdvHpSf8J18VrGMQt4A0PYn3Fi1iVdv5x1tHDzhq5K/qZNxO4Xy4VWFERFT5dq+1eI/GX4s694W1vx7d+HkxB4S8K6drwaeFvLXbM5uMY5bMMDfjurc8XfHLxt4Q8M3/iLVvhcrpY2c07pYawjysERnOwOF5wvc15NpPjP9oRvg14t/aI1T4b6WdY1KCwfRvDNm7OdQ8OwI3mr/ABN5zJPMeB124Hr9Jw1l+Jdeda6atbR9W0/yuc2JqxskfT8PiKz1rTbfVtLuUktry3S4tXX+KJ13hh+DVjpcyXGtTXipuZIlRf8AaCtyv5tXhX7GHx2vPFH7N/h1X8O6peDS4n06K/lv4pPtSQuyI+/gtgbVPA+7Xqdp468tVz4bvw8SP/Gn7zPpzXjVMHPBYipGWrTa3uXGopxjY7i1vN0Ia3kXD/NV63vPLVNr8j5l/GuE0/4hWPztNo+pRL97Y9srfMfTB/4FVr/hZ3hmL/WfbUC/3rB/6V57oTjK6Lcl0O5bUJWzn5cfwUxrz+Hf8tcbH8TvC
bctqsqY/vWco/pVpfHnhWSzXUF15Ui37HZ0ddrfiPyqnhnJcyWpHNra512k+JNU0WZ5tNvNnmrtnidFeKZP7roeHH159NtTt/Yuvlv7NeLTLrb/AMetxM32eY/9M5D9z/cfj0NcVH8QPCDN8viez+b+/Nt/nVmPxh4Vl+WPxJYNu/6fF71UfaRSUotrs/8AMblc27yG6sbg2V9bPHKv3opU2moVmZXptv4u0toUs7jWLK5t0TbFE94rNGv+wc5X+XtUc99pKL5y6xBt2NuR5l3L+RqZUFL4F/mK9jA+LOt/Z9Di0FXRWvn3S7/+eKcvXwX+3B4+t7fQ4fC63mw6vdfarpPvGO1j5/X5R/wKvo74s/GbQfEmtajp2ieIbeW6X5Gii+f7HZp13norua+Tvhn4Jvv2zv2wLDw5IjzaFbzh9RfZlV022fMmf+uj7Yh6hm/u1+kZNhYZPlXPW05VzS+7/LT1OCrL2tTT0PtH9kb4Vz/CL9hCPTNVtzBqmtaFfazqismGWW5jZwh/3U2pXpX7Pkqr8HtHBbp9o4/7eJa1viMqwfDnW4reMJGmh3KogThR5TYUfQVgfAQsfhHpIH/Tx/6USVxVK88b4M4ytPeWZUX9+GxH/DHFyunxZSiumHn/AOnIHbySbttQSM33d9MaZl+XzPmWonm/vSfpX5FddD6S0paj/OZn+WllmZehpnmL96T5qguFhYY+f/gNctWV2bw0Q5pmZmU0m5m3YqE+Z90PuVv4Xpfm/irkauXzdyTd1Wkldt3yttzTPMb7v3aY0zM25H/4BWMo2LTsSrIwprNGcrvxUdNkkZv491Ry9h8wjSfLtaoJt3zLv6U5pGX5lqKSZW61pGFiiFpFVhuC1XkbdJj/APXT7ibLbk+aoJZWDVcdFclWb1FkkXf83y1XXyVvlkkfAhiZ/wD7KnyTfw1japeLqC3Wk2+5Hnuo7Pd03KV8yTB9AOM+tb0IupUsEkoxuzU8B2txJp41pvkmvJWm/wBpVPCL+W2u80u+juofuIHX7y/3a5jQV+zx/Y2/5ZKPKb+8nQflWzCskci3EP31/gr0nHnlY5L2dzZTc230rK1S4MlwyS8qibUrRW6iW1+3fNhPvon3qwriZVZpE/iz96uLHScKXKtysPZzbZFcSqrVWkkCt81JdXX7zd/dqCWZWXrXmRhJI6JSiwlmVn+VPvVBIylQqp829v46c7u21/4azPFniC18K+GdQ8UXzqsen2clxLv6fIua1p051Kigt3ovUcdZJH143iy4+62lKzf9fP8A9amt4ubb+80p/wDgMy1ZbR/LydnSoZNHt5F/1Kmvo3CoebdWKjeNm2/8ixeM3+zNE39RUa+PrPzlt7nw9qkDfw77ZCG/EOatNosK/dDrTJNLwNqvQlUi9hproMbxlpjf662vE/7dmpq+NtB2rIftCBv79m60n9nyq3zfw01tLhk+WTv9+k3UXQbepHN488J+XuGq/Lv27/Jf735VnX/xQ8AWa+ReeLbWMS/33dGYj0OK0JdFt1XbGi7W/wBiqlx4fsZG/fQqcf7G6t6dr+8tDOTuS3HjrwVeWb+Z4hsthgbduf1WvlvXotupXOmWsbeSkrL5v/PRd3GPY19ODw/p+35rKJvvfwfw14h8cNLi0XxncRr0kt4n2L/EpXH/ALLRyw504rVl0pPmaPPLpPLhdfkH9/bVzwrqH/EraMvuxn71UNQkZVbHdK5VviJpvhXSbv7VNEbxm2Wdq0yobiQ8BRn9W7Ct6VCpiJ8sVds64e7HU5jWvin4X8J+PLzwzqWt2tt5crzxJLMqbYnXPf8AhzV6b4geGvGHhWWfw/rtpexTbfLeC4DE4YHtXivxs+Feh+IteGpagi694g1Jl/tK/SYy2Ok524ijRPvhR3PXrXo/hb9lnwV8I/DH9vxTpeatEgk+1xJsQb2CHaPTDGv1zw8ybDR46yaq5++sVh9ttKsD53iGrNZBi01o6VT/ANIZ6b4c12GHRrOF5lylsmV3dAFHNb+n65vZvLdK810j9mLwBqkY8ZX0GqJeXsays0GtXCoSwDEhQ429fujitjQf2d/h/pbBl02/kKNvXzdVuG2/TL185xFkVOXEGMlzb1anT++/M6MBXf8AZ9L/AAx/JHoMF4ZmjkIwFbIrxT9tj4Z3mrabZ/HDwijQ6x4beL7VLEnMlurbw/q3lHt/cZhXr2i+FdE0Peum2cqC7cef51zJIJCO43HjqeleQ/tuXHhnwj8IXh09H/tK/wBRS1t386X5Qf3kuOf7itn8qWU5dVw+Ow7py+G69U27r7mfCZBNS4sz+6/5eUP/AFGpnyF4y8SahrGp3/iDWLnzrzUbh57qX+8561yM16021nfCBP8A9VbHiSRmhNv6ru2f3Qa5W6m8wrDG+BX6bO3TY+pjZassTXHlwmNRtzWLfeWq+TH91atXFw0k25X+Vf8Ax6s+6kaRzI3euVvU3SI/O4X51/ip0Mvnfu5JsJ96q3z71b/Zpq+Y37w1It3Y0GjmuLhYbMfL/sVu+dH4fW2azdXuOHdP4Vqrosa28Pmr91F/8erS061hvpDdydf+WSd2apbTKS7HrXwy8aXktvb3F5cqsob5Vl/iP96s340+E5vGmjw+MNLh3S+aYJ4l9ujVQ8I291Yx21jDPuld90rN92PP9a9H+E82k3moSeE76ddl27eV3/eLyK45O0/d3OhLnjqfM8M11peofY735JIW+dX/AIcV6x4g0maPw7ZeImtot09mrts/vN0/Sm/tWfB+bw3qUPi3TUxDMmyf/eHRvyrT+GV9b+LPh3La3z73tkVUif32gNXRCqp0+YwlG0rHm/irQdPms11CzhWNlTcyfX/7KvRf2MfGkej/ABQ0yxu5vJzeKvm/wrhfvVyOu2Pk6pd2N4m1PmRUX3/+ypvwnvI/Bfj7Tbi5gcmO8D/NwOG+8f8A4mtIyurCejSP2a8G69Zrocclu6Oht1b/AHhtq+rNdTec3KMqsleT/AHx9pfjrw3BfaLeLcQlQny+y16R4g1RvD+jvdKn3E/WvOqRcqln0OmD5UJ4y1SHS9Jknlm2JtO5v7tfn3+0x4kuta8XSaTpmpS3K391siX7u4+xHv0r3/VPj9qnxS8UXPgPS45bR7Z3WVHf5W2rkrn6V5x8O/2ZfiFqXjweLvFHhj7TbWEu63iS5RTI27IyDxwOtXVvTp+6m35GfMpS1PRP2QvhjqXgPwKLzVk23V3tf5/lfZ/dPuPu17F5Dbtrf8ArntHuPGEi/vPBl1AF3fummiLKw7EA/LntW5o0firUrX7RN4SvYvKZl2u8W7j+LAfpX5tmWX5tjcQ6k6bPUo1KMY2uW1jkZFbZn+Ksq+jI8b2a7etsTj8HrQs9U1K4mit4/DOrK8m75XsGHT36LWXe6qD4xt7v+zr0GC2ZHhNjIJDhXztTGWHuOOvoa5cHgMbCU+am1eMls92j4TxFnB4PLmn/AMxuE/8AT0ToVhZuWSpGtGZ/9msPSfiV4V1a3vL6G6uFh05lW9lls5UW3yMjflMrVvS/iR4R
1KaK30/XreV5dyxJv2lmXquDzxurkWXY2CtyP7j791IN7mg1oqrlk3Vwnxf/AGafg78co4Lvx14WV9Sskk/s7WbCZre8s3dcFkkjwW/h4PFdTdfEjwn9oaxk1u1SRG2vE8yq27rtxn71OXxt4XmuDYw6xA9wib2t0mUuq/3sdcf7VbUKWYYaanBNS72aZM1TkrM+XPEX7D/x90C2uI/hr8d9L1y2RY1t7Dxvov7xmDfPmeDAX2zGTXH61+yJ+2XJsj0jwz8NLd4tRDreWWoujsg53EPbnaD90/xf+hV9mTeKNDkjElvqsDq/zK6zJhh9c1FJrVjGvmG5Vd/3P9r6f3q+ip8QZzTjaST9Vr+FjjeGot2PmPwv8Kv2m/Cuop4m1v8AZo8Ma7frdT3kX/FwCix/JtW3kj+z7Jc/M/Oef7u1a76P4jftS+H7WHRdJ/ZF0v7LaWtnFaxWHjiLyoc8SqgMY+RB0r03VfE1nb/u2ufm/Ws+38SQ3E+5X6VzVsficY+atST/APAv87FqnGn8L/I8Q+P/AI9/bJ1jw6L7wH+zZqOh3Wj6pLdS6jb+IbK6juLCJMyJJBkF94+7jkFa534S/tOXHxq0P+xPGmg2tj9vs/N0TxHp0LCO8wv7xHjPKum5c85I6Z2mvqvS9Ujk2yfeH518Y/HX4byfspfHSLWtDL2vgvxtrkc2nXUVmbldLv5JP9Jtdh4VHRmdenHmdNq19Fw7mvtJPC1IqP8ALvr3Wr3tqjnxNKy5kz06b9lHw74T8Ax/FL4yW11r2m3U80WjRabcqllcXES58qQZMrAdSdoT5sDJr2dbDwxpPwx0bSPhX4YutD06LTmaXS4Lx5k+1syuXgjPKRv3iyACuR95q8s/bA8J/GSb4VyfC/4D2mqXvnXljcS39lbWsE19bFsvFbu7h2KjcFwRn16Vpfsy/C/9pvwd8Pb3wr8fdL8S6Dozai7eEE8X6ajapNa7V885csywmT5lRzlNzY2qwr6PH16EMK+dLT/M56MZc3unXeL/AI+/FHQNevNL8PfsneLvEFrbxbotW06+sY4Lh/unYJJRIoB42lc5Tpjmqkn7QHxlvYLtIP2QvGMMsN/Bb+X/AGrpzTMjrudwROY1wP8Aa+vNegW+uLMzaTpsyb0/1sqp/qx7/wDTQ/8A16n3WtnEtvBwg/yW/GvzHEzwdGs7UIvXR+9r57nqRdRxtc8oj+Pnx3kZbW1/Yh8WQxjzl/e+IdMX7n3P+W/8XfPfoWqlJ8cP2lrqznuG/Yr1mN4rJJbeKXxVY5kkLYKff6gfNXrzXpZsR9/+A1T1GZptmnxy7XuN3zf3Yx99v/ZRWCxFCUtaEfvl/wDJAo1HvI8d8X+Kf2hPGHh+fTof2VdL1G21KJrS6sNY8WxJD5EisHziNt4+7u7dhurlrv8AZ+/a18Ux+Bbq1/aFfwbbeH/DlhZ6no1jbLOVvIdwnli6xsHDcb8juRmvohgsf7uFNqIm1KYVb72/A/uV24fPa2Di40IKKfr+pEsPGfxM57wr8NfCvgeGbT/DVhFCL29e8uvKhVFkmPLy7E4UseWwBk1tyabawqJNn3WH/fJbFSW67rqSZf4VCf1NS3Ss9nJz/B/LmvLxGIqV66lN3b3NYxUYWQz+wbWRtpTrRD4btZmaBvvL8rVoQ7fvfw/ep8jeTi8jfdt++v8AeT/63WuNNNuDG7Xuij/wiNju3NHTl8K2sO7b8ylNro/3WHvWvHtkVWHzA/cqSGPzahVZ056MTSuc1ceBdPkZpI02p/cf5mWmwfD+xZcNsb/gG6urktdy/u+H/geo7eZt3ltsSYffi/8AZh7VVSq5x5oCdjmn+Hent/y5xMq/9MVrifi14g+GfwltYpPGUO5JG+S1tbBppZnHRAiA/jXovjzx5pvgPw1d+INWSd4rSLe62ts80nthEBLH0/wr5h+Mn7Tmh6h4Jf4ia5ok+mWVmzpolhdf8fN87fxlDg8npn69K+q4ZyatWksVWvy9E+r/AMluc2KqxUeRbnmX7aHxit/D+jv4Z0vyrPVNbiL3CRIqfY7Qdc474/Vv9mvX/wBgb9k26+G/wjTx54p0yW217xPElx5G945LOy6wQnGCpI+dh2LYrxT9iT9nrxF+1T8ZJvjh8V7VpfDmj3iyy+afkvbxG3xWiZ+9HGfnf32j++K/RR1bcMP/AILVcY5z7OP1Ci9d5tfgvyb+QYSjd+0ex534x8FNp/g/VbiLUdRUR2E7FDqUzK2I2OCCcEexrP8AhHoWp3HgCwvLPX7+3WTzcpDdFVGJXHA5AruPiE0g8C62nmblGkXP/opqwvgir/8ACq9MZV7T/wDo+SvSo1qv/EEMS27v+0aH/qNiDy5Ri+L6en/Lif8A6cpit4f8XJJ5kHj/AFdP9l3Rv5ipF0zxosnPjO8O3+FoYm/9kroWabdtZP4KY037vzG+8tflEsVUa/4CPp/ZxSMOb/hNNu1fFUu7+PfbRN/SmNceOkkC/wBvRP8A7L2C/wCNadxebV85vuD79JHdxsqzRncr/d+SsXXk90vuRcYXMq41D4hrMklrqVgqL96J7D734g09vEHj5R81tpLN/D8ko/rWrtVvotI3l/eFYuqmtl9w/ZvoZa+KPGv3JNE02T/duZU/TBpy+LPE0b7ZPDFq42/wX7bv/QKus0MbbWfcW/uUuxWT5aydSN/hRXLJbMzx4y1pf9Z4S3f9cr9f6pSSeOLxW+bwldf8AuYv/rVeaNR8x61DND8ytUxlRb+FfK/+Y1Cfcot43uGk2yeD9SRf9+I/+z1FJ44h2/vPDmqL/wBsUY/+h1dkj9U+b+77VH5KjltlWvYpfD+YrSStcz5PHWnx/wCu0rUY/wDtwY7fyJqP/hONBZtrPdJ/11sJR/Sr0scbL1qtcRRtu/u04qi1awve72MnWvip4F0O1N9r3iGK0hT5XlngdAv14rT8Mrb3iQ6tjeszPLbu277jtndg8rkba57xdptndWP2S6RCJ5UTZ97cS64qfwrqFxY+KL64aZ3s5XVXif8A5ZsOAw/DrXoUcPTdHmpp38zGpUkpJSPSLeNmj2x7Vf8AgYfw/wD1q1bNg3yyfK38a+9Zmn7dvy8/7daCxtD/AKaj87P4fSpgtdSZWe2xX8Syahb6W8Om6h9mmm3L5qIp2gfxYPDH/Zrzy5uvj5Y7mjv/AA1qUe07Fns5YHb6lHI/Su/8QXS3DRwtCrQ7BKkv94msmWbcv+7WeIrck1HlT9UOMbxvexxmn+NPi59o2a54D0hU/je11KU/oUrYTxBrzS+ZN4bixt/hvP8Ax7pWjJHGzCRfvVG0arwPvVz1a1KWigvlcuMJ33KU3ijU1+94bf8A4Bcr/hXiP7fHxak8Ofs26xpMeny20+ttHYRM7r9x2/eY/wCAV7R4p1qy8N+Hb3xFfbFhsbWSeV/ZVzX5sftNfGzxZ8a/FEjeIPE7S2sW37LpqJtjtf8AZQevqx5r6jg3JHmWNVdpKFNpt9W90l89ScRV+rQu3q9j+gaSb5Vb1psiwyR7t6iq+i/Cz4iSax/
ak3xOuJNN/htbzRIfOk/28jbsX0Ugn1rVm+F0JkEl94qvJH+9/wAeyLu/AZ/SvtX4dZ3GVnFP5/5ngPOMNJb/AIGPJNHu2sEqFpN7bl+7WZ4q8K6lqHiC88L+DbzVEvLfYyXF1bIbOEHk78AFvYZya0/+EH1aFdq687uNv37bA4+hrnqcB51yc0IJryaNFmeH2uJsb0WmvGuzbUF9oPjK1jMtnNazbV/vsu78waqaPD8RtUW43+Eoolhl2bp9SRTJ8ud6DHzD+E5rzZ8H57T0dJmscfh39ovMi7fVVqGSJt237tRLZ+O4f9f4Vdf73lXkT9fxHFUdQ1TxTp6+ZdeCdX2btuy3hSZv947H+Uf7VYvhvOKb96jL5Ir61QlrzL5F+SEbeX4rx79qCxWOTR9cWFsHzbWV19/nTP8A49XoeveNJvD92ljdeFdbkleIS+bFo80sS7v4SYweRt5X/wCKrzX4/eLofFnhD/hE08K+KLm/e9R7W3t/DF1Evmp85aSTGEQD1PPT5ulZ/wBg5i5JOk9+wliYRaaZ4B8RPFlj4f0+S6nuUTb8sWz5mZjwFA/iJ6Ba8Ck+CPxQ8YeOItU1y5skjvJWX7FLc7mtYepU9mJHXH0r6Z0P9nHx54m1qHUNY8JSxq0pbRrC6+XpyZnB5XH6dBy1ew6H+yrJDbIusQxO+9Xb5Ny5H+eK+nynh7G4ePPGm7vv0OiWPpwe54n4B+Gej+B9Ft9HsdEtRFGnyfufverH+9Vv4mQRL8Pb4x2QQRpEEKpjYPOT5a99tPgHpOkM8zI26Vvm+8dxrlv2k/h1Z+H/AIE65qyhAy/ZtgX3uoh/Wv0XgDJcbT43yupNbYig/uqxPnuIMZTqZLiknvTn/wCks5HwJoEmpeEtIkA6abB8vr+7Wuntfh/Mzbum5a6/4T+CLab4WeGr4p80vh+0f84ENa+pabJptuklvCzyzMq28X8TMf6evtWGdZHfO8VJ63qT/wDSmaYHFL6jSX92P5I8h8TeHGtNUsdLEpD3EmzI7ZKgH9a+Sf8Ago0w0nxtoHw9sRti0rRJbyWJ3+688uxGP97hJK+4/H9g1n4r8MB0DPJqWXZU2s58yIfl6V+dP7f3i5vFH7VHja4kh2rYXtvpdv8A9cYLdSP/AB+WQ15mCwNOhjZJ/Z/VL/M+K4fqOfF2eJdalD/1Hpnzz4mu5JLh7pvmZ6wbpZo2Kr/u/wDAq6LxFassMbR/OD835VlWOn+cpmkfCh9z1685H3EY2MuSLyF3SPVC6Y8M38XzVs6pbzwq81wnBf5E+tZVxDJqEwWFK52tTXoVtsjL8qfeqe1tZGkWP+FE+ephD9jhDR96kjZbdWm38n5ql6DSsSWt00P7tuFZ63tFuoYoWbf8wX/vquYku13FmTll7UsOqTeWke/ndUtFJ2PVNDvsW6PHMvmtlfeu6+D+g/a/HViyuzxQ/vXf+7j+I/jXk3ge3vtS1aJo3zGv39n8Ve5/C9v7D1KBLq5W2T7Un2qXf2XkqM9z+lc86dpI0g9D0X9rj4f2d98K9FuLVE3Xyn/W8cj1/Cvk3S9W1T4c3j6fG+0NL8/91lr6Q+JHxak+Mul3Gl2c0UNhp2pL9lXr+7rxv4leAY7HVoftk3mJfaRFeJL/AM82ZmG38NtZxnGEmuhcocyv2MTXvEWm+Jli1C1gWNi+2X/fHWr1mum3kryR2e+WFQzy/wAW0cVwGqeHde0NReK7y2bfN5sX3ducc11HgO+1C4mmnhm8tDtX7n8VbvlSujHfc95/4J//ALRy/BHx9qHw98bXMrWOtbJYLp3ylrIvyFeT8qY29O+4192+NviNofiLwDNb27+c13E0S7Om/b2NflWF/svWIdct4XfZOW/Pgqa/SP4LxWer+C9JuJtkkdxaxyq6fxDb96rmozjzFRT2Z538A/hH428M+MLjxBq1t9p+1vuaXzt21Q3cH73+9X0HawQ2qiSZPK+Tc+znrWhb6HZ2O6L7Mny/8Boj0+P/AJYu+1v+WTf/AFqXvSSZNktxIbeEsyxv/wACqS3mupL12tT8kX7r/GrrWa29r9qdP+A/7R6LVix0Xy7dIYuTt+9/td6tRm9kVdLcrx3l1HgN82P79YWoXzt8UtNubhR8tmww3TpL/jXWyaFM22Nvm/z3rk9d0iaP4o6bYP8AeaxJA/CX/CufFRajF/3o/mfBcfW+qZf/ANhmF/8ATsTsrO7s2kcw/I8vyy7ON31qf+wdInLtNYQTCT5m3wq27K45/D5awpNJvIdoV6dDcataSeXCjldn32/9BrsSi90fd+8bFx4Z0GdoY5NKtVW3lV4tkKJ844HTvUEfgfw2t1NqDeHrJrm7t1inuJbNfMmjHRHPVgP7tVI9eSZT5yO6vL936VpQ+IoJIx8/+7/s1qqFJu9iOd9DJvfhj4NvrFrG68H6W0JTbs/s2IbR7YHy1ga98C/AesMj6h4Ssn8pQ0SumU3BcDjpwPlrtJtahVUhW53tt+d2+9zwOlPmvJJG8yPaRsqJYWk9OUalO9zxTx1+yj4f8UabbafpN4+kPbXsc6y2UK72RWy8RJ7N90/pUP8AwzLpcmly6Kt5dWDvhvt9neSrdMwbj97nP8PK9DXtTSSSSf7P+xTUmYsm1Nv97emf+A1h9Rop6I0VWXc8u8P/ALNOk6LJFt8f+KJmS3eN1l1VmRs/x49R+lY/xN/Yv8J/GTwU/gTxt4/8QvYTlWZFuVY+av3JRkcEHnP5V7c0bfeVKbJGFbdsTDfdojl2FhNT5Ne4nWk9L3PzN+KPi39of9nv9oCy8P8AxPuZru70fSINO0m6iZkt/EmlW+4JcR7+Euxv+cZ++qqeGVq/SD9ln9r74a/tYfCaz+CnxW8Tyv51vt8PeKHRftFjcAbAh8zlZEPBV+D0IriPjp8BPh/+1B8NX8I+N7N1VJXn0jVrdFFzptyGYCaIn9QeHHB4r85/Enh/9oL9gv4tT6T4wtpZrO/l3RXkW77HrSDpLE/JSdR1Q/OOnzqocXUw8Zt8+qf9WFGryrRH6leNv2cfG3wk1aDw/rF551s2+W31RLNfK1JD0If+B/4ivX8Oa4PUPA/xOW8t7H/hKtLD7JHll+wMqso6Ljfnrt3e3Sm/sV/8FQvD/wAQPB8fw7+LEya94dbYkv21M3ViwXhZO/H3g4P0NfRHib4P2viKzXxt8Fdei8Q6LInmJBE6teQg9VH98D2+f1FT/q7luKXNGmm106/JEPFVKb1dkfOlj4Z+KVxpqXDX+h72272dJo0UbfnbBOWHo1Qf8I/8WEjm1qx0fSLx5Io2t4Jb94D5ZZu5Q7T/ABYxzur0nWNP+3XCaGr/AH33X/yYaOJf4COqlj8v03U68sY5tSmha23g2a/Ls4b5/u1xy4ZyxXXJYpYud9zyeaT43W/3vg/FM6/Lst/EkR/IlBVm3uvHX7ybWvhjqWnRQxM7Ty3lu8XHX7jk/pXpklrMzSBUbhm3b027v/iqztas7xrVbHZ8txlXX/YC5fP/AAFa458J5W1s18zVYytc8917XvEHhWzN5d
fDrxHdoj/O+m2aSeZu9AHz+lQ2Pj7UNSszM/wr8YRgrt8p9H+bn2zXrVmzNDHMv3GiDJ/ulamjaOHa3p81ZS4SyycrtsPrlWKtY8w8M+Jpta8uxtfDGtq/lN80+jyoG27QcEjDY9jWjea9o9nCtzriSwoj7f8ASLCVdrj/AIDXYaHZ2m7zmuZXkgurhNu/5cFuFx7BuK6Sy8uRtsiZqXwbl09VN3E8fUjrY8f0/wCK3w3WaSzt/GFgqRIr/Pc7PLVmxtOcd+lb1r4m8PzZ+z63anyvv7Jlwteiat4d0nWLRlmtolkX54pXRT5bjkNg/r7Vm2T+G9D8OvNrH2WwTzZUe3lRfvhslAMEv97jA6NWNfgfDVHeM39wlmEn0OObxPodxa+ZBqtu6f31mVlb8ao6n4gt2s5JrW8iR7eJm+1O+5Lf1yf4v92uf+Nn7UHg/wAJ/wDFJ+G9B+1alNF/o+l2VsjXMy9On3IE+vPstfMnxk+Nlr4FuofFHxO1Xzb9c/2T4S0ub5FBX7hAxuGerPxnvXPh+C8PhsQpVKnMl0sW8XKUbWOt+LX7RWuWOg3158QLbTbLQEcurQXLvJqHzfIp3qp5P8A6/SvnTwB4E+J/7e3xkIeaXSvDumv/AKZeIm5NNgbnanZ7h16Dog5PGA+/8Ff2ff2gP+Ci/wAQk8V67cto3gyxuClxqypm2tVHDw2iHi4n/hLkbE790r9C/AP7JvwO+Fvhuz8J+Cfh7aw21rFt819zTTHvLI/V5D94uea+pq0aiw7hhrKVrLtH0Xc5uePNeRjeAfBvgX4X+DdN+H/gWwSz0rSrcRWtum7oP4ierOepY8k1t/2lD5fyzbq3m+D/AIJZhI3h6IZ/uOy/yNZLfAPwPHeTX0KakrStu8pNYm2L9Bn5RX59W4IxVSTk6ibbu2935nfHHRWiWhzfj7Uo38FazE0qlm0q4GD/ANcmrC+DmpxWvw00mB5iu5pl4/67SVvfEX4Jafovw/1q6sPEOtokFjPcCF9RMqkiNn2kuGbacYK5HBrlPg98PPE+seALLV7Pxpdw2twJfLtIo4SIiJmXgtGT1G7qeWP0r7unwriafg7iMOmrvH0Zfdh66/U8F4uMuLacn/z4n/6cgd19s/iZ+tV5rqGWX7Pn5RVLT/hP4m0/zF/4WXe3If8A5+LC3+U/gBVa8+GXjTywtr4/ferfO8+lIw59gRX5LU4SxyfQ+njjIWNWa5gyqrUfnRs33/up92si1+HfxEhtx53jmykkVm3OmjsB/wCjM1H/AMIL8TI5CreJ9OYfwf6BKv8A7PXJLhfG2LWMgbcdxu+86f7NObbu2/ermvD+g/EyPxJPpGsXNhNbW8UUqTxI6faA+75QDnaVK89iKs6hZ/FC0vitn4Gt54Fbakqawodv9rBSuGtw5mMHpH8TeGLpvqad4yiRW6dacsj48v8A2K5bVNW+KVuzeX8Jby42f88tYt/mHtlqqQ+OPiNHdJBd/BPXhG6/8fEVzauF/DzAf0rlnkGZpXUfxX+ZaxFJ9TsGml/i70xrhvmVe35tXH3HxK1qK4+zzfCvxRldu/8A0OLaoP8AFnzKyNU+PNroeoW2n6z8PfFdtJdeYYETR2mZgm3Lfut397of/iqzjkeZPVU2WsRStoz0GS4dtyqaredJvKs6LXI3Xxk0XTYWutW8Pa9Zx/e8240SXbt9yAdtU1/aC+HM0SzJeaoUdtqt/YN3tz9fL+7TWUY9WvB/cRKrT7nay3Cr8u/5qq3F58+3Fea6z+1v+z3pWof2bqvxT061uFyrxXTtGVx/CQQCv41btvjn8K9dhFzoXxL0a6SRNy+VqSH5fzrT+x8fFKUqUrejF7WlsmdRqt9a/aIo5djMm6VO/wAy8D/0Kp/Cuntb2cdtccu/zO/8W4815Wvxm8N+IvFx0jQdetb7zfKt99lMr+W25nfOPu4G2vWdBuhcQiRX+X+5Xd9Tr4aCjNb6mM5RlI67w5qH2WSLTrj7jv8Aunb+H/ZrrrVQse2uEt1aZRDImd1a83iKTT/DtxZ3l5/pku2C1f8AiZpG2Bvqu7cfzqJYeTtKPzJjU6EWta1p8180azICmf3X3T96qDXFvMvmNMq7v9ta05oY1X+/t+X7n3sVSkj/AOnP/wAcrwqsuebsdcIu2pUbVNPX/l5RlH91Gpkt1byy/K+7/Y+appG8tmjbev8AwCqzTNxGz/7lZuDZabT0OP8A2gPCvibxt8Idb8L+EXX7dd2u2KJ32+dhslM/w5+7X5reMfhp8S/Dd/cy+KvAGt2brKfNa40ebYv/AG0QGP8AWv1TmuIl+aSb5a7NfCemax8P28OaxCskF/bnz4pU+WRX9fwr9H4Dx9ejOeGULxfvX6rZHnZlyuEZNn3zdW8Ct5ckmfkZq5TXPEV9eXzaH4VSKa4X/j/vJd3k2o/u8ffk9EH4kU7xB4m1LXtZfwf4P3IsDD+2dW7W6/8APJD/ABTH+7/AOT2Bt2un2+m2/wBltU2RJ8yRKnr1YnqxPdj1r+kZyUtnofARXKrszLHS7fRWextbJv3372e8fDG4lPVn75//AFClumjt49zJ/wAAXqzVavLyO3j3SJtZn2qn3i1U2hlvplkvI2SJEDLs/iP936evrWbeyiapX1KX2W6vLi2k8mJYEZ2lZHYM3ooHRvfNXJtvksrfMg/GrHk7vlX5VX/IqC6W4w8bOiJt3b99Q2luPV6Ir3VvHMrKyfJs+d3qCOGBY9sdsiJH/qlf5Rx/GR7dhU1tZ3V1ItzJM626fdV+fMb+99PRatMvy7VOP/HqOVFGXp8kdxG0lnbXCofmS4uOPMx6Z5/4FiqevalH5nkzI87zMsUUS/fkbsoH8z2HJqzq2tW8EczI7zLD8jpEm5mc9EH94mjQ9BktWOsapzeTK30t0P8AAn9T3PtVQpxkxt9TJsdBjs1kubhFlvJ/knlVNoVV/gTPOwfr1pLu1aKNtqdK3Zo13bt/y/xVnblvLjy96Ki/N8/93+99T2rWUIWsTGUjGj09nb7VePvA+VV/hzXmP7Y7xP8As4+I2SP/AJ8wCwx/y+QdK9lNqrFdqLhd23/dryT9tUQxfs4+I0fYHYWnlgcnH22Dj2+te1wnSjHivL3/ANP6X/pyJw5vOTyrEf4J/wDpLLfwgis7X4NeFrp5X+TwzZSE7yQD9nTj/wCxrXsdPmvFbUL75JpV2pE//LNP7v1Pf8qyfgrH/aHwh8JysCtvD4esflkXBkkW3T5sdwvb1611S28m9ZF+YKv8VeNm9K+bYh/35/8ApTO7CTtg6a/ur8kebfFmzEfj7wTCpyG1YL+PmwCvyb+NuqjxX8XvGF7K3mPceMtVDev7u5eL/gX3MV+s/wAd7r+z/GHg24Fq0rw6i8phj+8/7yAhR7nHH1r8ePE9+upy3GvqyI+papfXUuzlczXMspwf4gC+0e1fFTjyZjX8rf8ApMT5nhb3uLc7/wAdH/0xTORvGkaN4WfiFm2VDpEKt5kbH5REzU7xFN9lhaPqN/8ADTrG43SMqjYuwr/47Scj9DMm6
huNSvSrPvVX/wDHaiaxihXzl5Y/Ls/9lrVsYVh868kfdK/yolQqsm51kTOH/ufdFZtlrYxjarJHtk+QB/4/4aoXUjeZ9nUZUMv/AAKt3Ula4aKNU/jqra+HbnVLjbHDna23YlReyKSuZE0MkknTbTls5mulkRH8tdtelaT8G9a1SFWhsNgVNzM+7Cr/AHq6PRvgLZ2d0JtU8Sbtu1ktbW2+dm/Gp5osfKzkvCfia00dY7OTejD5X2cbm+tTX3jDVtQ1ZpoJWRURlt4k+7GDxwPU/wB7qa9S0H9ktvFUj3zWb2EO357iWZgdv94jBFRL+zZfeD/HmkjQUn1W2ivUb7RcWbIvmBvk4P3gG+Yn/ZpNXdxpWOt0fwzb+BfCun+DGtohqUtnJda5cbMmNAqkqP7uNyqfdsVx/ijw3ceL7NdSjv2aVLco8Wz+A9MfSvQ9Nm0vXrPxN4g1HzRBPvtUl+bfJb286ptHvNM0jbu42/3a1rP4eX2jW8c1zDuuby3e4liVPkt4x0X8q4qy1vHc2hJbPY8F0fwjrUeny+F9Q0/el3taKX+7833TV7w/8NdQ0bUpLfVNKlifcy/73o1es2sdvouoC6vYUMc3/Pwm3/8Aarr28P291/xMJAqwqm5ZX+Xcx7fWuN4modMaMXqcZ4V+Etvq1nGuoOmy7Vk2/Kpj+X/x6vpv9nvzPDvh/wAPeH5vnhSw+zxfPu/1Tsm7NeLs11oNv9o2YVtrbV6fe7V7t+zvY6Zr3h2w8RfaWK2N5Mn2f/Zm5P4ArXTh5znBt7E1IxUke+SeH49ShWSL5d0Q/SiHwr5e3zNoVXrU8PXAuLHdC+5Fq5MiNtVof9qvoMLSjOkmeXiJclRowZtBa6vraz85h5KmeVW/iPRKuf2XJD+7k+Rk/jXrVrTZBJJNqDJ/rn/i/ujgVNeW7XUcSyTbF37mSL+JR1Umuh0YrUhVL6Gdbx7mZhvVd3y702/pXI+ILff8bNHi+b5tNY+/SevQobX5R93muH14AfHrRPMXj+zHz83+zcVw46naEP8AHH8z4bjybeFwF/8AoMwv/p2J0NxpMfmI+zaP4ot/ytT5rWzWNmbYCqbquzbGXzN/zfMv3Pu//qqheRxqsca/OzsPm+nJrdqKZ96pNrQrLoMckYbd8uwfLs+6arXfh2OT93H/APs1eaS6aZY2m3VYhjZptrJtDU4qMnYGrHkfxq8Ra94HhhXQ4v8ASXyzPPzHGo6P/tferxW6vPGkn/E017xhcT+ZuZWe5b5vl7AEDj2FfYGsaFouvf6Lq1nFcoy7djpu27v/ANmueuv2f/hfNCLa48H2skUT74klTO1v73+zWVbD1Z6xZ24bGUqMGpRuzxn9nX4reN9ctbjwvr2sRXMumqn2d9+6ZoT/AH8/eOa9bs/FkjP8yZZflTcmKvaD8I/h34b1B9Q8P+FbK0mkXbLcW8Kh2z6mrdx4RXznXyf9xaKKrQjaWpzV5U6k246JjLXxFHJcFWhXYqjY+/7zd+KvrqNrtaSTZtVN1ZEngu4aQiNG2/lWVrWh69pthPdRO6r9ndmTqrfLXZGrBL3kcsoSezOi0e8t7rSbZU4doFZ0f+Enk1h/FL4W/Dz4yeD7v4f/ABM8MWuq6RfJ+9tZ+sbdpY3HKOD8wcHINZ1vJrGmxoqQt+6iVV+T/ZpJPF2oKzLNC25P96tIzwsl7xMlVT0Pgv8AaH/4J6fG39mjXpvif+z7qt/4i0C2dnVbM/8AE109OpWSP/l7j/2kG/1Qn5qm/Zn/AOCm3jr4ca1CuqeJH0eVJdl7cRIxtJnHVZ4jzE/r6d6+5m8f27SBpk27W/ueleOftCfsxfsx/tD3D6t4m0T+xNeZNv8Awkeg7YJ89vNGNs49nBqJQpp81KS++z+THGU7Wkj2TwT+318E/jFHBcfE/SlhuZlCp4h0ubcknpmROcf7L5rtbe30rVmGseA/Ftlq1q8QXYsypJjdnr91v0r8tfH37Ev7RvwDv38Q/Bbx3B4ksUberaXMtreY3f8ALS2kJjf3IIz2FYvhP9t74tfDPUhZ+NfDbw3KS/v/ACnl06fI/wBg/I38qz+tVOa0rN+f+ZSpw6I/WK6vLe12rqkNxbMz/wDLxD8v4EZFUrOSx17WLiSx1K1kS3gWJNkyv87cv37DatfC/gP/AIKlWqtF/bniS/iw25ItUsGcL/20j44r1DRf+Cl3wx8Rbf7atvCV/JJ99HdI3Y/XApPEX+yNUl0kfTi6RqzaLYtpqZdPJ81GdfuD76k/SlvNKkjXzJjtx8uzoteOaL+2n+zjrVqy3Hw02On3ks/EMSrgdf4xWB8UP2svgDq2hJY+D7P+xdSWVZbe6vNbilhbHWKVN5LoR8pxg56GsZVYt/CXGDW7PZY9Y0XS7q+hudYtY1+0LJsa5XdtZF+b8StV9a+Mng3wybaG/wBQffeS7LXemxZnH8KPJhf1r5g+K37c3wJvtDTSdGudD0G9tpfNstU0bUmaa3uB/EiBMOPVHyD6V5Z8YP8Agot4Z8aaK3h3VrO68QQBF81LiwSKGYj+LMn3efSsoqa1SsVaJ9o/FL9pDXvB+hrfaDpVlMfN2y2VneJNeNEerR78R59s14l8Yv2itF/4R+9utU1VdAW8gX7VqWqXm6+X6HOFP+ylfK+l/HL9pL42Xn/CO/A74b3Wyb5Yn0uzaZlT/rvJtiT869a+Df8AwSl+K3xM8S22u/tM/Ed9NtriB53tdJuVvLzKsuYjLIvlwbg3Gxf4etWqkm1G7v2RDlGCvY8w1v8AaV1XV9Sj+H/7OXhK/vdW1B9q6l9gM15eN6xRD5vq74QdTXv/AOyt/wAEkPEnizV4Pil+2TrNwWncTt4TivN89xnoL2cZCj/pjFx2LkfLX2J8Bf2XvgP+zfoP9m/Bz4e2unzS4+26pL++vbrH8Ukz5ZvzxXoC/e/eP96uqGEk3eX/AAWcssRzOyMPwboei+H/AAzZ+H/Duj29hY6eptbeztYViihWNsbQg4X7taW3dJ5bfKKk02ER6lf27P1lWdE2f31/xWr/ANntQo3J/B9+tXQhFWRKndmS9uzN5av92o2t9sh3cVsNDDt+X/0CqjqZJEbqBWEqaa1NIzszkPi1bhPhX4icyZzoF53z/wAsHrlv2aLOGX4KaHuKhpBcnn0FzKDXa/F/Efws8UKAzAeH7wAr0H7h65T9mWMH4DaBNtUlGuev903Uua+vjSj/AMQzrL/qMpf+map5F78TQ/68y/8AS4HWNpcat8qL/v7KhuNHh3fND1q7eXUMUbTXE2wIm537baZLDJNnPT/Yr82q0Yn0XMZcljG3yMmBVabTbeRiqzPitSS1uGbap/3Khazut3H3lrilh09TS+pzOuaL9lvrPxAt55SWrMl0rdJIX459w+1g3b5qv3ELLujZau31iuoWc2m31tvgmiaKeL+FkZcFaw/Bd9cXdlNoWsf8f+jy/Zbj/pooXMcv/Ak2muOth4tFptaolMPmSY2JVKa13XLLsxW8LFXfctQ3lrbyMreZtl/h/wBqvOnQszaEtDnZ
[...base64-encoded image data omitted...])", "_____no_output_____" ] ], [ [ "# or\n!python tools/demo.py image -f exps/default/yolox_s.py -c /content/YOLOX/models/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result\n", "\u001b[32m2021-07-21 12:15:29\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36m219\u001b[0m - \u001b[1mArgs: Namespace(camid=0, ckpt='/content/YOLOX/models/yolox_s.pth.tar', conf=0.3, demo='image', exp_file='exps/default/yolox_s.py', experiment_name='yolox_s', fp16=False, fuse=False, name=None, nms=0.65, path='assets/dog.jpg', save_result=True, trt=False, tsize=640)\u001b[0m\n/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)\n return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)\n\u001b[32m2021-07-21 12:15:29\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36m229\u001b[0m - \u001b[1mModel Summary: Params: 8.97M, Gflops: 26.81\u001b[0m\n\u001b[32m2021-07-21 12:15:33\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36m240\u001b[0m - \u001b[1mloading checkpoint\u001b[0m\n\u001b[32m2021-07-21 12:15:33\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36m245\u001b[0m - \u001b[1mloaded checkpoint done.\u001b[0m\n\u001b[32m2021-07-21 12:15:34\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36m128\u001b[0m - \u001b[1mInfer time: 0.7156s\u001b[0m\n\u001b[32m2021-07-21 12:15:34\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36m165\u001b[0m - \u001b[1mSaving detection result in ./YOLOX_outputs/yolox_s/vis_res/2021_07_21_12_15_33/dog.jpg\u001b[0m\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a4ae54f97d629db549c20f8abe811d1e8aca04a
10,645
ipynb
Jupyter Notebook
novice/sql/08-create.ipynb
richford/2015-01-22-stonybrook
b9d5dd4af5df2ae39dcd1d7a8cfeeb07b6f35e26
[ "CC-BY-3.0" ]
null
null
null
novice/sql/08-create.ipynb
richford/2015-01-22-stonybrook
b9d5dd4af5df2ae39dcd1d7a8cfeeb07b6f35e26
[ "CC-BY-3.0" ]
null
null
null
novice/sql/08-create.ipynb
richford/2015-01-22-stonybrook
b9d5dd4af5df2ae39dcd1d7a8cfeeb07b6f35e26
[ "CC-BY-3.0" ]
null
null
null
38.992674
128
0.557445
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a4aea0af0faa3461ad6aee3ae7d8554cc2a82f5
176,927
ipynb
Jupyter Notebook
DifferentRegionsCorrelatedLatents/restandmove.ipynb
cindyhfls/NMA_DL_2021_project
6adc006936df6e6d1e0fa2744629d41980e329e7
[ "MIT" ]
null
null
null
DifferentRegionsCorrelatedLatents/restandmove.ipynb
cindyhfls/NMA_DL_2021_project
6adc006936df6e6d1e0fa2744629d41980e329e7
[ "MIT" ]
1
2021-08-06T20:52:32.000Z
2021-08-06T20:52:32.000Z
DifferentRegionsCorrelatedLatents/restandmove.ipynb
cindyhfls/NMA_DL_2021_project
6adc006936df6e6d1e0fa2744629d41980e329e7
[ "MIT" ]
1
2021-08-09T14:22:10.000Z
2021-08-09T14:22:10.000Z
174.140748
62,350
0.864961
[ [ [ "<a href=\"https://colab.research.google.com/github/cindyhfls/NMA_DL_2021_project/blob/main/DifferentRegionsCorrelatedLatents/restandmove.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Focus on what matters: inferring low-dimensional dynamics from neural recordings\n\n**By Neuromatch Academy**\n\n__Content creators:__ Marius Pachitariu, Pedram Mouseli, Lucas Tavares, Jonny Coutinho, \nBlessing Itoro, Gaurang Mahajan, Rishika Mohanta", "_____no_output_____" ], [ "**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**\n\n<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>", "_____no_output_____" ], [ "---\n# Objective: \nIt is very difficult to interpret the activity of single neurons in the brain, because their firing patterns are noisy, and it is not clear how a single neuron can contribute to cognition and behavior. However, neurons in the brain participate in local, regional and brainwide dynamics. No neuron is isolated from these dynamics, and much of a single neuron's activity can be predicted from the dynamics. Furthermore, only populations of neurons as a whole can control cognition and behavior. Hence it is crucial to identify these dynamical patterns and relate them to stimuli or behaviors. \n\nIn this notebook, we generate simulated data from a low-dimensional dynamical system and then use seq-to-seq methods to predict one subset of neurons from another. This allows us to identify the low-dimensional dynamics that are sufficient to explain the activity of neurons in the simulation. The methods described in this notebook can be applied to large-scale neural recordings of hundreds to tens of thousands of neurons, such as the ones from the NMA-CN course.", "_____no_output_____" ], [ "---\n# Setup", "_____no_output_____" ] ], [ [ "# Imports\nimport torch\nimport numpy as np\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom matplotlib import pyplot as plt\nimport math\nfrom sklearn.linear_model import LinearRegression\n\n\nimport copy", "_____no_output_____" ], [ "# @title Figure settings\nfrom matplotlib import rcParams\n\nrcParams['figure.figsize'] = [20, 4]\nrcParams['font.size'] = 15\nrcParams['axes.spines.top'] = False\nrcParams['axes.spines.right'] = False\nrcParams['figure.autolayout'] = True", "_____no_output_____" ], [ "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nprint(device)", "cpu\n" ], [ "def pearson_corr_tensor(input, output):\n rpred = output.detach().cpu().numpy()\n rreal = input.detach().cpu().numpy()\n rpred_flat = np.ndarray.flatten(rpred)\n rreal_flat = np.ndarray.flatten(rreal)\n corrcoeff = np.corrcoef(rpred_flat, rreal_flat)\n return corrcoeff[0,1]", "_____no_output_____" ], [ "#@title Set random seed\n\n#@markdown Executing `set_seed(seed=seed)` sets the random seed\n\n# for DL it's critical to set the random seed so that students can have a\n# baseline to compare their results to expected results.\n# Read more here: https://pytorch.org/docs/stable/notes/randomness.html\n\n# Call `set_seed` function in the exercises to ensure reproducibility.\nimport random\nimport torch\n\ndef set_seed(seed=None, seed_torch=True):\n if seed is None:\n seed = np.random.choice(2 ** 32)\n random.seed(seed)\n np.random.seed(seed)\n if seed_torch:\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n torch.cuda.manual_seed(seed)\n torch.backends.cudnn.benchmark = False\n torch.backends.cudnn.deterministic = True\n\n print(f'Random seed {seed} has been set.')\n\n\n# In case `DataLoader` is used\ndef seed_worker(worker_id):\n worker_seed = torch.initial_seed() % 2**32\n np.random.seed(worker_seed)\n random.seed(worker_seed)", "_____no_output_____" ] ],
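[ [ "As a concrete illustration of the objective above, the short sketch below simulates the kind of data this notebook targets: a population of Poisson-spiking neurons that are all driven by the same low-dimensional latent trajectory. This is a minimal toy example that assumes the setup cells above have run; all names in it (`ncomp`, `latents`, `readout`) are illustrative and not part of the Steinmetz data.", "_____no_output_____" ] ], [ [ "# Toy simulation (illustrative sketch, assuming the imports above have run):\n# many neurons, one shared ncomp-dimensional latent dynamical system.\nrng = np.random.default_rng(42)\nncomp, nNeurons, nBins = 3, 50, 200\n\n# latent dynamics: a slow random walk, standardized per dimension\nlatents = np.cumsum(rng.normal(size=(nBins, ncomp)), axis=0)\nlatents -= latents.mean(0)\nlatents /= latents.std(0)\n\n# each neuron reads out a random mixture of the latents\nreadout = rng.normal(size=(ncomp, nNeurons))\nrates = np.exp(0.5 * latents @ readout)  # positive firing rates\nspikes = rng.poisson(rates)  # Poisson counts, shape (nBins, nNeurons)\n\nplt.imshow(spikes.T, aspect='auto', cmap='gray_r')\nplt.xlabel('Time (bins)')\nplt.ylabel('Neuron #')", "_____no_output_____" ] ],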
[ [ "**Note:** If `cuda` is not enabled, go to `Runtime` --> `Change runtime type` and in `Hardware acceleration` choose `GPU`. ", "_____no_output_____" ] ], [ [ "# Data Loading\n\n#@title Data retrieval\nimport os, requests\n\nfname = []\nfor j in range(3):\n fname.append('steinmetz_part%d.npz'%j)\nurl = [\"https://osf.io/agvxh/download\"]\nurl.append(\"https://osf.io/uv3mw/download\")\nurl.append(\"https://osf.io/ehmw2/download\")\n\nfor j in range(len(url)):\n if not os.path.isfile(fname[j]):\n try:\n r = requests.get(url[j])\n except requests.ConnectionError:\n print(\"!!! Failed to download data !!!\")\n else:\n if r.status_code != requests.codes.ok:\n print(\"!!! Failed to download data !!!\")\n else:\n with open(fname[j], \"wb\") as fid:\n fid.write(r.content)\n\nalldat = np.array([])\nfor j in range(len(fname)):\n alldat = np.hstack((alldat, np.load('steinmetz_part%d.npz'%j, allow_pickle=True)['dat']))", "_____no_output_____" ], [ "#@title Print Keys\nprint(alldat[0].keys())", "dict_keys(['spks', 'wheel', 'pupil', 'response', 'response_time', 'bin_size', 'stim_onset', 'contrast_right', 'contrast_left', 'brain_area', 'feedback_time', 'feedback_type', 'gocue', 'mouse_name', 'date_exp', 'trough_to_peak', 'active_trials', 'contrast_left_passive', 'contrast_right_passive', 'spks_passive', 'pupil_passive', 'wheel_passive', 'prev_reward', 'ccf', 'ccf_axes', 'cellid_orig', 'reaction_time', 'face', 'face_passive', 'licks', 'licks_passive'])\n" ],
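[ "#@title Peek at the data shape (illustrative)\n# A quick sanity check, not part of the original notebook: the class below\n# assumes 'spks' is an array of shape nNeurons x nTrials x nTimeBins,\n# with 10 ms bins. The session index 0 here is an arbitrary example.\ndat = alldat[0]\nprint(dat['spks'].shape)  # (nNeurons, nTrials, nTimeBins)\nprint(dat['bin_size'])  # bin width in seconds; 0.01 corresponds to 10 ms bins", "_____no_output_____" ],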
[ "#@title Define Steinmetz Class\nclass SteinmetzSession:\n data = []\n binSize = 10\n nTrials = []\n nNeurons = []\n trialLen = 0\n trimStart = \"trialStart\"\n trimEnd = \"trialEnd\"\n def __init__(self, dataIn):\n self.data = copy.deepcopy(dataIn)\n dims1 = np.shape(dataIn['spks'])\n self.nTrials = dims1[1]\n self.nNeurons = dims1[0]\n self.trialLen = dims1[2]\n\n def binData(self, binSizeIn): # Combines every binSizeIn consecutive bins into one larger bin. Ex.: binSizeIn of 5 on the original dataset combines every five 10 ms bins into one 50 ms bin across all trials.\n varsToRebinSum = ['spks']\n varsToRebinMean = ['wheel', 'pupil']\n spikes = self.data['spks']\n histVec = range(0,self.trialLen+1, binSizeIn)\n spikesBin = np.zeros((self.nNeurons, self.nTrials, len(histVec)))\n print(histVec)\n for trial in range(self.nTrials):\n spikes1 = np.squeeze(spikes[:,trial,:])\n for time1 in range(len(histVec)-1):\n # slice ends are exclusive in Python, so this covers the full binSizeIn-wide window\n spikesBin[:,trial, time1] = np.sum(spikes1[:, histVec[time1]:histVec[time1+1]], axis=1)\n\n spikesBin = spikesBin[:,:,:-1]\n self.data['spks'] = spikesBin\n self.trialLen = len(histVec) - 1\n self.binSize = self.binSize*binSizeIn\n\n \n s = \"Binned spikes, turning a \" + repr(np.shape(spikes)) + \" matrix into a \" + repr(np.shape(spikesBin)) + \" matrix\"\n print(s)\n\n def plotTrial(self, trialNum): # Basic function to plot the firing rate during a single trial. Used for debugging trimming and binning\n plt.imshow(np.squeeze(self.data['spks'][:,trialNum,:]), cmap='gray_r', aspect = 'auto')\n plt.colorbar()\n plt.xlabel(\"Time (bins)\")\n plt.ylabel(\"Neuron #\")\n \n def realign_data_to_movement(self,length_time_in_ms): # input has to be nNeurons * nTrials * nbins\n align_time_in_bins = np.round(self.data['response_time']/self.binSize*1000) + int(500/self.binSize) # has to add 0.5 s because the first 0.5 s is pre-stimulus\n length_time_in_bins = int(length_time_in_ms/self.binSize)\n validtrials = self.data['response']!=0\n maxtime = self.trialLen\n newshape = (self.nNeurons,self.nTrials)\n newshape+=(length_time_in_bins,)\n newdata = np.empty(newshape)\n for count,align_time_curr_trial in enumerate(align_time_in_bins):\n if (validtrials[count]==0)|(align_time_curr_trial+length_time_in_bins>maxtime):\n validtrials[count] = 0\n else:\n newdata[:,count,:]= self.data['spks'][:,count,int(align_time_curr_trial):int(align_time_curr_trial)+length_time_in_bins]\n # newdata = newdata[:,validtrials,:]\n self.data['spks'] = newdata\n # self.validtrials = validtrials\n print('spikes aligned to movement, returning validtrials')\n return validtrials\n\n def realign_data_to_rest(self,length_time_in_ms): # input has to be nNeurons * nTrials * nbins\n align_time_in_bins = np.zeros(self.data['response_time'].shape)\n length_time_in_bins = int(length_time_in_ms/self.binSize)\n newshape = (self.nNeurons,self.nTrials)\n newshape+=(length_time_in_bins,)\n newdata = np.empty(newshape)\n for count,align_time_curr_trial in enumerate(align_time_in_bins):\n newdata[:,count,:]= self.data['spks'][:,count,int(align_time_curr_trial):int(align_time_curr_trial)+length_time_in_bins]\n self.data['spks'] = newdata\n print('spikes aligned to rest')\n \n def get_areas(self):\n print(set(list(self.data['brain_area'])))\n return set(list(self.data['brain_area']))\n\n def extractROI(self, region): #### extract neurons from a single region\n rmrt=list(np.where(self.data['brain_area']!=region))[0]\n print(f' removing data from {len(rmrt)} neurons not contained in {region} ')\n self.data['spks']=np.delete(self.data['spks'],rmrt,axis=0)\n neur=len(self.data['spks'])\n print(f'neurons remaining: {neur}')\n self.data['brain_area']=np.delete(self.data['brain_area'],rmrt,axis=0)\n self.data['ccf']=np.delete(self.data['ccf'],rmrt,axis=0)\n \n def FlattenTs(self):\n self.data['spks']=np.hstack(self.data['spks'][:])\n\n def removeTrialAvgFR(self):\n mFR = self.data['spks'].mean(1)\n mFR = np.expand_dims(mFR, 1).repeat(self.data['spks'].shape[1],axis = 1)\n print(np.shape(self.data['spks']))\n print(np.shape(mFR))\n self.data['spks'] = self.data['spks'].astype(float)\n self.data['spks'] -= mFR\n\n def permdims(self): # reorder to nTimeBins x nTrials x nNeurons, as expected by the RNN below\n return torch.permute(torch.tensor(self.data['spks']),(2,1,0))\n\n def smoothFR(self, smoothingWidth):# TODO: Smooth the data and save it back to the data structure\n return 0\n", "_____no_output_____" ] ],
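[ [ "A quick usage sketch of `SteinmetzSession` (illustrative only; the session index and brain-area label below are arbitrary examples, not choices from the original analysis):", "_____no_output_____" ] ], [ [ "# Illustrative usage of the class above (assumes alldat has been loaded).\nsess = SteinmetzSession(alldat[11])\nsess.get_areas()  # print which brain areas were recorded in this session\nvalid = sess.realign_data_to_movement(500)  # keep 500 ms starting at movement\nsess.extractROI('MOs')  # keep only neurons from one area (if present in this session)\nsess.plotTrial(np.flatnonzero(valid)[0])  # inspect one valid trial", "_____no_output_____" ] ],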
plt.title('Predicted Area')\n ### permute the trials\n PAdata = PA.permdims().float().to(device)\n PAdata = PAdata[:,validtrials,:]\n\n if IA_name == 'noise':\n # generate some negative controls:\n IAdata= torch.maximum(torch.randn(PAdata.shape),torch.zeros(PAdata.shape)) # for now say the shape of noise matches the predicted area, I doubt that matters?\n if plot_on:\n plt.figure()\n plt.imshow(np.squeeze(IAdata[:,nTr[1],:].numpy().T),cmap = 'gray_r',aspect = 'auto')\n plt.title('Random noise')\n IAdata = IAdata.float().to(device)\n else:\n IA = copy.deepcopy(curr_session)\n ###remove all neurons not in motor cortex\n IA.extractROI(IA_name)\n if plot_on:\n ### plot a trial from motor neuron\n plt.figure()\n IA.plotTrial(nTr[1])\n plt.title('Input Area')\n IAdata = IA.permdims().float().to(device)\n IAdata = IAdata[:,validtrials,:]\n\n ##@title get indices for trials (split into ~60%, 30%,10%)\n N = PAdata.shape[1]\n np.random.seed(42)\n ii = torch.randperm(N).tolist()\n idx_train = ii[:math.floor(0.6*N)]\n idx_val = ii[math.floor(0.6*N):math.floor(0.9*N)]\n idx_test = ii[math.floor(0.9*N):]\n\n ##@title split into train, test and validation set\n x0 = IAdata\n x0_train = IAdata[:,idx_train,:]\n x0_val = IAdata[:,idx_val,:]\n x0_test = IAdata[:,idx_test,:]\n\n x1 = PAdata\n x1_train = PAdata[:,idx_train,:]\n x1_val = PAdata[:,idx_val,:]\n x1_test = PAdata[:,idx_test,:]\n\n NN1 = PAdata.shape[2]\n NN2 = IAdata.shape[2]\n\n class Net_singleinput(nn.Module): # our model\n def __init__(self, ncomp, NN2, NN1, bidi=True): # NN2 is input dim, NN1 is output dim\n super(Net_singleinput, self).__init__()\n\n # play with some of the options in the RNN!\n self.rnn1 = nn.RNN(NN2, ncomp, num_layers = 1, dropout = 0, # PA\n bidirectional = bidi, nonlinearity = 'tanh')\n\n self.fc = nn.Linear(ncomp,NN1)\n\n def forward(self, x0):\n y = self.rnn1(x0)[0] # ncomp IAs\n\n if self.rnn1.bidirectional:\n # if the rnn is bidirectional, it concatenates the activations from the forward and backward pass\n # we want to add them instead, so as to enforce the latents to match between the forward and backward pass\n q = (y[:, :, :ncomp] + y[:, :, ncomp:])/2\n else:\n q = y\n\n # the softplus function is just like a relu but it's smoothed out so we can't predict 0\n # if we predict 0 and there was a spike, that's an instant Inf in the Poisson log-likelihood which leads to failure\n z = F.softplus(self.fc(q), 10)\n \n return z, q\n\n # @title train loop\n # you can keep re-running this cell if you think the cost might decrease further\n\n # we define the Poisson log-likelihood loss\n def Poisson_loss(lam, spk):\n return lam - spk * torch.log(lam)\n\n def train(net,train_input,train_output,val_input,val_output,niter = niter):\n set_seed(42)\n optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate_start)\n training_cost = []\n val_cost = []\n for k in range(niter):\n ### training\n optimizer.zero_grad()\n # the network outputs the single-neuron prediction and the latents\n z,_= net(train_input)\n # our log-likelihood cost\n cost = Poisson_loss(z, train_output).mean()\n # train the network as usual\n cost.backward()\n optimizer.step()\n training_cost.append(cost.item())\n\n ### test on validation data\n z_val,_ = net(val_input)\n cost = Poisson_loss(z_val, val_output).mean()\n val_cost.append(cost.item())\n\n if (k % 100 == 0)& verbose:\n print(f'iteration {k}, cost {cost.item():.4f}')\n \n return training_cost,val_cost\n\n # @title train model PA->PA only\n net_PAPA = Net_singleinput(ncomp, NN1, NN1, bidi = 
False).to(device)\n    net_PAPA.fc.bias.data[:] = x1.mean((0,1))\n    training_cost_PAPA,val_cost_PAPA = train(net_PAPA,x1_train,x1_train,x1_val,x1_val) # train\n\n    # @title train model IA->PA only\n    net_IAPA = Net_singleinput(ncomp, NN2, NN1, bidi = False).to(device)\n    net_IAPA.fc.bias.data[:] = x1.mean((0,1))\n    training_cost_IAPA,val_cost_IAPA = train(net_IAPA,x0_train,x1_train,x0_val,x1_val) # train\n\n    # get latents\n    z_PAPA,y_PAPA= net_PAPA(x1_train)\n    z_IAPA,y_IAPA= net_IAPA(x0_train)\n\n    #@title plot the training side-by-side\n    if plot_on:\n        plt.figure()\n        plt.plot(training_cost_PAPA,'b')\n        plt.plot(training_cost_IAPA,'b',linestyle = '--')\n\n        plt.plot(val_cost_PAPA,'r')\n        plt.plot(val_cost_IAPA,'r',linestyle = '--')\n\n        plt.legend(['training cost (PAPA)','training cost (IAPA)','validation cost (PAPA)',\n                    'validation cost (IAPA)'])\n        plt.title('Training cost over epochs')\n        plt.ylabel('cost')\n        plt.xlabel('epochs')\n\n        # see if the latents are correlated: PAPA latents in the top panel, IAPA latents below\n        plt.figure()\n        plt.subplot(2,1,1)\n        plt.plot(y_PAPA[:,0,:].detach().cpu().numpy())\n        plt.subplot(2,1,2)\n        plt.plot(y_IAPA[:,0,:].detach().cpu().numpy())\n\n    if verbose:\n        print(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,z_IAPA.flatten(start_dim = 0,end_dim = 1).T).mean())\n        print(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())\n        print(F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())\n\n    diff_cosine_similarity = torch.subtract(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean(),F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())\n    diff_cosine_similarity = diff_cosine_similarity.detach().cpu().tolist()\n\n    if plot_on:\n        plt.figure()\n        plt.hist(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).detach().cpu().numpy())\n        plt.hist(F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).detach().cpu().numpy())\n        plt.legend(('PAPA','IAPA'))\n        plt.title('cosine_similarity by neuron')\n\n    def regress_tensor(X,y):\n        X = X.detach().cpu().numpy()\n        y = y.flatten().detach().cpu().numpy().reshape(-1,1)\n        model = LinearRegression()\n        model.fit(X, y)\n        r_sq = model.score(X, y)\n        if verbose:\n            print('coefficient of determination:', r_sq)\n        return r_sq\n\n    # regress each PAPA latent on the full set of IAPA latents\n    rsqmat = []\n    for i in range(ncomp):\n        rsqmat.append(regress_tensor(y_IAPA.flatten(start_dim = 0,end_dim = 1),y_PAPA[:,:,i].reshape(-1,1)))\n\n    Avg_rsq = sum(rsqmat)/len(rsqmat)\n    max_rsq = max(rsqmat)\n    if verbose:\n        print('Average Rsq for predicting the %i latents in PAPA from a linear combination of %i latents in IAPA is %2.3f'%(ncomp,ncomp,Avg_rsq))\n        print('Max Rsq for predicting the %i latents in PAPA from a linear combination of %i latents in IAPA is %2.3f'%(ncomp,ncomp,max_rsq))\n\n\n    return diff_cosine_similarity,rsqmat", "_____no_output_____" ] ]
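, [ [ "The next cell is a minimal sanity-check sketch (random toy tensors with arbitrary shapes, not Steinmetz data) of the two comparison metrics that run_each_region applies to the RNN latents: the per-column cosine similarity from F.cosine_similarity, and the R^2 from regressing one set of latents on the other as in regress_tensor.", "_____no_output_____" ] ], [ [ "# Sanity-check sketch of the comparison metrics on toy data.\n# Shapes below are arbitrary assumptions, not taken from the recordings.\nimport torch\nimport torch.nn.functional as F\nfrom sklearn.linear_model import LinearRegression\n\nT, ntrials, ncomp_demo = 50, 8, 3\nlat_a = torch.randn(T, ntrials, ncomp_demo)              # stand-in for y_PAPA\nlat_b = lat_a + 0.5*torch.randn(T, ntrials, ncomp_demo)  # stand-in for y_IAPA, correlated with lat_a\n\n# per-component cosine similarity, same flatten/transpose pattern as in run_each_region\ncs = F.cosine_similarity(lat_a.flatten(0,1).T, lat_b.flatten(0,1).T).mean()\nprint('mean cosine similarity: %2.3f' % cs.item())\n\n# R^2 for predicting one lat_a component from all lat_b components, as in regress_tensor\nX = lat_b.flatten(0,1).numpy()\ny = lat_a.flatten(0,1)[:,0].numpy().reshape(-1,1)\nmodel = LinearRegression().fit(X, y)\nprint('R^2: %2.3f' % model.score(X, y))", "_____no_output_____" ] ]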
, [ [ "# select session and area", "_____no_output_____" ] ], [ [ "# set the sessions\nsession_num = 30\ncurr_session=SteinmetzSession(alldat[session_num])\n# some preprocessing\nvalidtrials = curr_session.realign_data_to_movement(500) # get 500 ms from movement time\n\n# had to load the session again for the at-rest alignment\ncurr_session=SteinmetzSession(alldat[session_num])\ncurr_session.realign_data_to_rest(500)\n\n# cannot get realign and binning to work at the same time =[\n# print areas\nareas = curr_session.get_areas()", "spikes aligned to movement, returning validtrials\nspikes aligned to rest\n{'CA3', 'OLF', 'TH', 'SCm', 'SNr', 'ORB', 'POST', 'MOs'}\n" ], [ "# CHANGE ME\n# Set input/hyperparameters here:\nncomp = 10\nlearning_rate_start = 0.005\n\n# set areas\nPA_name = 'MOs' # predicted area\n\nall_other_areas = ['noise']\nall_other_areas = all_other_areas+list(areas-set([PA_name]))\n\nprint(all_other_areas)\n\ncounter = 0\nfor IA_name in all_other_areas:\n    print(IA_name)\n    diff_cosine_similarity,rsqmat = run_each_region(PA_name,IA_name,ncomp,learning_rate_start)\n\n    if counter == 0:\n        allrsq = np.array(rsqmat)\n        cos_sim_mat = np.array(diff_cosine_similarity)\n    else:\n        allrsq = np.vstack((allrsq,np.array(rsqmat)))\n        cos_sim_mat = np.vstack((cos_sim_mat,np.array(diff_cosine_similarity)))\n    counter +=1\n", "['noise', 'CA3', 'OLF', 'TH', 'SCm', 'SNr', 'ORB', 'POST']\nnoise\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\nRandom seed 42 has been set.\nRandom seed 42 has been set.\nCA3\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\n removing data from 934 neurons not contained in CA3 \nneurons remaining in trial 43\nRandom seed 42 has been set.\nRandom seed 42 has been set.\nOLF\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\n removing data from 899 neurons not contained in OLF \nneurons remaining in trial 78\nRandom seed 42 has been set.\nRandom seed 42 has been set.\nTH\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\n removing data from 868 neurons not contained in TH \nneurons remaining in trial 109\nRandom seed 42 has been set.\nRandom seed 42 has been set.\nSCm\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\n removing data from 936 neurons not contained in SCm \nneurons remaining in trial 41\nRandom seed 42 has been set.\nRandom seed 42 has been set.\nSNr\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\n removing data from 860 neurons not contained in SNr \nneurons remaining in trial 117\nRandom seed 42 has been set.\nRandom seed 42 has been set.\nORB\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\n removing data from 686 neurons not contained in ORB \nneurons remaining in trial 291\nRandom seed 42 has been set.\nRandom seed 42 has been set.\nPOST\n removing data from 696 neurons not contained in MOs \nneurons remaining in trial 281\n removing data from 960 neurons not contained in POST \nneurons remaining in trial 17\nRandom seed 42 has been set.\nRandom seed 42 has been set.\n" ], [ "summary = {'output area':PA_name,'input area':all_other_areas,'cosine_similarity_difference':cos_sim_mat,'all rsq':allrsq};", "_____no_output_____" ], [ "avg_rsq = summary['all rsq'].mean(1)\nmax_rsq = summary['all rsq'].max(1)\nsort_index = np.argsort(np.array(avg_rsq)) # sort ascending by average Rsq\n\ninput_areas = np.array(summary['input area'])\n\nplt.figure()\nplt.plot(avg_rsq[sort_index])\nplt.plot(max_rsq[sort_index])\nplt.xticks(range(len(sort_index)),input_areas[sort_index])\nplt.legend(('average Rsq','max Rsq'))\nplt.title('Rsq in predicting '+summary['output area']+' latents');\nplt.ylabel('Rsq');\nplt.xlabel('Regions')", "_____no_output_____" ], [ "outfile = 'summary_arrays_rest.npz'\nnp.savez(outfile, **summary)", "_____no_output_____" ]
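, [ "# Quick round-trip check: np.savez stores every entry as an array, so strings\n# such as summary['output area'] come back from np.load as 0-d arrays -- which is\n# why .tolist() is used on it further down. (The 'demo_roundtrip.npz' filename\n# here is just for this illustration.)\ndemo = {'output area': 'MOs', 'all rsq': np.ones((2, 3))}\nnp.savez('demo_roundtrip.npz', **demo)\nloaded = np.load('demo_roundtrip.npz')\nprint(type(loaded['output area']))    # <class 'numpy.ndarray'> with shape ()\nprint(loaded['output area'].tolist()) # 'MOs'", "_____no_output_____" ]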
, [ "import numpy as np\nimport matplotlib.pyplot as plt\nsummary = np.load('summary_arrays_rest.npz')\nprint('summary.files: {}'.format(summary.files))\nprint('summary[\"input area\"]: {}'.format(summary[\"input area\"]))", "summary.files: ['output area', 'input area', 'cosine_similarity_difference', 'all rsq']\nsummary[\"input area\"]: ['noise' 'CA3' 'OLF' 'TH' 'SCm' 'SNr' 'ORB' 'POST']\n" ], [ "cos_sim_mat = np.array(summary['cosine_similarity_difference']).squeeze()\ninput_areas = summary['input area']\nsort_index = np.argsort(cos_sim_mat)[::-1] # sort descending by cosine-similarity difference\n\nplt.figure()\nplt.rcParams[\"figure.figsize\"] = (20, 10)\nplt.rcParams.update({'font.size': 20})\nplt.plot(cos_sim_mat[sort_index])\nplt.xticks(range(len(sort_index)),input_areas[sort_index])\nplt.title('Average difference in cosine similarity between area1-area1 prediction and area2-area1 prediction')\nplt.xlabel('input areas')\nplt.ylabel('cosine similarity')\n", "_____no_output_____" ], [ "summary_move = np.load('summary_arrays.npz')\nallrsq_move = summary_move['all rsq']\nindx =[0,4,1,3,6,5,7,2]  # reorder the movement-aligned areas to match the rest-aligned order\nallrsq_move = allrsq_move[indx,:]", "_____no_output_____" ], [ "avg_rsq = summary['all rsq'].mean(1)\nsort_index = np.argsort(np.array(avg_rsq)) # sort ascending by average Rsq\nprint(sort_index)\ninput_areas = summary['input area']\n\nplotx = np.array(range(1,9))\n\nplt.figure()\n_ = plt.boxplot(allrsq.T, positions = plotx-0.2,widths = 0.2,patch_artist = True,boxprops = dict(facecolor = 'blue'),medianprops = dict(color = 'white'))\n_ = plt.boxplot(allrsq_move.T, positions = plotx+0.2,widths = 0.2,patch_artist = True,boxprops = dict(facecolor = 'red'),medianprops = dict(color = 'white'))\nplt.xticks(range(1,len(sort_index)+1),input_areas)\nplt.title('Rsq in predicting '+summary['output area'].tolist()+' latents');\nplt.ylabel('Rsq');\nplt.xlabel('Regions')\nplt.legend(('rest','movement'))", "[0 7 1 2 5 4 3 6]\n" ], [ "print(summary['input area'])\nprint(summary_move['input area'])\ninput_area_move = summary_move['input area']\nindx =[0,4,1,3,6,5,7,2]\nprint(input_area_move[indx])", "['noise' 'CA3' 'OLF' 'TH' 'SCm' 'SNr' 'ORB' 'POST']\n['noise' 'OLF' 'POST' 'TH' 'CA3' 'SNr' 'SCm' 'ORB']\n['noise' 'CA3' 'OLF' 'TH' 'SCm' 'SNr' 'ORB' 'POST']\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4aeef99140be705129ff0a23c624ebec386e1b
693,635
ipynb
Jupyter Notebook
working/EDA_WM-BrianMc-topics-cluster1.ipynb
bdmckean/woot_math_analysis
d3308e413da2e41fff5060f10339b5ab4d01d690
[ "MIT" ]
null
null
null
working/EDA_WM-BrianMc-topics-cluster1.ipynb
bdmckean/woot_math_analysis
d3308e413da2e41fff5060f10339b5ab4d01d690
[ "MIT" ]
null
null
null
working/EDA_WM-BrianMc-topics-cluster1.ipynb
bdmckean/woot_math_analysis
d3308e413da2e41fff5060f10339b5ab4d01d690
[ "MIT" ]
null
null
null
44.808463
18,054
0.445757
[ [ [ "import pymongo\nimport pandas as pd\nimport numpy as np\n\nfrom pymongo import MongoClient\nfrom bson.objectid import ObjectId\n\nimport datetime\n\nimport matplotlib.pyplot as plt\n\nfrom collections import defaultdict\n\n\n%matplotlib inline\nimport json\nplt.style.use('ggplot')\n\nimport seaborn as sns\n\nfrom math import log10, floor", "_____no_output_____" ], [ "## Connect to local DB\n\nclient = MongoClient('localhost', 27017)\nprint (\"Setup db access\")", "Setup db access\n" ], [ "#\n# Get collections from mongodb\n#\n#db = client.my_test_db\ndb = client.test\n", "_____no_output_____" ], [ "chunk = 100000\nstart = 0\nend = start + chunk", "_____no_output_____" ], [ "reponses = db.anon_student_task_responses.find()[start:end]", "_____no_output_____" ], [ "df_responses = pd.DataFrame(list(reponses))", "_____no_output_____" ], [ "print (df_responses.head())", " _id behavioral_traits bonus correct \\\n0 5a00f1739100de1a390000d0 [] False True \n1 5a00f1739100de1a390000d5 [measuring_tools, orange_tick] False True \n2 5a00f1739100de1a390000d9 [] False True \n3 5a00f1739100de1a390000dc [] False True \n4 5a00f1739100de1a390000df [] False True \n\n diff id incomplete lesson \\\n0 0.000000 nvrm82_9Yv False nline_1b \n1 0.563288 jVG3p9f-20 False nline_1b \n2 0.601043 _NUUDSBMum False equivalence_0 \n3 0.686276 B6HmMEMpoL False equivalence_0 \n4 0.642014 IYWiIP26on False equivalence_0 \n\n level_summary \\\n0 {'entered': True, 'path': 'nline_1b', 'lm_stat... \n1 {'entered': True, 'path': 'nline_1b', 'lm_stat... \n2 {'subject': 'fractions', 'unit_name': 'frac_eq... \n3 {'subject': 'fractions', 'unit_name': 'frac_eq... \n4 {'subject': 'fractions', 'unit_name': 'frac_eq... \n\n problem_set ... \\\n0 lessons/fractions/lesson13_1/part_a/media/prob... ... \n1 lessons/fractions/lesson13_1/part_b/media/prob... ... \n2 lessons/fractions/lesson22_4/part_a/media/prob... ... \n3 lessons/fractions/lesson22_4/part_a/media/prob... ... \n4 lessons/fractions/lesson22_4/part_b/media/prob... ... \n\n screenshot_url second_try \\\n0 http://woot_math_cub.s3.amazonaws.com/ss/73538... False \n1 http://woot_math_cub.s3.amazonaws.com/ss/73538... False \n2 http://woot_math_cub.s3.amazonaws.com/ss/50253... False \n3 http://woot_math_cub.s3.amazonaws.com/ss/50253... False \n4 http://woot_math_cub.s3.amazonaws.com/ss/50253... False \n\n session_id \\\n0 efaa1ff0-47dc-4eab-89f1-93d6b76de5a5 \n1 efaa1ff0-47dc-4eab-89f1-93d6b76de5a5 \n2 c258e462-a7e8-411a-bfa8-7d949a8a9a79 \n3 c258e462-a7e8-411a-bfa8-7d949a8a9a79 \n4 c258e462-a7e8-411a-bfa8-7d949a8a9a79 \n\n student sublesson \\\n0 {'school_id': 'U4U2K7E1', 'grade': '3', 'secti... nline_1b.parta \n1 {'school_id': 'U4U2K7E1', 'grade': '3', 'secti... nline_1b.partb \n2 {'student_id': 'J3N5Q2O0C5', 'grade': '4', 'se... equivalence_0.parta \n3 {'student_id': 'J3N5Q2O0C5', 'grade': '4', 'se... equivalence_0.parta \n4 {'student_id': 'J3N5Q2O0C5', 'grade': '4', 'se... equivalence_0.partb \n\n t time_spent timestamp \\\n0 1.470065e+12 43677.0 1.470065e+12 \n1 1.470065e+12 24239.0 1.470065e+12 \n2 1.470065e+12 19765.0 1.470065e+12 \n3 1.470065e+12 7236.0 1.470065e+12 \n4 1.470065e+12 4808.0 1.470065e+12 \n\n txt untouched \n0 Use the 1/2 pieces to figure out how far the d... False \n1 Drag the panda to 4/4 of a yard from the start... False \n2 Model how many eighths are equal to one fourth... False \n3 Model how many halves are equal to four eighth... False \n4 Cameron ate 4/8 of a pizza.\\nCover the pizza... 
False \n\n[5 rows x 27 columns]\n" ] ], [ [ "df2 = df_responses.join(pd.DataFrame(df_responses[\"student\"].to_dict()).T)", "_____no_output_____" ], [ "df2 = df2.join(pd.DataFrame(df2['level_summary'].to_dict()).T)", "_____no_output_____" ], [ "df2 = df2.join(pd.DataFrame(df2['problems'].to_dict()).T)", "_____no_output_____" ], [ "df3 = df2.copy()", "_____no_output_____" ], [ "## Look at columns\nprint (df_responses.columns)", "Index(['_id', 'behavioral_traits', 'bonus', 'correct', 'diff', 'id',\n       'incomplete', 'lesson', 'level_summary', 'problem_set',\n       'problem_set_id', 'problem_set_subspace', 'qual_id',\n       'randomly_selected', 'response', 'response_idx', 'retried',\n       'screenshot_url', 'second_try', 'session_id', 'student', 'sublesson',\n       't', 'time_spent', 'timestamp', 'txt', 'untouched'],\n      dtype='object')\n" ], [ "## How many data samples\nprint (len(df_responses), \"Number of entries\")", "100000 Number of entries\n" ], [ "## Make 'description' a feature with important words mapped", "_____no_output_____" ], [ "df3.columns", "_____no_output_____" ], [ "df3['percent_correct'] = df3['nright'].astype(float) / df3['ntotal']", "_____no_output_____" ]
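, [ "# Quick aggregate sketch using only columns shown above ('lesson', 'correct',\n# 'time_spent'); the groupings here are illustrative, not part of the original analysis.\nlesson_stats = df3.groupby('lesson').agg(\n    n=('correct', 'size'),\n    pct_correct=('correct', 'mean'),\n    median_time_ms=('time_spent', 'median'))\nprint(lesson_stats.sort_values('pct_correct').head())", "_____no_output_____" ]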
, [ "df3.iloc[0]", "_____no_output_____" ], [ "for idx in range(100):\n    print ('index\"', idx)\n    print (df3.iloc[idx]['lesson'])\n    print (df3.iloc[idx]['response'])", "index\" 0\nnline_1b\n{'fraction_cblock_chains': [{'right': 442, 'sum': {'numerator': 1, 'denominator': 2, '__as3_type': 'Fraction'}, 'pieces': ['1/2'], 'left': 97, 'lcm_sum': {'numerator': 1, 'denominator': 2, '__as3_type': 'Fraction'}}], 'plain_image_groups': [{'total': 1, 'url': 'assets/cms/wootmath_fractions/number_line/markers/end_marker_noline.swf'}, {'total': 1, 'url': 'assets/cms/wootmath_fractions/number_line/markers/start_marker.swf'}, {'total': 1, 'url': 'assets/cms/wootmath_fractions/number_line/objects/dog.swf'}, {'total': 1, 'url': 'assets/cms/wootmath_fractions/number_line/objects/cat_dog_trail.swf'}], 'den': '2', 'fraction_input_value': '1/2', 'num': '1', 'fraction_cblock_total_count': 1, 'numberline_associations': [[]], 'fraction_cblock_counts': {'1/2': 1}, 'fraction_cblock_containment': {}, 'whole': ''}\nindex\" 1\nnline_1b\n{'fraction_cblock_total_count': 4, 'plain_image_groups': [{'total': 1, 'url': 'assets/cms/wootmath_fractions/number_line/objects/panda.swf'}, {'total': 1, 'url': 'assets/cms/wootmath_fractions/number_line/markers/start_marker.swf'}], 'input': '4', 'fraction_cblock_chains': [{'right': 856, 'sum': {'numerator': 1, 'denominator': 1, '__as3_type': 'Fraction'}, 'pieces': ['1/4', '1/4', '1/4', '1/4'], 'left': 165, 'lcm_sum': {'numerator': 4, 'denominator': 4, '__as3_type': 'Fraction'}}], 'numberline_associations': [[{'obj_value': None, 'position': 720.6521739130434, 'pos_value': 1.0009451795841209, 'obj_name': 'object'}]], 'fraction_cblock_counts': {'1/4': 4}, 'fraction_cblock_containment': {}}\n... [output truncated: entries for indexes 2 through 79, each a lesson id followed by its raw response dict in the same format, are omitted] ...\nindex\" 80\nadvanced_sub_0\n{'fraction_circle_total_count': 10, 'fraction_circle_containment': 
{'unit1': {'pieces': ['1/6', '1/6', '1/6', '1/6', '1/6'], 'sum': {'denominator': 6, 'numerator': 5, '__as3_type': 'Fraction'}, 'homogenous': True, 'lcm_sum': {'denominator': 6, 'numerator': 5, '__as3_type': 'Fraction'}}}, 'fraction_circle_counts': {'1': 2, '1/6': 8}, 'fraction_circle_groups': [{'pieces': ['1/6', '1/6', '1/6'], 'x': 847, 'scale': 0.9000000000000012, 'chains': [{'pieces': ['1/6', '1/6', '1/6'], 'left': 0, 'right': 180}], 'y': 165}, {'pieces': ['1'], 'x': 700, 'scale': 0.9, 'y': 350}, {'pieces': ['1/6', '1/6', '1/6', '1/6', '1/6', '1'], 'x': 325, 'scale': 0.9, 'chains': [{'pieces': ['1/6', '1/6', '1/6', '1/6', '1/6'], 'left': 300, 'right': 0}], 'y': 350}]}\nindex\" 81\nadvanced_sub_1\n{'fraction_circle_total_count': 9, 'fraction_circle_containment': {'[Fraction] 1': {'pieces': ['1/4', '1/4', '1/4', '1/4'], 'sum': {'denominator': 1, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'lcm_sum': {'denominator': 4, 'numerator': 4, '__as3_type': 'Fraction'}}, 'circle1': {'pieces': ['1/4', '1/4'], 'sum': {'denominator': 2, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'lcm_sum': {'denominator': 4, 'numerator': 2, '__as3_type': 'Fraction'}}}, 'fraction_circle_counts': {'1': 2, '1/4': 7}, 'whole': '1', 'fraction_circle_groups': [{'pieces': ['1/4'], 'x': 920, 'scale': 0.8000000000000003, 'y': 370}, {'pieces': ['1/4', '1/4', '1'], 'x': 725, 'scale': 0.8, 'chains': [{'pieces': ['1/4', '1/4'], 'left': 270, 'right': 90}], 'y': 350}, {'pieces': ['1/4', '1/4', '1/4', '1/4', '1'], 'x': 275, 'scale': 0.8, 'chains': [{'pieces': ['1/4', '1/4', '1/4', '1/4'], 'left': 0, 'right': 0}], 'y': 350}], 'den': '2', 'fraction_input_value': '1 1/2', 'num': '1'}\nindex\" 82\nadvanced_sub_1\n{'fraction_circle_total_count': 25, 'fraction_circle_containment': {'[Fraction] 1': {'pieces': ['1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8'], 'sum': {'denominator': 1, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'lcm_sum': {'denominator': 8, 'numerator': 8, '__as3_type': 'Fraction'}}, 'circle1': {'pieces': ['1/8', '1/8', '1/8', '1/8'], 'sum': {'denominator': 2, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'lcm_sum': {'denominator': 8, 'numerator': 4, '__as3_type': 'Fraction'}}}, 'fraction_circle_counts': {'1': 3, '1/8': 22}, 'whole': '2', 'fraction_circle_groups': [{'pieces': ['1/8'], 'x': 671, 'scale': 0.8, 'y': 602}, {'pieces': ['1/8'], 'x': 743, 'scale': 0.8, 'y': 502}, {'pieces': ['1/8', '1/8', '1/8', '1/8', '1'], 'x': 850, 'scale': 0.8, 'chains': [{'pieces': ['1/8', '1/8', '1/8'], 'left': 0, 'right': 225}], 'y': 350}, {'pieces': ['1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1'], 'x': 525, 'scale': 0.8, 'chains': [{'pieces': ['1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8'], 'left': 0, 'right': 0}], 'y': 350}, {'pieces': ['1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1'], 'x': 200, 'scale': 0.8, 'chains': [{'pieces': ['1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8', '1/8'], 'left': 0, 'right': 0}], 'y': 350}], 'den': '', 'fraction_input_value': '2 ', 'num': ''}\nindex\" 83\nadvanced_sub_1\nNone\nindex\" 84\nexplore_fract_1_v2\n{'fraction_circle_containment': {'[Fraction] 1': {'sum': {'denominator': 1, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'pieces': ['1/3', '1/3', '1/3'], 'lcm_sum': {'denominator': 3, 'numerator': 3, '__as3_type': 'Fraction'}}}, 'fraction_circle_total_count': 5, 'input_a': '3', 'fraction_circle_counts': {'1': 1, '1/3': 3, '1/4': 1}, 'input': '3', 
'fraction_circle_groups': [{'x': 300, 'scale': 0.9999999999999981, 'chains': [{'pieces': ['1/3', '1/3', '1/3'], 'left': 120, 'right': 120}], 'pieces': ['1/3', '1/3', '1/3', '1'], 'y': 300}, {'x': 470, 'scale': 1, 'pieces': ['1/4'], 'y': 857}]}\nindex\" 85\nexplore_fract_1_v2\n{'fraction_circle_containment': {'[Fraction] 1/2': {'sum': {'denominator': 12, 'numerator': 7, '__as3_type': 'Fraction'}, 'homogenous': True, 'pieces': ['1/12', '1/12', '1/12', '1/12', '1/12', '1/12', '1/12'], 'lcm_sum': {'denominator': 12, 'numerator': 7, '__as3_type': 'Fraction'}}, '[Fraction] 1/12': {'sum': {'denominator': 12, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'pieces': ['1/12'], 'lcm_sum': {'denominator': 12, 'numerator': 1, '__as3_type': 'Fraction'}}}, 'fraction_circle_total_count': 8, 'input_a': '5', 'fraction_circle_counts': {'1/12': 7, '1/2': 1}, 'input': '5', 'fraction_circle_groups': [{'x': 300, 'scale': 1, 'chains': [{'pieces': ['1/12', '1/12', '1/12', '1/12', '1/12', '1/12'], 'left': 90, 'right': 270}], 'pieces': ['1/12', '1/12', '1/12', '1/12', '1/12', '1/12', '1/12', '1/2'], 'y': 300}]}\nindex\" 86\nexplore_fract_1_v2\n{'fraction_circle_containment': {'[Fraction] 1/4': {'sum': {'denominator': 5, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'pieces': ['1/15', '1/15', '1/15'], 'lcm_sum': {'denominator': 15, 'numerator': 3, '__as3_type': 'Fraction'}}, '[Fraction] 1/7': {'sum': {'denominator': 10, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'pieces': ['1/10'], 'lcm_sum': {'denominator': 10, 'numerator': 1, '__as3_type': 'Fraction'}}}, 'fraction_circle_total_count': 11, 'input_a': '4', 'fraction_circle_counts': {'1/10': 2, '1/4': 1, '1/7': 2, '1/15': 6}, 'input': '4', 'fraction_circle_groups': [{'x': 300, 'scale': 1, 'chains': [{'pieces': ['1/15', '1/15', '1/15', '1/15', '1/15'], 'left': 30, 'right': 270}], 'pieces': ['1/15', '1/15', '1/15', '1/15', '1/15', '1/4'], 'y': 300}, {'x': 349, 'scale': 0.9999999999999994, 'chains': [{'pieces': ['1/15', '1/10'], 'left': 0, 'right': 300}], 'pieces': ['1/10', '1/15'], 'y': 575}, {'x': 467, 'scale': 0.9999999999999983, 'pieces': ['1/10', '1/7'], 'y': 523}, {'x': 770, 'scale': 0.9999999999999992, 'pieces': ['1/7'], 'y': 536}]}\nindex\" 87\nexplore_fract_1_v2\n{'fraction_circle_containment': {'[Fraction] 1/9': {'sum': {'denominator': 9, 'numerator': 1, '__as3_type': 'Fraction'}, 'homogenous': True, 'pieces': ['1/9'], 'lcm_sum': {'denominator': 9, 'numerator': 1, '__as3_type': 'Fraction'}}}, 'fraction_circle_total_count': 12, 'input_a': '54525', 'fraction_circle_counts': {'1/3': 1, '1/9': 11}, 'input': '54525', 'fraction_circle_groups': [{'x': 300, 'scale': 1, 'chains': [{'pieces': ['1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9'], 'left': 7, 'right': 327}], 'pieces': ['1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/9', '1/3'], 'y': 300}]}\nindex\" 88\nexplore_fract_1_v2\nNone\nindex\" 89\nmult_whole_frac_6\n{'numberline_associations': [[{'obj_name': None, 'position': 578.9560975609756, 'obj_value': 'A', 'pos_value': 2.622976316719689}]], 'input': '20'}\nindex\" 90\nmult_whole_frac_6\n{'numberline_associations': [[{'obj_name': None, 'position': 588.9463414634146, 'obj_value': 'A', 'pos_value': 2.0034287734181686}]], 'input': '15'}\nindex\" 91\nmult_whole_frac_6\n{'numberline_associations': [[{'obj_name': None, 'position': 532.7512195121951, 'obj_value': 'A', 'pos_value': 2.3997643454695416}]], 'input': '20'}\nindex\" 
92\nmult_whole_frac_6\n{'numberline_associations': [[{'obj_name': None, 'position': 495.7301728780492, 'obj_value': 'A', 'pos_value': 1.6656890321668452}]], 'input': '18'}\nindex\" 93\ndivision_11\n{'fraction_circle_counts': {'1': 8}, 'fraction_circle_groups': [{'x': 850, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 625, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 400, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 175, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 850, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 625, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 400, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 175, 'scale': 0.55, 'pieces': ['1'], 'y': 325}], 'fraction_circle_total_count': 8, 'fraction_circle_containment': {}}\nindex\" 94\ndivision_11\n{'fraction_circle_counts': {'1': 8}, 'fraction_circle_groups': [{'x': 850, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 625, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 400, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 175, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 850, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 625, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 400, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 175, 'scale': 0.55, 'pieces': ['1'], 'y': 325}], 'fraction_circle_total_count': 8, 'fraction_circle_containment': {}}\nindex\" 95\ndivision_11\n{'fraction_circle_counts': {'1': 8}, 'fraction_circle_groups': [{'x': 850, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 625, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 400, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 175, 'scale': 0.55, 'pieces': ['1'], 'y': 535}, {'x': 850, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 625, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 400, 'scale': 0.55, 'pieces': ['1'], 'y': 325}, {'x': 175, 'scale': 0.55, 'pieces': ['1'], 'y': 325}], 'fraction_circle_total_count': 8, 'fraction_circle_containment': {}}\nindex\" 96\nparts_whole_1\n{'radio_group_mc2': {'choice': 'B', 'text': 'No'}, 'radio_group_mc1': {'choice': 'A', 'text': 'Yes'}, 'plain_image_groups': [{'url': 'assets/cms/wootmath_fractions/equal_parts/fourths/fourth_13.swf', 'total': 1}, {'url': 'assets/cms/wootmath_fractions/equal_parts/fourths/fourth_12.swf', 'total': 1}]}\nindex\" 97\nparts_whole_1\n{'radio_group_mc2': {'choice': 'A', 'text': 'Yes'}, 'radio_group_mc1': {'choice': 'A', 'text': 'Yes'}, 'plain_image_groups': [{'url': 'assets/cms/wootmath_fractions/equal_parts/fourths/fourth_12.swf', 'total': 1}, {'url': 'assets/cms/wootmath_fractions/equal_parts/fourths/fourth_25.swf', 'total': 1}]}\nindex\" 98\nparts_whole_1\n{'radio_group_mc2': {'choice': 'A', 'text': 'Yes'}, 'radio_group_mc1': {'choice': 'B', 'text': 'No'}, 'plain_image_groups': [{'url': 'assets/cms/wootmath_fractions/equal_parts/fourths/fourth_08.swf', 'total': 1}, {'url': 'assets/cms/wootmath_fractions/equal_parts/fourths/fourth_09.swf', 'total': 1}]}\nindex\" 99\nparts_whole_1\n{'radio_group_mc2': {'choice': 'A', 'text': 'Yes'}, 'radio_group_mc1': {'choice': 'B', 'text': 'No'}, 'plain_image_groups': [{'url': 'assets/cms/wootmath_fractions/equal_parts/ninths/ninth_02.swf', 'total': 1}, {'url': 'assets/cms/wootmath_fractions/equal_parts/ninths/ninth_01.swf', 'total': 1}]}\n" ], [ "def stringify_response(resp):\n    # Flatten a nested response dict into plain text: key/value pairs\n    # become key_value tokens, then braces, brackets, quotes, and\n    # commas are stripped out.\n    my_val = str(resp).replace(\"': \",\"_\")\n    my_val = my_val.replace(\"_{\",\" \")\n    my_val = my_val.replace(\"_[\",\", \")\n    for c in [']', '[', '{', '}', \"'\", ',']:\n        my_val = my_val.replace(c, '')\n    return my_val\n", "_____no_output_____" ], [ 
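"# Quick sanity check of stringify_response on a hypothetical toy record\n# (not a real df3 row): nested dicts flatten to space-separated tokens,\n# e.g. {'den': '2', 'num': '1'} -> 'den_2 num_1'.\nstringify_response({'den': '2', 'num': '1', 'fraction_cblock_counts': {'1/2': 1}})", "_____no_output_____" ], [ 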
"stringify_response(df3.iloc[0]['response'])", "_____no_output_____" ], [ "df3['response_str'] = df3['response'].apply(stringify_response)", "_____no_output_____" ], [ "for idx in range(20):\n print (idx, df3['response_str'].iloc[idx])", "0 fraction_cblock_chains right_442 sum numerator_1 denominator_2 __as3_type_Fraction pieces 1/2 left_97 lcm_sum numerator_1 denominator_2 __as3_type_Fraction plain_image_groups total_1 url_assets/cms/wootmath_fractions/number_line/markers/end_marker_noline.swf total_1 url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/dog.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/cat_dog_trail.swf den_2 fraction_input_value_1/2 num_1 fraction_cblock_total_count_1 numberline_associations fraction_cblock_counts 1/2_1 fraction_cblock_containment whole_\n1 fraction_cblock_total_count_4 plain_image_groups total_1 url_assets/cms/wootmath_fractions/number_line/objects/panda.swf total_1 url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf input_4 fraction_cblock_chains right_856 sum numerator_1 denominator_1 __as3_type_Fraction pieces 1/4 1/4 1/4 1/4 left_165 lcm_sum numerator_4 denominator_4 __as3_type_Fraction numberline_associations obj_value_None position_720.6521739130434 pos_value_1.0009451795841209 obj_name_object fraction_cblock_counts 1/4_4 fraction_cblock_containment \n2 fraction_cblock_chains left_176 lcm_sum numerator_2 denominator_8 __as3_type_Fraction right_348 pieces 1/8 1/8 sum numerator_1 denominator_4 __as3_type_Fraction left_590 lcm_sum numerator_1 denominator_6 __as3_type_Fraction right_705 pieces 1/6 sum numerator_1 denominator_6 __as3_type_Fraction left_176 lcm_sum numerator_1 denominator_4 __as3_type_Fraction right_348 pieces 1/4 sum numerator_1 denominator_4 __as3_type_Fraction left_176 lcm_sum numerator_1 denominator_1 __as3_type_Fraction right_866 pieces 1 sum numerator_1 denominator_1 __as3_type_Fraction fraction_cblock_total_count_5 fraction_cblock_counts 1_1 1/8_2 1/6_1 1/4_1 fraction_cblock_containment piece0 lcm_sum numerator_2 denominator_8 __as3_type_Fraction homogenous_True pieces 1/8 1/8 sum numerator_1 denominator_4 __as3_type_Fraction\n3 fraction_cblock_chains left_176 lcm_sum numerator_1 denominator_2 __as3_type_Fraction right_521 pieces 1/2 sum numerator_1 denominator_2 __as3_type_Fraction left_176 lcm_sum numerator_4 denominator_8 __as3_type_Fraction right_521 pieces 1/8 1/8 1/8 1/8 sum numerator_1 denominator_2 __as3_type_Fraction left_176 lcm_sum numerator_1 denominator_1 __as3_type_Fraction right_866 pieces 1 sum numerator_1 denominator_1 __as3_type_Fraction fraction_cblock_total_count_6 fraction_cblock_counts 1_1 1/2_1 1/8_4 fraction_cblock_containment Fraction 1/2 lcm_sum numerator_4 denominator_8 __as3_type_Fraction homogenous_True pieces 1/8 1/8 1/8 1/8 sum numerator_1 denominator_2 __as3_type_Fraction\n4 fraction_circle_containment Fraction 1/2 lcm_sum numerator_4 denominator_8 __as3_type_Fraction homogenous_True pieces 1/8 1/8 1/8 1/8 sum numerator_1 denominator_2 __as3_type_Fraction fraction_circle_total_count_6 fraction_circle_groups x_512 y_300 scale_0.9999999999999991 pieces 1/2 1/8 1/8 1/8 1/8 1 chains right_180 pieces 1/8 1/8 1/8 1/8 left_0 fraction_circle_counts 1_1 1/2_1 1/8_4\n5 image_object_groups total_6 on_3 url_assets/objects/singles/watch.swf off_3\n6 None\n7 None\n8 fraction_circle_groups x_512 scale_1 chains pieces 1/8 1/8 1/8 1/8 left_0 right_180 pieces 1/8 1/8 1/8 1/8 1/2 1 y_300 
fraction_circle_containment piece_0 sum denominator_2 numerator_1 __as3_type_Fraction homogenous_True pieces 1/8 1/8 1/8 1/8 lcm_sum denominator_8 numerator_4 __as3_type_Fraction fraction_circle_counts 1_1 1/2_1 1/8_4 fraction_circle_total_count_6\n9 fraction_circle_groups x_512 scale_0.9999999999999994 chains pieces 1/8 1/8 1/8 1/8 left_0 right_180 pieces 1/2 1/8 1/8 1/8 1/8 1 y_300 fraction_circle_containment Fraction 1/2 sum denominator_2 numerator_1 __as3_type_Fraction homogenous_True pieces 1/8 1/8 1/8 1/8 lcm_sum denominator_8 numerator_4 __as3_type_Fraction fraction_circle_counts 1_1 1/2_1 1/8_4 fraction_circle_total_count_6\n10 fraction_circle_groups x_512 scale_0.9999999999999993 chains pieces 1/4 1/4 left_0 right_180 pieces 1/2 1/4 1/4 1 y_300 fraction_circle_containment Fraction 1/2 sum denominator_2 numerator_1 __as3_type_Fraction homogenous_True pieces 1/4 1/4 lcm_sum denominator_4 numerator_2 __as3_type_Fraction fraction_circle_counts 1_1 1/2_1 1/4_2 fraction_circle_total_count_4\n11 radio_choice_C radio_group_problem choice_C text_3/6 radio_text_3/6\n12 fraction_cblock_chains sum denominator_10 numerator_1 __as3_type_Fraction lcm_sum denominator_10 numerator_1 __as3_type_Fraction pieces 1/10 left_1024 right_1458 sum denominator_5 numerator_1 __as3_type_Fraction lcm_sum denominator_10 numerator_2 __as3_type_Fraction pieces 1/10 1/10 left_1024 right_1297 sum denominator_10 numerator_1 __as3_type_Fraction lcm_sum denominator_10 numerator_1 __as3_type_Fraction pieces 1/10 left_1024 right_1531 sum denominator_10 numerator_1 __as3_type_Fraction lcm_sum denominator_10 numerator_1 __as3_type_Fraction pieces 1/10 left_1024 right_1214 sum denominator_10 numerator_1 __as3_type_Fraction lcm_sum denominator_10 numerator_1 __as3_type_Fraction pieces 1/10 left_1024 right_1424 sum denominator_5 numerator_2 __as3_type_Fraction lcm_sum denominator_10 numerator_4 __as3_type_Fraction pieces 1/10 1/10 1/10 1/10 left_544 right_820 sum denominator_84 numerator_73 __as3_type_Fraction lcm_sum denominator_84 numerator_73 __as3_type_Fraction pieces 1/7 1/7 1/6 1/6 1/4 left_1001 right_1272 sum denominator_35 numerator_17 __as3_type_Fraction lcm_sum denominator_35 numerator_17 __as3_type_Fraction pieces 1/7 1/7 1/5 left_981 right_1316 sum denominator_28 numerator_11 __as3_type_Fraction lcm_sum denominator_28 numerator_11 __as3_type_Fraction pieces 1/7 1/4 left_1001 right_1272 sum denominator_7 numerator_1 __as3_type_Fraction lcm_sum denominator_7 numerator_1 __as3_type_Fraction pieces 1/7 left_1024 right_1300 sum denominator_7 numerator_1 __as3_type_Fraction lcm_sum denominator_7 numerator_1 __as3_type_Fraction pieces 1/7 left_1024 right_1248 sum denominator_6 numerator_1 __as3_type_Fraction lcm_sum denominator_6 numerator_1 __as3_type_Fraction pieces 1/6 left_1024 right_1316 sum denominator_6 numerator_1 __as3_type_Fraction lcm_sum denominator_6 numerator_1 __as3_type_Fraction pieces 1/6 left_1024 right_1387 sum denominator_6 numerator_1 __as3_type_Fraction lcm_sum denominator_6 numerator_1 __as3_type_Fraction pieces 1/6 left_1024 right_1220 sum denominator_6 numerator_1 __as3_type_Fraction lcm_sum denominator_6 numerator_1 __as3_type_Fraction pieces 1/6 left_1024 right_1387 sum denominator_5 numerator_1 __as3_type_Fraction lcm_sum denominator_5 numerator_1 __as3_type_Fraction pieces 1/5 left_1024 right_1358 sum denominator_5 numerator_3 __as3_type_Fraction lcm_sum denominator_5 numerator_3 __as3_type_Fraction pieces 1/5 1/5 1/5 left_1024 right_1337 sum denominator_2 numerator_1 __as3_type_Fraction 
lcm_sum denominator_4 numerator_2 __as3_type_Fraction pieces 1/4 1/4 left_1024 right_1523 sum denominator_4 numerator_1 __as3_type_Fraction lcm_sum denominator_4 numerator_1 __as3_type_Fraction pieces 1/4 left_1024 right_1272 sum denominator_4 numerator_1 __as3_type_Fraction lcm_sum denominator_4 numerator_1 __as3_type_Fraction pieces 1/4 left_1024 right_1358 sum denominator_2 numerator_1 __as3_type_Fraction lcm_sum denominator_2 numerator_1 __as3_type_Fraction pieces 1/2 left_1024 right_1531 sum denominator_2 numerator_1 __as3_type_Fraction lcm_sum denominator_4 numerator_2 __as3_type_Fraction pieces 1/4 1/4 left_1024 right_1389 sum denominator_4 numerator_1 __as3_type_Fraction lcm_sum denominator_4 numerator_1 __as3_type_Fraction pieces 1/4 left_1024 right_1216 sum denominator_4 numerator_1 __as3_type_Fraction lcm_sum denominator_4 numerator_1 __as3_type_Fraction pieces 1/4 left_1024 right_1351 sum denominator_1 numerator_1 __as3_type_Fraction lcm_sum denominator_1 numerator_1 __as3_type_Fraction pieces 1 left_1024 right_2045 sum denominator_1 numerator_1 __as3_type_Fraction lcm_sum denominator_1 numerator_1 __as3_type_Fraction pieces 1 left_130 right_820 fraction_cblock_containment bar1 sum denominator_5 numerator_2 __as3_type_Fraction homogenous_True pieces 1/10 1/10 1/10 1/10 lcm_sum denominator_10 numerator_4 __as3_type_Fraction Fraction 1/4 sum denominator_5 numerator_1 __as3_type_Fraction homogenous_True pieces 1/5 lcm_sum denominator_5 numerator_1 __as3_type_Fraction Fraction 1 sum denominator_10 numerator_1 __as3_type_Fraction homogenous_True pieces 1/10 lcm_sum denominator_10 numerator_1 __as3_type_Fraction fraction_cblock_total_count_41 fraction_cblock_counts 1_2 1/7_7 1/4_10 1/6_6 1/5_5 1/2_1 1/10_10\n13 whole_ fraction_input_value_4/6 fraction_cblock_chains sum denominator_3 numerator_2 __as3_type_Fraction lcm_sum denominator_6 numerator_4 __as3_type_Fraction pieces 1/6 1/6 1/6 1/6 left_96 right_522 fraction_cblock_containment num_4 plain_image_groups total_1 url_assets/cms/wootmath_fractions/number_line/markers/end_marker.swf total_1 url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/beetle.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/beetle_trail.swf numberline_associations den_6 fraction_cblock_total_count_4 fraction_cblock_counts 1/6_4\n14 plain_image_groups total_1 url_assets/cms/wootmath_fractions/number_line/objects/panda.swf total_1 url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf fraction_cblock_containment input_8 numberline_associations position_634 pos_value_0.8753623188405797 obj_name_object obj_value_None fraction_cblock_chains sum denominator_8 numerator_7 __as3_type_Fraction lcm_sum denominator_8 numerator_7 __as3_type_Fraction pieces 1/8 1/8 1/8 1/8 1/8 1/8 1/8 left_165 right_769 fraction_cblock_total_count_7 fraction_cblock_counts 1/8_7\n15 numberline_associations input_8\n16 numberline_associations position_580.5 pos_value_0.9972826086956521 obj_name_answer_text obj_value_3/3 input_\n17 plain_image_groups total_1 url_assets/cms/wootmath_fractions/number_line/objects/shark.swf total_1 url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf fraction_cblock_containment input_6 numberline_associations position_722 pos_value_1.0028985507246377 obj_name_object obj_value_None fraction_cblock_chains sum denominator_1 numerator_1 __as3_type_Fraction lcm_sum denominator_6 numerator_6 __as3_type_Fraction pieces 1/6 1/6 1/6 1/6 1/6 
1/6 left_165 right_856 fraction_cblock_total_count_6 fraction_cblock_counts 1/6_6\n18 whole_ fraction_input_value_1/3 fraction_cblock_chains sum denominator_1 numerator_1 __as3_type_Fraction lcm_sum denominator_3 numerator_3 __as3_type_Fraction pieces 1/3 1/3 1/3 left_96 right_657 fraction_cblock_containment num_1 plain_image_groups total_1 url_assets/cms/wootmath_fractions/number_line/markers/end_marker.swf total_1 url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/snail.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/snail_trail.swf numberline_associations den_3 fraction_cblock_total_count_3 fraction_cblock_counts 1/3_3\n19 whole_ fraction_input_value_3/4 fraction_cblock_chains sum denominator_4 numerator_3 __as3_type_Fraction lcm_sum denominator_4 numerator_3 __as3_type_Fraction pieces 1/4 1/4 1/4 left_96 right_545 fraction_cblock_containment num_3 plain_image_groups total_1 url_assets/cms/wootmath_fractions/number_line/markers/end_marker_noline.swf total_1 url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/dog.swf total_1 url_assets/cms/wootmath_fractions/number_line/objects/cat_dog_trail.swf numberline_associations den_4 fraction_cblock_total_count_3 fraction_cblock_counts 1/4_3\n" ], [ "df3.columns", "_____no_output_____" ], [ "## In Response:\n### Convert K, V, and all K_V into words in a text doc\n### Then add text\n### Then add description\n", "_____no_output_____" ], [ "from math import floor, log10  # used by the float branch below to round to two significant digits\n\ndef make_string_from_list(key, elem_list):\n    # Append key to each item in the list and join into a single string\n    ans = ''\n    for elem in elem_list:\n        ans += key + '_' + elem\n    return ans\n\n\ndef make_string(elem, key=None, top=True):\n    # Recursively flatten a nested response dict into a space-separated\n    # string of key_value tokens for use as a text document.\n    ans = ''\n    if not elem:\n        return ans\n    if top:\n        top = False\n        top_keys = []\n        for idx in range(len(elem.keys())):\n            top_keys.append(True)\n\n    for idx, key in enumerate(elem.keys()):\n        if top_keys[idx]:\n            top = True\n            top_keys[idx] = False\n            ans += ' '\n        else:\n            top = False\n        if type(elem[key]) is str or\\\n           type(elem[key]) is int:\n            # plain value: emit a single key_value token\n            value = str(elem[key])\n            ans += key + '_' + value + ' '\n        elif type(elem[key]) is list:\n            # list: flatten every item under the same key\n            temp_elem = dict()\n            for item in elem[key]:\n                temp_elem[key] = item\n                ans += make_string(temp_elem, top)\n        elif type(elem[key]) is dict:\n            # dict: prefix each child's tokens with the parent key\n            for item_key in elem[key].keys():\n                temp_elem = dict()\n                temp_elem[item_key] = elem[key][item_key]\n                ans += key + '_' + make_string(temp_elem, top)\n        elif type(elem[key]) is float:\n            # float: round to two significant digits before emitting\n            sig = 2\n            value = elem[key]\n            value = round(value, sig-int(\n                floor(log10(abs(value))))-1)\n            value = str(value)\n            ans += key + '_' + value + ' '\n\n    return ans\n", "_____no_output_____" ], [ "df3['response_doc'] = df3['response'].map(make_string)", "_____no_output_____" ], [ "df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')", "_____no_output_____" ], [ "df3['response_doc'] = df3['response_doc'] + df3['txt'] ", "_____no_output_____" ], [ "df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')", "_____no_output_____" ], [ "df3['response_doc'] = df3['response_doc'] + df3['description']", "_____no_output_____" ], [ 
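"# Hypothetical mini example (not a real df3 response) showing what\n# make_string produces: each top-level key gets a leading space, and a\n# nested dict emits parent_ child_value token pairs, mirroring the\n# response_doc text built above.\nmake_string({'den': '2', 'sum': {'numerator': 1, 'denominator': 2}})", "_____no_output_____" ], [ 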
"df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace(\"\\n\", \"\"))", "_____no_output_____" ], [ "df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace(\"?\", \" \"))", "_____no_output_____" ], [ "df3.iloc[100]['response_doc']", "_____no_output_____" ], [ "df3.iloc[100]['response']", "_____no_output_____" ], [ "for idx in range(20):\n print (idx, df3['response_doc'].iloc[idx])", "0 fraction_cblock_chains_ right_442 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/2 fraction_cblock_chains_ left_97 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_2 lcm_sum_ __as3_type_Fraction plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/end_marker_noline.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/dog.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/cat_dog_trail.swf den_2 fraction_input_value_1/2 num_1 fraction_cblock_total_count_1 fraction_cblock_counts_ 1/2_1 whole_ Use the 1/2 pieces to figure out how far the dog traveled.Answer: 1/2 In the first part of this lesson, student partition the number line into the number of given pieces to determine how far the bug traveled. In the second part of the lesson students partition the number line and then drag the bug to the given location.\n1 fraction_cblock_total_count_4 plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/panda.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf input_4 fraction_cblock_chains_ right_856 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 pieces_1/4 pieces_1/4 pieces_1/4 fraction_cblock_chains_ left_165 fraction_cblock_chains_ lcm_sum_ numerator_4 lcm_sum_ denominator_4 lcm_sum_ __as3_type_Fraction numberline_associations_ numberline_associations_ position_720.0 numberline_associations_ pos_value_1.0 numberline_associations_ obj_name_object fraction_cblock_counts_ 1/4_4 Drag the panda to 4/4 of a yard from the start.Answer: 4/4 In the first part of this lesson, student partition the number line into the number of given pieces to determine how far the bug traveled. 
In the second part of the lesson students partition the number line and then drag the bug to the given location.\n2 fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_2 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_348 fraction_cblock_chains_ pieces_1/8 pieces_1/8 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_4 sum_ __as3_type_Fraction fraction_cblock_chains_ left_590 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_6 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_705 fraction_cblock_chains_ pieces_1/6 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_6 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_348 fraction_cblock_chains_ pieces_1/4 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_4 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_866 fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_1 sum_ __as3_type_Fraction fraction_cblock_total_count_5 fraction_cblock_counts_ 1_1 fraction_cblock_counts_ 1/8_2 fraction_cblock_counts_ 1/6_1 fraction_cblock_counts_ 1/4_1 fraction_cblock_containment_ piece0_ lcm_sum_ numerator_2 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction piece0_ piece0_ pieces_1/8 pieces_1/8 piece0_ sum_ numerator_1 sum_ denominator_4 sum_ __as3_type_Fraction Model how many eighths are equal to one fourth.Answer: 2 In this lesson, students use visual fraction models to determine simple equivalent fractions in real world contexts.\n3 fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_521 fraction_cblock_chains_ pieces_1/2 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_4 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_521 fraction_cblock_chains_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_866 fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_1 sum_ __as3_type_Fraction fraction_cblock_total_count_6 fraction_cblock_counts_ 1_1 fraction_cblock_counts_ 1/2_1 fraction_cblock_counts_ 1/8_4 fraction_cblock_containment_ [Fraction] 1/2_ lcm_sum_ numerator_4 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction [Fraction] 1/2_ [Fraction] 1/2_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 [Fraction] 1/2_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction Model how many halves are equal to four eighths.Answer: 1 In this lesson, students use visual fraction models to determine simple equivalent fractions in real world contexts.\n4 fraction_circle_containment_ [Fraction] 1/2_ lcm_sum_ numerator_4 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction [Fraction] 1/2_ [Fraction] 1/2_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 [Fraction] 1/2_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction fraction_circle_total_count_6 
fraction_circle_groups_ x_512 fraction_circle_groups_ y_300 fraction_circle_groups_ scale_1.0 fraction_circle_groups_ pieces_1/2 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1 fraction_circle_groups_ chains_ right_180 chains_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 chains_ left_0 fraction_circle_counts_ 1_1 fraction_circle_counts_ 1/2_1 fraction_circle_counts_ 1/8_4 Cameron ate 4/8 of a pizza.Cover the pizza to model how many halves of a pizza he ate.Answer: 1 In this lesson, students use visual fraction models to determine simple equivalent fractions in real world contexts.\n5 image_object_groups_ total_6 image_object_groups_ on_3 image_object_groups_ url_assets/objects/singles/watch.swf image_object_groups_ off_3 Shade 1/2 of the 6 watches.Answer: 1/2 A review of material from earlier lessons. The following topics were selected for review: One Fourth of Shapes and Sets - Supplemental 1; One Fourth of Shapes and Sets; and Equivalent Fractions in Partitioning Sets.\n6 Shade 1/4 of the circle.answer={:n=>3, :d=>12} A review of material from earlier lessons. The following topics were selected for review: One Fourth of Shapes and Sets - Supplemental 1; One Fourth of Shapes and Sets; and Equivalent Fractions in Partitioning Sets.\n7 Shade 1/3 of the rectangle.answer={:n=>2, :d=>6} A review of material from earlier lessons. The following topics were selected for review: One Third of Shapes and Sets; One Fourth of Shapes and Sets; and Equivalent Fractions in Partitioning Sets.\n8 fraction_circle_groups_ x_512 fraction_circle_groups_ scale_1 fraction_circle_groups_ chains_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 chains_ left_0 chains_ right_180 fraction_circle_groups_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/2 pieces_1 fraction_circle_groups_ y_300 fraction_circle_containment_ piece_0_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction piece_0_ piece_0_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 piece_0_ lcm_sum_ denominator_8 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_circle_counts_ 1_1 fraction_circle_counts_ 1/2_1 fraction_circle_counts_ 1/8_4 fraction_circle_total_count_6 Drag one eighth pieces to cover all of the 1/2 piece.Answer: 4 In this lesson, students learn to relate fraction pieces to each other by covering one fraction exactly with another.\n9 fraction_circle_groups_ x_512 fraction_circle_groups_ scale_1.0 fraction_circle_groups_ chains_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 chains_ left_0 chains_ right_180 fraction_circle_groups_ pieces_1/2 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1 fraction_circle_groups_ y_300 fraction_circle_containment_ [Fraction] 1/2_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction [Fraction] 1/2_ [Fraction] 1/2_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 [Fraction] 1/2_ lcm_sum_ denominator_8 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_circle_counts_ 1_1 fraction_circle_counts_ 1/2_1 fraction_circle_counts_ 1/8_4 fraction_circle_total_count_6 Drag one half pieces to cover all of the 4/8 shown.Answer: 1 In this lesson, students learn to relate fraction pieces to each other by covering one fraction exactly with another.\n10 fraction_circle_groups_ x_512 fraction_circle_groups_ scale_1.0 fraction_circle_groups_ chains_ pieces_1/4 pieces_1/4 chains_ left_0 chains_ right_180 fraction_circle_groups_ pieces_1/2 pieces_1/4 pieces_1/4 pieces_1 fraction_circle_groups_ y_300 fraction_circle_containment_ [Fraction] 1/2_ sum_ denominator_2 sum_ numerator_1 sum_ 
__as3_type_Fraction [Fraction] 1/2_ [Fraction] 1/2_ pieces_1/4 pieces_1/4 [Fraction] 1/2_ lcm_sum_ denominator_4 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_circle_counts_ 1_1 fraction_circle_counts_ 1/2_1 fraction_circle_counts_ 1/4_2 fraction_circle_total_count_4 Drag one half pieces to cover all of the 2/4 shown.Answer: 1 In this lesson, students learn to relate fraction pieces to each other by covering one fraction exactly with another.\n11 radio_choice_C radio_group_problem_ choice_C radio_group_problem_ text_3/6 radio_text_3/6 What fraction has 6 as the denominator () 6/7 () 4/5 () 3/6Answer: 3/6 Students identify the numerator and denominator of given fractions.\n12 fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1458 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/10 pieces_1/10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1297 fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1531 fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1214 fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1424 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/10 pieces_1/10 pieces_1/10 pieces_1/10 fraction_cblock_chains_ left_544 fraction_cblock_chains_ right_820 fraction_cblock_chains_ sum_ denominator_84 sum_ numerator_73 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_84 lcm_sum_ numerator_73 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/7 pieces_1/7 pieces_1/6 pieces_1/6 pieces_1/4 fraction_cblock_chains_ left_1001 fraction_cblock_chains_ right_1272 fraction_cblock_chains_ sum_ denominator_35 sum_ numerator_17 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_35 lcm_sum_ numerator_17 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/7 pieces_1/7 pieces_1/5 fraction_cblock_chains_ left_981 fraction_cblock_chains_ right_1316 fraction_cblock_chains_ sum_ denominator_28 sum_ numerator_11 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_28 lcm_sum_ numerator_11 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/7 pieces_1/4 fraction_cblock_chains_ left_1001 fraction_cblock_chains_ right_1272 fraction_cblock_chains_ sum_ denominator_7 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_7 
lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/7 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1300 fraction_cblock_chains_ sum_ denominator_7 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_7 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/7 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1248 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1316 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1387 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1220 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1387 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_5 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/5 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1358 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_3 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_5 lcm_sum_ numerator_3 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/5 pieces_1/5 pieces_1/5 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1337 fraction_cblock_chains_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 pieces_1/4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1523 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1272 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1358 fraction_cblock_chains_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_2 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/2 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1531 fraction_cblock_chains_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 
pieces_1/4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1389 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1216 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1351 fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_1 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_2045 fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_1 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ left_130 fraction_cblock_chains_ right_820 fraction_cblock_containment_ bar1_ sum_ denominator_5 sum_ numerator_2 sum_ __as3_type_Fraction bar1_ bar1_ pieces_1/10 pieces_1/10 pieces_1/10 pieces_1/10 bar1_ lcm_sum_ denominator_10 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_containment_ [Fraction] 1/4_ sum_ denominator_5 sum_ numerator_1 sum_ __as3_type_Fraction [Fraction] 1/4_ [Fraction] 1/4_ pieces_1/5 [Fraction] 1/4_ lcm_sum_ denominator_5 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_containment_ [Fraction] 1_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction [Fraction] 1_ [Fraction] 1_ pieces_1/10 [Fraction] 1_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_total_count_41 fraction_cblock_counts_ 1_2 fraction_cblock_counts_ 1/7_7 fraction_cblock_counts_ 1/4_10 fraction_cblock_counts_ 1/6_6 fraction_cblock_counts_ 1/5_5 fraction_cblock_counts_ 1/2_1 fraction_cblock_counts_ 1/10_10 Model 4/10 on the black bar using the fraction pieces below.Answer: [object Object] Students identify the numerator and denominator of given fractions.\n13 whole_ fraction_input_value_4/6 fraction_cblock_chains_ sum_ denominator_3 sum_ numerator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/6 pieces_1/6 pieces_1/6 pieces_1/6 fraction_cblock_chains_ left_96 fraction_cblock_chains_ right_522 num_4 plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/end_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/beetle.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/beetle_trail.swf den_6 fraction_cblock_total_count_4 fraction_cblock_counts_ 1/6_4 Use the 1/6 pieces to figure out how far the beetle traveled.Answer: 4/6 In the first part of this lesson, student partition the number line into the number of given pieces to determine how far the bug traveled. 
In the second part of the lesson students partition the number line and then drag the bug to the given location.\n14 plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/panda.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf input_8 numberline_associations_ position_634 numberline_associations_ pos_value_0.88 numberline_associations_ obj_name_object numberline_associations_ fraction_cblock_chains_ sum_ denominator_8 sum_ numerator_7 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_8 lcm_sum_ numerator_7 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 pieces_1/8 fraction_cblock_chains_ left_165 fraction_cblock_chains_ right_769 fraction_cblock_total_count_7 fraction_cblock_counts_ 1/8_7 Drag the panda to 7/8 of a yard from the start.Answer: 7/8 In the first part of this lesson, student partition the number line into the number of given pieces to determine how far the bug traveled. In the second part of the lesson students partition the number line and then drag the bug to the given location.\n15 input_8 One yard on the number line is divided intoAnswer: sixths In the first part of this lesson, student partition the number line into the number of given pieces to determine how far the bug traveled. In the second part of the lesson students partition the number line and then drag the bug to the given location.\n16 numberline_associations_ position_580.0 numberline_associations_ pos_value_1.0 numberline_associations_ obj_name_answer_text numberline_associations_ obj_value_3/3 input_ Drag the fraction to its correct location on the number line.Answer: 3/3 In this lesson, students learn how to place unit and non-unit fraction on a number line.\n17 plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/shark.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf input_6 numberline_associations_ position_722 numberline_associations_ pos_value_1.0 numberline_associations_ obj_name_object numberline_associations_ fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_6 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/6 pieces_1/6 pieces_1/6 pieces_1/6 pieces_1/6 pieces_1/6 fraction_cblock_chains_ left_165 fraction_cblock_chains_ right_856 fraction_cblock_total_count_6 fraction_cblock_counts_ 1/6_6 Drag the shark to 1/6 of a yard from the start.Answer: 1/6 A review of material from earlier lessons. 
The following topics were selected for review: Introducing 1/b on the Number Line; Partitioning the Number Line; and Labeling Number Lines\n18 whole_ fraction_input_value_1/3 fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_3 lcm_sum_ numerator_3 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/3 pieces_1/3 pieces_1/3 fraction_cblock_chains_ left_96 fraction_cblock_chains_ right_657 num_1 plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/end_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/snail.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/snail_trail.swf den_3 fraction_cblock_total_count_3 fraction_cblock_counts_ 1/3_3 Use the 1/3 pieces to figure out how far the snail traveled.Answer: 3/3 A review of material from earlier lessons. The following topics were selected for review: Introducing 1/b on the Number Line; Partitioning the Number Line; and Labeling Number Lines\n19 whole_ fraction_input_value_3/4 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_3 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_3 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1/4 pieces_1/4 pieces_1/4 fraction_cblock_chains_ left_96 fraction_cblock_chains_ right_545 num_3 plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/end_marker_noline.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/markers/start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/dog.swf plain_image_groups_ total_1 plain_image_groups_ url_assets/cms/wootmath_fractions/number_line/objects/cat_dog_trail.swf den_4 fraction_cblock_total_count_3 fraction_cblock_counts_ 1/4_3 Use the 1/4 pieces to figure out how far the dog traveled.Answer: 3/4 A review of material from earlier lessons. 
The following topics were selected for review: Introducing 1/b on the Number Line; Partitioning the Number Line; and Labeling Number Lines\n" ], [ "df3['response_doc'] = df3['response_doc'].map( lambda x: \" \".join(x.split('/')) if '/' in x else x)", "_____no_output_____" ], [ "df3.iloc[100]['response_doc']", "_____no_output_____" ], [ "df3['response_doc'] = df3['response_doc'].map( lambda x: x.replace('[',' '))\ndf3['response_doc'] = df3['response_doc'].map( lambda x: x.replace(']',' '))", "_____no_output_____" ], [ "df3.iloc[100]['response_doc']", "_____no_output_____" ], [ "docs = list(df3['response_doc'])", "_____no_output_____" ], [ "from time import time\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\nfrom sklearn.decomposition import NMF, LatentDirichletAllocation", "_____no_output_____" ], [ "data_samples = docs", "_____no_output_____" ], [ "n_features = 1000\nn_samples = len(data_samples)\nn_topics = 100\nn_top_words = 30", "_____no_output_____" ], [ "print(\"Extracting tf-idf features for NMF...\")\ntfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,\n max_features=n_features,\n stop_words='english')", "Extracting tf-idf features for NMF...\n" ], [ "t0 = time()\ntfidf = tfidf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\n", "done in 9.536s.\n" ], [ "print(\"Extracting tf features for LDA...\")\ntf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,\n max_features=n_features,\n stop_words='english')\nt0 = time()\ntf = tf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\n", "Extracting tf features for LDA...\ndone in 12.468s.\n" ], [ "# Fit the NMF model\nprint(\"Fitting the NMF model with tf-idf features, \"\n \"n_samples=%d and n_features=%d...\"\n % (n_samples, n_features))\nt0 = time()\nnmf = NMF(n_components=n_topics, random_state=1,\n alpha=.1, l1_ratio=.5).fit(tfidf)\nprint(\"done in %0.3fs.\" % (time() - t0))", "Fitting the NMF model with tf-idf features, n_samples=100000 and n_features=1000...\ndone in 424.012s.\n" ], [ "def print_top_words(model, feature_names, n_top_words):\n for topic_idx, topic in enumerate(model.components_):\n print(\"Topic #%d:\" % topic_idx)\n print(\" \".join([feature_names[i]\n for i in topic.argsort()[:-n_top_words - 1:-1]]))\n print()", "_____no_output_____" ], [ "print(\"\\nTopics in NMF model:\")\ntfidf_feature_names = tfidf_vectorizer.get_feature_names()\nprint_top_words(nmf, tfidf_feature_names, n_top_words)", "\nTopics in NMF model:\nTopic #0:\nfraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 fraction_cblock_counts_ denominator_1 denominator_4 denominator_12 left_175 denominator_3 denominator_6 fraction_cblock_containment_ right_865 numerator_2 denominator_2 unit3_ left_176 right_347 denominator_10 left_80 numerator_5 fraction_cblock_total_count_6 denominator_15 left_125 left_165 using 1_3 left_347\nTopic #1:\nfraction_circle_groups_ fraction_circle_counts_ pieces_1 chains_ scale_1 scale_0 fraction_circle_containment_ y_350 55 fraction_circle_total_count_6 fraction_circle_total_count_5 y_300 y_325 x_250 y_400 fraction_circle_total_count_2 x_ right_270 left_0 x_750 5_1 word x_700 x_400 y_535 fraction_circle_total_count_7 fraction_circle_total_count_8 left 3_1 1_2\nTopic #2:\nplain_image_groups_ number_line total_1 cms wootmath_fractions swf url_assets markers objects_v2 start_marker objects traveled end_marker far distance left_96 snail ladybug start figure beetle seahorse use line 
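The cells above build two parallel document-term matrices on purpose: tf-idf weights feed NMF, which factorizes arbitrary non-negative weights, while raw term counts feed LDA, whose generative model assumes integer counts. The same NMF setup can be expressed more compactly as a pipeline; this is only a sketch under the notebook's own settings (`make_pipeline` is standard scikit-learn, the variable names are illustrative, and on scikit-learn >= 1.2 the NMF `alpha` argument is split into `alpha_W`/`alpha_H`):

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Vectorizer + factorization in one object: fit_transform() learns the
# vocabulary, builds the tf-idf matrix, and factorizes it into a
# (documents x topics) matrix W and a (topics x terms) matrix H.
nmf_pipeline = make_pipeline(
    TfidfVectorizer(max_df=0.95, min_df=2,
                    max_features=n_features, stop_words='english'),
    NMF(n_components=n_topics, random_state=1, alpha=.1, l1_ratio=.5),
)
doc_topic = nmf_pipeline.fit_transform(data_samples)  # shape: (n_samples, n_topics)
```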
fraction_cblock_counts_ number bug_trail snail_trail fraction_cblock_chains_ end_marker_noline\nTopic #3:\ncommon match having shade fraction_input_value_ numerators input_a_ denominators choose comparison correct compare fraction fractions students circle bar polygon flower star greater identifying experience enter introduces beginning fraction_cblock_total_count_1 magnitude gain sets\nTopic #4:\ngrid model build 1x10 visual models using students answer 10x10 amounts introducing tenth 20 tenths object 10 18 multiply 14 left_176 piece0_ reasoning simple person unit2_ explicitly connect cover right_866\nTopic #5:\nnumberline_associations_ location label line number correct drag obj_value_ yard pos_value_1 interval obj_name_answer_text obj_name_eqn pos_value_0 identify input_ correction partitioning obj_value_1 students answer locate obj_value_2 understand input_8 obj_value_0 input_7 express lengths input_6\nTopic #6:\npieces_1 chains_ lcm_sum_ sum_ numerator_5 numerator_4 numerator_6 __as3_type_fraction fraction_circle_containment_ numerator_7 numerator_3 numerator_8 fraction_circle_counts_ fraction_circle_total_count_10 right_0 left_270 left_0 numerator_9 fraction_circle_total_count_9 fraction_circle_total_count_8 fraction_circle_total_count_7 1_1 unit0_ right_90 8_8 6_6 denominator_1 left_180 fraction_circle_total_count_6 fraction_circle_total_count_11\nTopic #7:\n12 denominator_12 den_12 reds pieces_1 piece1_ 12_6 12_4 12_3 12_ fraction red input_6 twelfth input_a_6 numerator_11 6_ 12_8 pieces right_270 12_2 numerator_12 12_9 lcm_sum_ sum_ 12_7 12_12 12_1 equal numerator_6\nTopic #8:\norder arrange greatest boxes drag fractions fraction common 43 42 32 63 students 81 numerator 83 multiple 51 generating 14 benchmark 75 finding 34 left_175 comparing 16 unit3_ 18 equal\nTopic #9:\n10 denominator_10 den_10 pieces_1 purples 10_5 10_10 numerator_10 purple numerator_9 10_ fraction input_a_10 input_10 5_ 10_9 10_8 10_6 input_5 input_a_5 left_80 tenth 10_2 notation fraction_cblock_total_count_11 10_4 10_3 lcm_sum_ sum_ numerator_5\nTopic #10:\nimage_object_groups_ singles symbolic create set shading shade parts objects given swf url_assets model lesson off_2 off_1 14 fraction on_0 students answer off_3 total_14 total_8 total_6 total_3 octopus cat cats piranha\nTopic #11:\nreview topics selected material following earlier lessons fractions introducing modeling comparing adding line sets benchmark placing partitioning building multiplication labeling equivalents finding applications lines subtracting number locating relating grid simplifying\nTopic #12:\nimage_object_groups_ objects complex singles sets shade swf url_assets modeling 14 total_8 identify multiple multiply total_14 number on_0 groups off_3 total_6 circles octopus cat cats shapes relating bars off_2 denominator answer\nTopic #13:\nbenchmark comparison correct radio_text_ asked noticing larger operator smaller identify using input_ choose compare models students lesson fractions fraction answer comparing denominator_2 2_1 multiple experience result bar fraction_cblock_counts_ identifying obj_name_eqn\nTopic #14:\nobject whole_1 model number answer lesson decimals included names shown x_750 fraction_input_value_1 pieces mixed fraction_circle_total_count_2 size add sized understand x_250 dragging help equal used piece goal black tenth whole_3 y_350\nTopic #15:\n15 denominator_15 den_15 5_ greens pieces_1 15_5 15_3 orange fraction 15_ 5_1 numerator_14 smallest 15_1 input_a_5 sum_ lcm_sum_ left_342 numerator_13 fifteenth whole_ input_5 
green __as3_type_fraction numerator_10 right_270 fraction_circle_total_count_6 terms numerator_11\nTopic #16:\ngrid notation decimal shading introduced 1x10 model answer students model_lbl_0 decimals words using input_b_5 14 input_a_7 meters 24 fractional input_b_1 19 input_b_4 input_b_6 input_b_3 tenth input_a_9 input_a_5 length input_b_8 input_12\nTopic #17:\nparts various shapes fractional symbols set using lesson shaded whole_ fraction students den_8 answer equal piranhas den_7 den_10 fraction_input_value_7 den_9 polygon num_7 den_5 fraction_input_value_6 star flower num_6 14 boxes cats\nTopic #18:\nzero fourths fraction_cblock_total_count_16 fraction_cblock_total_count_15 fraction_cblock_total_count_14 fraction_cblock_total_count_13 fraction_cblock_total_count_12 fraction_cblock_total_count_11 fraction_cblock_total_count_10 fraction_cblock_total_count_1 fraction_cblock_counts_ fraction_cblock_containment_ fraction_cblock_chains_ fraction frac_piece_ fourth feet foundation formed form following flying_trail flower flexible fish finding final figure fifths fifth\nTopic #19:\nbar1_ bar left_130 names pieces black lcm_sum_ sum_ right_820 size model piece fraction dragging help used understand __as3_type_fraction sized denominator_1 goal 1_1 equal denominator fraction_cblock_counts_ numerator students fraction_cblock_containment_ number fraction_cblock_total_count_2\nTopic #20:\nterms tothe ans1 denominator box numerator familiar goal ans0 drag answer lesson fractions students person meters den_15 shows main symbolically label multiplying modeling division represent input_12 unlike circles multiply quotients\nTopic #21:\nshown input_ input_a_ choose comparison decimals correct object model shading 1x10 magnitude understanding deepen compare students build estimate hundredths comparison1 place y_300 x_700 left_200 1_2 left_175 asks point right_890 patterns\nTopic #22:\naddition complete problem_text_1 sentences addends missing sentence problem_text_2 bitmap_text_inputs_ bitmap_text_interp_ input_b_1 answer students input_b_2 input_a_2 lesson subtraction input_a_3 adding input_a_1 input_b_3 input_a_0 input_1 input_3 input_2 input_a_5 problem input_a_4 leftover unit1_\nTopic #23:\ncommon mix numerators denominators order denominator numerator enter smallest fractions greatest whole_ students fraction answer den_12 den_15 comparison1 den_7 den_9 den_10 input_ den_8 input_a_ correct den_ fraction_input_value_6 num_6 den_3 fraction_cblock_total_count_1\nTopic #24:\nlength line number yards divide determine yard bar lengths fraction_input_value_ enter equal use meters whole_ way fraction measure den_8 familiar using input_a_0 answer placing num_1 objects den_6 notation students den_2\nTopic #25:\ncommon make true having numerators boxes fractions denominators comparison compare drag answer students greater use numbers decimals 24 fraction_cblock_total_count_1 different connect explicitly pieces relating model express decimal place algorithm review\nTopic #26:\nhundredths 10x10 grid decimal model build 19 14 answer notation students introduced 24 understanding magnitude 18 20 models 16 visual expressed 17 compare using deepen interval patterns building relative point\nTopic #27:\ntenths 1x10 expressed meter input_12 start beetle work model_lbl_0 build magnitude locate understanding problems shark word answer start_marker obj_name_object swam students amounts input_11 final markers compare locating 60 input_a_ objects_v2\nTopic #28:\n2_ fraction denominator_2 lcm_sum_ sum_ chains_ pieces_1 
__as3_type_fraction fraction_circle_counts_ 2_1 fraction_circle_containment_ numerator_1 yellow x_512 right_180 right_270 y_300 fraction_circle_groups_ left_0 pieces cover 8_4 left_90 2_2 numerator_2 fraction_circle_total_count_3 4_2 half yellows input_2\nTopic #29:\nplain_image_groups_ juice problems number_line context total_1 wootmath_fractions cms reasoning swf url_assets choose real world comparison size numerator compare correct fractions answer input_ input_a_ mug students feet word division misc_objects subtract\nTopic #30:\nequal_parts radio_group_mc1_ radio_group_mc2_ plain_image_groups_ parts text_yes choice_a fourths total_1 cms wootmath_fractions swf url_assets object shaded area text_no demonstrated partitioned evaluate formed shapes choice_b understanding equal sized sixths models size ninths\nTopic #31:\nform simplest enter included subtract answer whole_1 visual models simplifying mixed lesson add students whole_2 fractions fraction_input_value_1 simplify like difference problems left set proper input_a_2 gain den_3 feet den_2 real\nTopic #32:\ncomparing knowledge extend benchmark input_a_ input_ enter correct fractions 20 lesson 60 30 comparison1 40 students 50 multiple 24 experience 17 identifying gain 16 means 14 lengths bar_ different 32\nTopic #33:\naways strategies variety sense benchmark denominator order numerator number common using fractions 81 students 51 greater 83 63 input_a_3 input_a_2 input_a_4 125 33 fraction input_a_5 apply input_a_6 set division 55\nTopic #34:\nfractions denominator compare arrange boxes greatest models using lesson drag visual review answer learn students fraction person box left_175 reasoning bar 12 10 multiply right_865 unit3_ included size choose woot\nTopic #35:\nmisc_objects plain_image_groups_ simplest ant_alt ladybug_alt wootmath_fractions cms form swf bugs ladybugs url_assets sets represent objects total_2 use total_3 total_4 enter den_2 fraction whole_ fractions total_6 students answer den_3 total_1 input_4\nTopic #36:\norder common denominator whole_ smaller greater fractions den_15 fraction fraction_input_value_7 num_7 students den_12 answer den_10 fraction_input_value_6 num_6 den_8 den_9 fraction_input_value_8 num_8 num_9 den_7 apply set enter strategies multiple second smallest\nTopic #37:\nunit_ lcm_sum_ sum_ __as3_type_fraction fraction_circle_counts_ x_512 1_1 fraction_circle_containment_ numerator_1 y_350 bigger scale_0 125 problem addition denominator_2 pieces_1 model left_176 fraction_circle_total_count_2 bar support black models right_866 fraction_circle_total_count_3 x_800 using lesson denominator_6\nTopic #38:\ncircle1_ fraction_circle_groups_ pieces_1 chains_ sum_ lcm_sum_ circle names pieces fraction_circle_counts_ y_325 x_300 fraction_circle_containment_ __as3_type_fraction fraction size model dragging black understand help goal sized used circle0_ scale_1 right_0 1_1 equal numerator_1\nTopic #39:\narea_target_contents_ night_sky plain_image_groups_ drag_and_drop objects cookie experience x_468 identifying total_1 swf group url_assets gain area drag work fourth identify set comparing benchmark lesson pizza chocolate answer students sets shapes fractions\nTopic #40:\nfraction_cblock_chains_ sum_ lcm_sum_ left_100 bar2_ bar1_ __as3_type_fraction denominator_2 numerator_1 right_790 pieces_1 denominator_1 fraction_cblock_containment_ fraction_cblock_counts_ right_445 bar0_ undefined 1_2 2_1 powerful judge helping strategy model fractions benchmark numerator_3 greater bar3_ numerator_2\nTopic #41:\nproper order 
numerator compare large common fractions students numbers 32 fraction reasoning circle den_9 woot 18 16 42 den_8 20 75 19 whole_ adding input_a_4 misc_objects input_4 total_1 left_80 size\nTopic #42:\ngiven relative unit names piece y_350 fraction_circle_groups_ say depending important limited flexible need naming pieces different fourth second half cover lesson students circle goal scale_1 dark brown yellow fraction_circle_total_count_2 fraction\nTopic #43:\naquarium image_object_groups_ area_target_contents_ drag_and_drop piranhas piranha objects x_468 swf url_assets plain_image_groups_ fish total_1 drag experience identifying gain on_0 group off_3 comparing area total_6 benchmark answer students work fourth 1x10 placement\nTopic #44:\nui png plain_image_groups_ total_2 decimal point arrows left_arrow right_arrow wootmath_fractions cms url_assets patterns locations points multiplied placement divided use correct introduced writing composing number modeled total_1 deepen location answer students\nTopic #45:\nleft_90 right_780 denominator_1 unit2_ fraction_cblock_chains_ lcm_sum_ sum_ equivalent pieces_1 equal __as3_type_fraction right_435 1_2 build numerator_1 using relating fraction algorithm right_262 explicitly connect model given fraction_cblock_containment_ fraction_cblock_counts_ create denominator_2 right_320 modeled\nTopic #46:\ndecimal break range enter given hundredths input_a_0 input_0 model tenths object students answer 16 17 18 14 19 25 notation 24 meters writing composing make true place 33 length determine\nTopic #47:\nasks true select make enter operator sentence number makes lesson statement students correct comparison compare fractions fraction comparison1 input_ input_1 radio_text_ bitmap_text_interp_ bitmap_text_inputs_ input_3 input_4 24 explicitly connect different input_a_\nTopic #48:\nequivalent group choosing apply fraction understanding box given lesson fractions drag left_176 answer piece0_ use equivalence students simple deepen modeled determine finding right_866 cover piece1_ model right_521 fractionthat beginning introduces\nTopic #49:\npizza ate estimate eat sums friend x_475 y_384 did benchmarks fraction_circle_total_count_1 half context comparing real world 1_1 learn fractions greater lesson fraction_circle_counts_ scale_1 cake answer left students using way adding\nTopic #50:\nordering review lessons topics earlier following material selected intro fractions common numerator generating comparing bars greatest smallest modeling multiple equal benchmark fraction boxes enter answer comparison den_15 context drag whole_\nTopic #51:\nnum_1 fraction_input_value_1 whole_ den_2 den_3 den_4 enter smallest den_5 whole_1 answer fractions den_6 den_8 simplified students having difference bar1_ run den_7 right_820 left_130 50 circle second fraction_cblock_total_count_1 choosing numerators wants\nTopic #52:\ndistance partition measure line animal traveled student short deepens role number animation understanding bars using start yard objects_v2 animations swam bubble_trail butterfly divided input_7 express bee fifths ant fraction far\nTopic #53:\npiece_0_ pieces_1 chains_ piece_1_ fraction_circle_groups_ sum_ lcm_sum_ fraction_circle_counts_ left_0 __as3_type_fraction x_512 numerator_2 piece_2_ fraction_circle_containment_ cover pieces y_300 scale_1 right_120 1_1 numerator_1 denominator_4 relate covering denominator_6 exactly right_180 right_90 fraction_circle_total_count_4 denominator_3\nTopic #54:\ndecimals expressed introducing deepen understanding tenths magnitude 
hundredths use true make writing composing compose input_ compare order shark input_11 answer grid numbers meters modeled pieces naming topics material selected following\nTopic #55:\ncircle1_1_ circle1_2_ fraction_circle_groups_ sum_ lcm_sum_ pieces_1 fraction_circle_counts_ numerator_1 __as3_type_fraction object fraction_circle_containment_ y_350 scale_1 x_750 x_250 names relative given piece unit denominator_6 pieces say left_270 denominator_12 important limited flexible need depending\nTopic #56:\njuice pitcher plain_image_groups_ orange represent number_line line total_1 wootmath_fractions cms contexts real world swf fraction url_assets number learn whole_ students den_8 answer den_3 applications den_6 partitioning den_2 introducing den_4 5_1\nTopic #57:\ndenominator_7 fraction_cblock_chains_ sum_ lcm_sum_ seventh __as3_type_fraction left_80 pieces_1 7_3 7_1 light numerator_3 den_7 numerator_6 7_5 7_7 right_770 7_2 numerator_4 right_228 numerator_5 7_6 7_ 7_4 right_425 fraction blue right_178 numerator_2 right_276\nTopic #58:\nnumber line location mixed miles work context youranswer did proper drag shown students whole_2 run correct main ran placing multiplication den_5 multiplying fraction_input_value_2 den_4 obj_value_a conceptual obj_value_1 locating goal labeling\nTopic #59:\nrectangle write identify symbols set shade fraction_input_value_ match review fraction circle models using fractions students complex shaded area shapes group work fourth answer den_8 obj_name_eqn 16 objects obj_value_ sets problem\nTopic #60:\nnum_2 fraction_input_value_2 whole_ den_3 den_4 enter whole_2 den_6 fractions answer den_5 smallest fraction second students den_9 having means bars different bar1_ numerator_2 lengths greatest right_820 left_130 amounts learn gain lesson\nTopic #61:\ninput_a_1 bitmap_text_interp_ bitmap_text_inputs_ simplify fraction_input_value_ help input_1 use fraction fractions simplest enter visual models equation_text_1 problem_text_1 equation_1 way simplifying students form y_384 equation_text_2 fraction_circle_total_count_2 answer tenth eqn_1 input_b_1 express scale_1\nTopic #62:\ntransition write simplify support models using used use simplest greatest form visual denominator number common students numerator enter lesson fraction divide answer simplifying solution input_a_2 symbolic provided multiply multiplication problem\nTopic #63:\nhalf lasso input_0 input_a_0 benchmark use decimal relative size enter models compare hundredths 14 bigger answer students 50 18 17 16 42 19 40 comparing decimals writing composing 34 building\nTopic #64:\nimproper fraction far line proper location fractions did context work enter shown students simplifying den_5 simplest represent gain whole_ learn form num_9 simplify y_400 run fraction_input_value_7 left_270 situations num_7 lesson\nTopic #65:\nmile run divide final shark wants walked swam line lengths quotients zero obj_name_obj pos_value_0 objects_v2 input_11 place hippo equal panda interval location drag number animal compose use given ran giraffe\nTopic #66:\nunit1_ unit2_ sum_ lcm_sum_ __as3_type_fraction scale_0 numerator_1 fraction_circle_containment_ left_175 1_2 denominator_6 x_700 denominator_2 unit3_ y_300 pieces_1 denominator_3 numerator_2 denominator_1 right_865 x_250 chains_ y_350 x_315 using models visual denominator_4 lesson fraction_cblock_containment_\nTopic #67:\nmug plain_image_groups_ hot chocolate number_line total_1 wootmath_fractions cms swf url_assets represent line real world fraction number contexts learn whole_ 
applications den_3 den_4 partitioning answer students den_2 introducing reasoning context problems\nTopic #68:\ndenominator_9 sum_ lcm_sum_ fraction_cblock_chains_ den_9 9_ __as3_type_fraction white numerator_7 9_1 pieces_1 ninth 9_3 left_80 bar1_ whites 9_9 right_206 numerator_8 9_2 9_6 numerator_9 right_819 fraction left_200 fraction_cblock_total_count_10 right_166 right_242 left_243 left_130\nTopic #69:\nnumbers mixed learn whole_1 lesson subtract number subtracting unit divided fraction_input_value_1 divide make whole_2 difference improper feet large included enter non visual whole_3 obj_value_a den_3 20 distance adding problems pos_value_1\nTopic #70:\nexpress words decimal form shown box model_lbl_0 visual models answer drag hundredths students locate use left_80 make true pieces result yard model person right_770 numbers tenth input_b_5 obj_name_answer_text input_a_7 input_b_1\nTopic #71:\n3_ denominator_3 fraction sum_ lcm_sum_ brown 3_1 __as3_type_fraction numerator_2 pieces_1 chains_ left_30 fraction_circle_counts_ denominator_6 right_270 fraction_circle_containment_ 6_2 3_2 numerator_1 9_3 x_512 right_240 fraction_circle_total_count_4 whites 12_4 input_3 simple y_300 numerator_4 cover\nTopic #72:\nnumberline_associations_ pos_value_0 obj_name_object line drag number objects_v2 represent input_12 obj_value_a start meter elephant kangaroo fraction locate giraffe mile hippo input_9 input_8 panda input_4 location labels students obj_value_0 input_5 position_260 problems\nTopic #73:\nbiked mile ran total numberline_associations_ stopped obj_name_a_text obj_value_a distance pos_value_1 line add non sum number greater unit using input_12 fractions lesson input_8 miles situations simplified answer drag light division students\nTopic #74:\nrenaming represented build hundredths tenths grid visual models enter answer students amounts drag represent box boxes numbers writing composing deepen building input_0 person input_a_0 decimals place pieces naming input_a_2 patterns\nTopic #75:\nradio_group_problem_ identify numerator denominator given radio_choice_a radio_choice_b choice_b choice_a fractions fraction fourth think half piece estimate conventions came text_1 radio_text_1 y_400 students cake greater seventh situations cut x_512 smaller sums\nTopic #76:\nnum_3 fraction_input_value_3 den_4 whole_ whole_3 den_6 enter den_7 answer den_5 greatest fractions numerator_3 included long students means bars fraction different adding learn lengths lesson right_820 left_130 bar1_ represent simplified den_10\nTopic #77:\ndenominator_8 sum_ lcm_sum_ fraction_cblock_chains_ 8_ __as3_type_fraction numerator_3 8_1 numerator_7 eighth 8_4 8_3 gray 8_2 pieces_1 den_8 numerator_1 grays 8_5 8_6 8_8 numerator_5 piece_2_ 8_7 right_261 left_175 right_216 fraction numerator_8 right_347\nTopic #78:\nfraction_circle_groups_ scale_0 fraction_circle_counts_ piece x_675 x_200 x_811 pieces_1 magnitude box sense modeling fraction_circle_total_count_4 x_550 y_415 relative y_450 10_1 drag learn unit size 6_1 pieces y_375 answer y_300 9_1 y_275 blue\nTopic #79:\nconnecting symbol word review names shapes earlier following lessons selected topics material bars model mixed fraction shaded say modeling den_8 fraction_circle_total_count_1 y_350 whole_ answer supplemental x_300 cover equal problems sets\nTopic #80:\nalgorithm creating needed equivalent apply learn support missing visual eqn_1 lesson models enter eqn_2 numerator input_6 fractions eqn_3 bitmap_text_interp_ bitmap_text_inputs_ develop relating answer connect 
explicitly students input_2 24 input_1 input_5\nTopic #81:\n11 think came cut bigger greater den_15 piece pie cake piranha brownies piranhas notation on_0 introduced terms 20 whole_ support sized review understanding image_object_groups_ 12 radio_choice_b visual choice_b reasoning model\nTopic #82:\nadd sum denominators unlike symbolically like greater multiply learn enter lesson fractions adding models y_350 missing visual solution x_705 students using second whole_3 x_315 left_0 included familiar den_12 numerators whole_1\nTopic #83:\nnum_4 fraction_input_value_4 den_5 whole_ den_6 answer greatest fraction enter den_9 numerator_4 fractions students den_7 complex circle0_ den_8 bar1_ piranhas bars left_130 shaded shapes right_820 adding modeling 3_4 large missing y_415\nTopic #84:\nmental gain magnitude hundredths model visual compare models answer students decimals experience shown identifying 60 simplifying person object box input_a_8 input_8 amounts choose estimate 40 multiplying comparison simplify second comparing\nTopic #85:\n1_ fraction lcm_sum_ sum_ __as3_type_fraction numerator_1 denominator_1 fraction_circle_containment_ denominator_2 1_1 left_125 black fraction_circle_counts_ circle denominator_3 right_815 3_3 browns 2_2 4_4 pieces_1 denominator_4 yellows chains_ oranges enter left_150 right_150 blues 1_2\nTopic #86:\nexactly fraction relate covering rectangle shaded fraction_input_value_ learn pieces lesson input_a_3 answer students input_a_2 input_a_6 right_180 estimate cover shown piece1_ half x_512 multiplying input_a_8 drag fraction_circle_counts_ equal left_0 1_1 input_a_4\nTopic #87:\nfrac_piece_ fraction_circle_groups_ relative given unit names piece chains_ lcm_sum_ sum_ pieces flexible important limited need depending pieces_1 naming fraction_circle_counts_ different fourth half second lesson reds cover x_300 left_270 scale_1 students\nTopic #88:\nnum_5 fraction_input_value_5 den_6 whole_ den_9 numerator_5 answer fractions enter den_7 greatest fraction shaded den_8 denominator_6 students simplified cats greater different bars traveled butterfly far bar1_ flying_trail unlike den_10 yards left_130\nTopic #89:\nlong bar yards fraction_input_value_ fraction determine use number length line locate students represent express feet amounts whole_ using fractional den_8 learn whole_3 den_6 model greater num_1 form num_2 placing answer\nTopic #90:\nmultiple denominator provided symbols subtract mult_n_1_ mult_d_1_ second difference using fractions enter solution missing answer students equation_1 whole_ equation_2 simplest given den_12 bar1_ unit numerator left_130 den_15 form den_10 den_8\nTopic #91:\nequal_parts_v2 plain_image_groups_ total_1 cms wootmath_fractions swf url_assets parts formed demonstrated evaluate partitioned area undefined understanding sized boxes equal size answer models drag fraction students multiply multiplying circle singles estimate chocolate\nTopic #92:\nbitmap_text_inputs_ bitmap_text_interp_ input_a_2 model_lbl_0 input_a_4 input_a_6 input_a_3 input_b_6 input_b_4 beginning introduces input_b_8 input_a_5 fraction_input_value_ express input_2 input_a_8 equation_2 input_b_5 input_b_3 fractions equation_1 left_125 input_b_2 input_a_7 input_6 locate makes subtraction words\nTopic #93:\ncircle pieces relationships woot explore familiar math introduced x_300 size y_300 equals scale_1 black equal fraction fraction_circle_counts_ fraction_circle_groups_ students answer fraction_circle_total_count_1 input_a_4 input_4 input_3 1_1 blues input_a_3 input_2 browns 
input_a_2\nTopic #94:\n4_ denominator_4 fraction dark lcm_sum_ sum_ blue 4_1 __as3_type_fraction fraction_circle_counts_ chains_ left_0 numerator_1 simple numerator_3 right_270 x_512 8_2 4_2 numerator_2 fraction_circle_containment_ cover determine 4_3 equivalent fractionthat 12_3 right_180 use y_300\nTopic #95:\ncomplete sentence math correct drag answer people equally subtraction models split amounts meaning groups division shading undefined visual multiplication students circle0_ 1x10 learn lesson word wants problem problems cake divide\nTopic #96:\ndenominator_5 fraction_cblock_chains_ lcm_sum_ sum_ numerator_3 pieces_1 __as3_type_fraction numerator_2 left_80 numerator_4 fifth 5_ 5_3 right_770 5_2 bar1_ 5_1 5_4 fraction_cblock_counts_ orange 5_5 den_5 right_544 right_268 fraction denominator_10 fraction_cblock_containment_ right_406 bar0_ using\nTopic #97:\ncontexts world real learn represent answer simplified groups multiply giraffe students elephant simplify obj_name_object kangaroo objects_v2 subtract hippo different panda left input_5 number mile fraction problems rex shows using line\nTopic #98:\n13 den_15 terms 24 numerator denominator enter familiar whole_ goal problem radio_choice_b off_1 experience identifying 20 30 input_9 rectangle gain grid students model total_14 off_3 decimal shaded shade piranhas introduced\nTopic #99:\n100 50 10x10 build use choosing comparison1 given den_2 grid size input_a_ input_ 25 zero correct place pieces introducing distance lesson students animal 51 sum equation_1 express fractions 42 order\n\n" ], [ "print(\"Fitting LDA models with tf features, \"\n \"n_samples=%d and n_features=%d...\"\n % (n_samples, n_features))\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,\n learning_method='online',\n learning_offset=50.,\n random_state=0)\nt0 = time()\nlda.fit(tf)\nprint(\"done in %0.3fs.\" % (time() - t0))", "Fitting LDA models with tf features, n_samples=100000 and n_features=1000...\n" ], [ "print(\"\\nTopics in LDA model:\")\ntf_feature_names = tf_vectorizer.get_feature_names()\nprint_top_words(lda, tf_feature_names, n_top_words)", "\nTopics in LDA model:\nTopic #0:\ngrid 1x10 renaming represented eighths shapes total identifying y_292 right_590 depending gain left_425 brown right_348 12 animation ladybug_alt writing 9_ left bitmap_text_interp_ x_511 left_252 y_400 benchmark using 75 reptile_trail 40\nTopic #1:\nidentify correction area_target_contents_ night_sky group drag_and_drop set obj_name_eqn fourth work area drag total_1 models objects lesson plain_image_groups_ url_assets using swf students answer right_242 result black right_635 right_434 right_245 radio_text_ right_462\nTopic #2:\nfraction pieces object size names bar1_ model bar students black piece goal sized equal help understand left_130 number used lesson numerator denominator dragging 1_1 answer right_820 fraction_cblock_total_count_2 y_325 fraction_cblock_total_count_3 seventh\nTopic #3:\nimage_object_groups_ objects url_assets swf singles answer shade students fraction lesson parts symbolic set given model shading 14 difference off_2 provided off_1 complex total_14 sets off_3 total_8 multiple denominator solution number\nTopic #4:\nanimations correction obj_name_answer_text1 obj_name_answer_text2 3_2 83 fish left_520 right_288 left_135 denominator_2 run write numerator_13 15_ right_494 extend right_445 simplifying locate tenth word placing interval polygon represent right_315 meter sum words\nTopic #5:\nsize answer piece fraction_circle_counts_ box pieces drag 
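One portability note on the LDA cell above: `n_topics` is the old scikit-learn spelling, deprecated in 0.19 and removed in 0.21 in favor of `n_components` (the notebook's output suggests it ran on an older install). On a current install the equivalent fit looks like this sketch, with the same hyperparameters and `tf` being the count matrix built earlier:

```python
from sklearn.decomposition import LatentDirichletAllocation

lda = LatentDirichletAllocation(n_components=n_topics,  # renamed from n_topics
                                max_iter=5,
                                learning_method='online',
                                learning_offset=50.,
                                random_state=0)
lda.fit(tf)

# Per-document topic mixtures, if needed downstream; each row sums to 1.
doc_topic_dist = lda.transform(tf)
```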
relative fraction magnitude sense modeling x_200 x_675 scale_1 learn fractions x_811 students lesson unit 10 y_450 x_550 10_1 y_415 y_300 6_1 fraction_circle_total_count_4 half\nTopic #6:\nchocolate estimate sums situations snack cookie problems friend area_target_contents_ x_468 ate drag_and_drop set group identify work fourth bar area using drag radio_choice_b total_1 radio_group_problem_ objects choice_b students hot world lesson\nTopic #7:\nfraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 fraction_cblock_counts_ denominator_2 bar1_ denominator_5 bar2_ left_100 fraction_cblock_containment_ right_790 fractions numerator_2 fraction using model 1_2 answer numerator_4 students size compare models bar 2_1 black\nTopic #8:\nans1 tothe terms ans0 left_97 say label numerator_14 4_4 right_297 unit whole_ num_1 finding chains_ text_no left_205 thirds 7_6 x_475 interval right_276 sum left_435 right_590 generate having right_290 included foundation\nTopic #9:\nobjects_v2 fraction represent drag answer right_810 learn chameleon squirrel students line number plain_image_groups_ number_line wootmath_fractions swf cms url_assets total_1 contexts world real numberline_associations_ mile hippo bitmap_text_inputs_ 33 pos_value_0 equal position_260\nTopic #10:\nnumbers learn left feet long line cut fractions answer number using students num_1 fraction den_6 yards divide whole_ den_3 length enter bar placing object fraction_input_value_ review material lessons parts topics\nTopic #11:\ngiven choosing left_320 left_147 left_205 right_319 right_147 right_204 right_262 9_2 right_549 right_377 left_262 left_377 right_434 right_268 12_6 reds right_406 right_435 cookie left_90 x_468 area_target_contents_ drag_and_drop 3_1 size chocolate build identify\nTopic #12:\nbar0_ think extend introducing mixed hot left_232 left_261 left_90 shark dart_frog_trail input_a_1 text_no obj_name_obj numberline_associations_ purple right_635 denominator_22 ladybugs left_320 12_4 knowledge object 32 used left_297 circle1_2_ 7_3 x_315 word\nTopic #13:\ndenominator_9 pieces_1 lcm_sum_ sum_ fraction __as3_type_fraction numerator_1 9_ numerator_9 whites 9_3 9_9 input_9 input_a_9 9_1 white den_9 9_6 right_206 3_1 pieces students right_317 answer 8_1 left_178 add numerator_7 brown given\nTopic #14:\nplain_image_groups_ fraction_cblock_chains_ pieces_1 swf url_assets cms wootmath_fractions number_line total_1 lcm_sum_ sum_ objects __as3_type_fraction markers fraction_cblock_counts_ use start_marker left_96 snail figure pieces traveled end_marker far answer fractions seahorse beetle learn snail_trail\nTopic #15:\ntrue make comparison statement asks sentence select makes operator input_a_9 input_9 bitmap_text_inputs_ cricket input_3 bitmap_text_interp_ input_a_7 bar0_ correct enter number students fraction compare text_yes determine input_2 left_261 right_268 image_object_groups_ pieces\nTopic #16:\npieces_1 lcm_sum_ sum_ fraction_cblock_chains_ __as3_type_fraction bar1_ numerator_3 denominator_1 fraction_cblock_counts_ left_130 bar right_820 fraction_cblock_containment_ fraction 1_1 model numerator_6 answer students black denominator fraction_cblock_total_count_4 fraction_cblock_total_count_5 4_3 right_840 fraction_cblock_total_count_7 fraction_cblock_total_count_8 numerator_1 8_8 gray\nTopic #17:\ny_400 half fourth different apply naming conventions radio_group_problem_ fractions x_512 relationships fifteenth right_270 seventh learn review lasso sixth tenth ninth twelfth choice_a 
fraction_circle_total_count_3 eighth radio_choice_a left_342 red building situations larger\nTopic #18:\nden_3 unlike num_1 fraction_input_value_1 learn whole_ number students fractions fraction lesson line object using fraction_circle_total_count_6 bar length feet long way measure fraction_input_value_ express yards orange denominator_2 pitcher right_425 animations mix\nTopic #19:\ndenominator_3 numerator_2 3_ denominator_6 sum_ pieces_1 lcm_sum_ __as3_type_fraction 3_1 numerator_4 3_2 6_2 6_4 3_3 fraction brown left_120 right_120 answer 12_4 thirds right_560 3_4 students right_240 right_309 left_167 right_30 right_330 12_8\nTopic #20:\nfraction whole_ num_2 fraction_input_value_2 bar number line enter answer students length fraction_input_value_ yards den_4 long den_6 den_8 yard fractions den_ orange num_1 learn num_3 express way using measure num_7 lesson\nTopic #21:\nusing smaller cake unit piece bigger fraction greater larger circles right_890 think radio_group_gtlt_ radio_choice_b radio_group_problem_ pie radio_choice_a students pieces came fractions cut brownies choice_b choice_a model fifth understanding radio_text_1 text_1\nTopic #22:\nshading create algorithm model relating modeled number enter explicitly connect input_2_2 input_1_2 lesson fraction right_550 given off_1 symbolic 6_4 1_ left_90 fraction_cblock_total_count_8 right_780 on_0 octopus total_14 right_504 1x10 numerator_2 numerator_4\nTopic #23:\ncomparison correct choose input_a_ compare input_ answer students models numerator decimals size reasoning mug magnitude gain fractions fraction_cblock_total_count_4 hundredths mental context problems model object hot real world plain_image_groups_ number_line total_1\nTopic #24:\nplain_image_groups_ number_line total_1 swf wootmath_fractions cms url_assets distance line number traveled markers animal using measure partition start_marker bars start students short role deepens understanding student animation far bug_trail enter end_marker\nTopic #25:\nrenaming represented grid left_435 complete right_30 denominator_15 circle2_ denominator_3 obj_value_0 used problem beetle y_500 6_4 cms left_210 y_305 purple makes right_606 right_770 piece0_ cake non flying_trail make choosing asks input_b_5\nTopic #26:\n11 on_0 total_3 cat create cats octopus piranha total_1 total_2 off_1 fraction shading given num_3 lesson fraction_input_value_3 whole_ singles url_assets range off_2 image_object_groups_ break swf symbolic off_3 objects model parts\nTopic #27:\nmix bugs ladybugs ant_alt ladybug_alt misc_objects right_178 words result measure piece2_ right_375 symbolically text_1 means left_290 simple y_370 used goal total_1 unit3_ youranswer cats text_yes left_176 complex viewing hot mental\nTopic #28:\nreview number line topics lessons following material earlier selected introducing butterfly partitioning answer labeling kangaroo lines applications composing locating objects_v2 hot start flying_trail number_line total_1 plain_image_groups_ drag swf url_assets cms\nTopic #29:\nright_262 left_262 right_434 right_176 right_607 left_435 left_176 right_348 left_348 algorithm right_521 8_6 fraction_cblock_total_count_11 enter left_90 relating right_780 right_549 number modeled connect explicitly 1_3 input_1_2 input_2_2 create 2_2 lengths denominator_4 4_3\nTopic #30:\nfraction_cblock_chains_ plain_image_groups_ lcm_sum_ sum_ cms wootmath_fractions swf url_assets total_1 number_line __as3_type_fraction numerator_1 fraction_cblock_counts_ markers far ladybug pieces_1 traveled denominator_2 answer line 
start_marker number end_marker fraction_cblock_total_count_1 shows box denominator_3 bar students\nTopic #31:\nplain_image_groups_ parts total_1 swf url_assets wootmath_fractions cms understanding object equal_parts radio_group_mc1_ radio_group_mc2_ equal shaded area models answer fourths text_yes choice_a shapes students text_no choice_b deepen sixths ninths shows divided identifying\nTopic #32:\ndenominator_12 12 sum_ lcm_sum_ numerator_1 __as3_type_fraction denominator_6 12_1 12_3 12_2 pink lesson 6_1 pieces pieces_1 reds left_330 red 4_1 twelfth students answer 6_3 fraction 3_1 fraction_cblock_chains_ 8_1 denominator_4 relative pinks\nTopic #33:\nden_15 20 30 40 symbolically lasso mult_d_1_ mult_n_1_ mix solution second provided missing support 11 50 18 enter learn denominator extend fraction_cblock_total_count_12 right_750 whole_ add lesson num_8 oranges 60 misc_objects\nTopic #34:\nfraction input_a_1 sets fraction_input_value_ input_1 fractions students rectangle answer way using shade equation_text_1 review shaded supplemental circle lesson enter match learn equivalent input_b_6 object bar bitmap_text_interp_ bitmap_text_inputs_ fourth modeling shapes\nTopic #35:\n12 pieces_1 denominator_12 numerator_6 reds 12_ 12_6 lcm_sum_ sum_ using red __as3_type_fraction numerator_11 fraction 12_4 twelfth numerator_12 answer 12_9 12_7 12_12 numerator_10 12_5 right_330 input_6 numerator_9 right_30 left_330 left_210 left_240\nTopic #36:\nobject form simplest mixed answer enter students number shown fraction_input_value_1 whole_1 lesson fractions models numbers num_1 subtract visual learn add using denominators den_4 second like fraction_input_value_2 whole_2 num_2 den_2 num_3\nTopic #37:\nfraction_circle_groups_ scale_1 pieces_1 fraction_circle_counts_ object circle1_1_ circle1_2_ fraction x_250 x_750 y_350 fraction_circle_containment_ piece orange 5_1 4_1 3_1 6_1 say cover students circle red answer pink pieces names 2_1 unit fraction_circle_total_count_3\nTopic #38:\nmath numerator_7 amounts 13 input_7 input_a_7 fifths division undefined divided 8_7 zero drag people cookies equally sentence split express proper groups meaning answer students fraction_cblock_total_count_7 fraction given far lesson number\nTopic #39:\ninput_12 24 number students numberline_associations_ answer fraction line develop drag means obj_value_a lesson understanding meter algorithm conceptual think using ran way pos_value_1 review input_10 tenths meters label mile locating obj_name_a_text\nTopic #40:\nnumerator_8 input_a_2 input_2 num_8 fraction_input_value_8 numerator_13 12_8 fraction_cblock_total_count_11 fraction eqn_2 thirds right_625 shown total_8 fraction_cblock_total_count_8 generate 1_1 fraction_cblock_total_count_9 fraction_cblock_total_count_13 left_165 pieces_1 students right_855 answer add bar whole_ using sum_ lcm_sum_\nTopic #41:\ndenominator_7 numerator_22 sum_ lcm_sum_ y_350 sum __as3_type_fraction 7_1 denominators greater like add denominator_22 x_705 7_3 pieces_1 second left_0 7_5 x_315 7_6 7_4 students learn den_7 answer visual 1_2 numerator_2 numerator_1\nTopic #42:\nmodels visual students using answer fractions use equivalent lesson fraction support determine equal learn simple creating algorithm develop model covered 1_1 apply input_6 eighths answer_text_5 sixths missing needed input_a_6 generate\nTopic #43:\nnotation decimal answer express students tenths objects_v2 hundredths input_a_0 enter ant bee locate drag swam 14 interval fish bubble_trail start using line number understanding animation 
deepens role short student plain_image_groups_\nTopic #44:\ntenths hundredths decimals expressed answer 10 objects_v2 markers start_marker start 100 use students obj_name_object meter beetle word final labels work input_11 compose locate tenth 19 drag swam composing number_line problems\nTopic #45:\naquarium area_target_contents_ piranhas drag_and_drop piranha x_468 fish drag experience group set identify area models objects total_1 swf url_assets off_3 using answer students image_object_groups_ plain_image_groups_ shading review right_206 visual total_6 multiply\nTopic #46:\n15_3 make blue 8_8 left_337 right_625 obj_value_1 left_433 input_a_10 colors fraction_cblock_total_count_1 piece2_ represent trex piece_0_ x_700 greater mixed area strategies input_b_8 den_15 evaluate group y_350 mult_d_1_ drag_and_drop enter 8_1 left_338\nTopic #47:\nnumberline_associations_ line number pos_value_0 drag location fraction mile divide correct interval answer students lengths 75 equal 83 obj_value_1 obj_value_2 33 67 position_260 input_6 lesson fractions pos_value_1 represent learn parts obj_name_answer_text2\nTopic #48:\nobj_name_eqn fraction_circle_total_count_8 different tothe boxes whole_3 unit3_ eats light y_530 input_a_3 ninth radio_group_gtlt_ num_5 beginning 55 divided total_4 interval text_yes 6_1 fraction_input_value_2 ordering snail brown 5_4 1_2 image_object_groups_ shark_trail numerator_10\nTopic #49:\npieces_1 fraction_circle_groups_ chains_ lcm_sum_ sum_ __as3_type_fraction fraction_circle_counts_ fraction_circle_containment_ circle1_ numerator_1 woot fraction numerator_3 left_270 denominator_8 right_0 denominator_5 left_0 denominator_1 numerator_4 right_270 scale_1 scale_0 right_180 x_300 fraction_circle_total_count_10 left_180 circle fraction_circle_total_count_9 fraction_circle_total_count_8\nTopic #50:\nleft_80 left_166 right_166 left_243 right_242 right_178 right_319 fraction_cblock_total_count_10 left_320 right_549 right_550 9_6 enter words left_90 right_780 area_target_contents_ equivalents denominator_9 identify 3_2 drag_and_drop group relating algorithm x_468 night_sky numerator_2 finding build\nTopic #51:\nnumberline_associations_ yard drag input_8 pos_value_0 obj_name_object input_4 number line start given input_6 right_597 student answer start_marker pos_value_1 partition input_3 left_165 fraction_cblock_total_count_3 pieces_1 markers students fraction_cblock_total_count_2 bug 63 denominator_8 determine 3_3\nTopic #52:\npieces_1 fraction_circle_groups_ lcm_sum_ sum_ fraction 2_ chains_ __as3_type_fraction fraction_circle_counts_ numerator_1 fraction_circle_containment_ denominator_2 scale_1 pieces piece_0_ numerator_2 left_0 2_1 y_300 x_512 answer 1_1 piece_1_ students cover denominator_8 yellow lesson learn covering\nTopic #53:\ncommon fraction denominators numerators fractions students having compare comparison correct input_a_ choose fraction_input_value_ shade match order smallest circle bar proper greater star polygon flower numerator woot radio_choice_a right_286 5_ large\nTopic #54:\nexperience groups right_297 sized bar3_ math tenths left_243 interval fraction_cblock_total_count_11 came orange 25 finding greater ant selected wootmath_fractions num_9 deepens 9_1 left_405 right_606 right_210 rectangle solution 10_5 y_400 denominator_15 right_790\nTopic #55:\nnumberline_associations_ plain_image_groups_ number_line number line total_1 swf wootmath_fractions cms url_assets learn real world contexts objects_v2 represent mile answer fraction students drag obj_name_object 
pos_value_0 hippo panda 25 giraffe 17 trex rex\nTopic #56:\nbenchmark students lesson fractions comparing correct number fraction enter operator input_ noticing asked radio_text_ larger input_a_ place knowledge extend comparison1 obj_value_0 50 answer 51 drag unit non 33 learn obj_name_answer_text\nTopic #57:\nfraction 4_ left_176 equivalent piece1_ pieces piece0_ model right_866 using pieces_1 equivalence 1_1 cover denominator_2 understanding covering numerator_2 right_521 lesson modeled deepen variety fraction_cblock_counts_ right_406 fraction_cblock_containment_ smaller piece2_ larger numerator_4\nTopic #58:\n1x10 symbol math division yard juice left_165 sentences piece0_ left_270 y_384 denominator_8 right_268 create introducing fraction_input_value_6 x_750 fraction_circle_total_count_8 unit1_ denominator_28 1_4 numerator_7 10_2 multiplying powerful left_125 given x_724 miles included\nTopic #59:\nbitmap_text_inputs_ bitmap_text_interp_ input_a_3 input_a_2 input_a_4 elephant input_3 input_b_5 input_b_4 34 input_b_1 input_a_7 students answer subtraction fraction input_a_1 number equal learn numberline_associations_ plain_image_groups_ position_260 needed line mile divide lengths cms wootmath_fractions\nTopic #60:\narea set night_sky fourth identify work group drag_and_drop area_target_contents_ experience obj_name_eqn drag total_1 objects plain_image_groups_ url_assets lesson swf swam familiar students 12_9 arrows x_512 circle1_1_ left_210 right_770 right_216 right_606 goal\nTopic #61:\nmodel build introduced answer students decimal use 10x10 total_2 finding ui png point patterns arrows right_arrow left_arrow divided multiplied placement number points locations visualize 10 equivalents lasso right_309 algorithm relating\nTopic #62:\nstudents relative lesson given unit names pieces piece fraction half second y_350 different goal naming depending need important flexible limited fourth circle cover answer say yellow fraction_circle_total_count_2 scale_1 brown fraction_circle_counts_\nTopic #63:\nmisc_objects ladybug_alt ant_alt ladybugs evaluate formed demonstrated partitioned bugs depending 10x10 hippo left_277 right_786 ran light y_325 fraction_input_value_6 right_550 comparison1 arrange introducing 16 missing y_305 y_275 covered fraction_cblock_total_count_18 right_320 fourths\nTopic #64:\nradio_text_is gray correct labeling object piece_0_ person radio_choice_b right_820 2_1 problem_text_1 snack meters fraction_circle_total_count_11 orange powerful text_yes bubble_trail right_590 piranhas den_15 fraction_cblock_total_count_7 equation_text_2 review piece2_ left_433 operator fraction_cblock_total_count_1 relate finding\nTopic #65:\nans0 terms tothe ans1 right_320 right_550 proper right_597 labeling 4_1 input_a_1 10 cloud_round wootmath_fractions swf fraction_circle_containment_ off_3 used 20 ninth amounts left_145 fraction_circle_total_count_7 thirds left_297 43 bigger snack expressed input_7\nTopic #66:\npartitioned evaluate formed demonstrated ladybugs ant_alt misc_objects ladybug_alt bugs night_sky dark x_511 right_504 fraction_input_value_1 left_145 right_180 eat 4_2 sums choose sentence scale_0 left_251 conventions animal magnitude left_342 radio_choice_a 12_8 main\nTopic #67:\nplain_image_groups_ cms wootmath_fractions url_assets swf total_1 number_line juice represent answer students problems fractions correct pitcher context orange size fraction reasoning real world numerator whole_ den_8 den_4 compare den_6 fraction_input_value_1 fraction_input_value_3\nTopic #68:\nunit2_ unit1_ 
pieces_1 lcm_sum_ sum_ __as3_type_fraction numerator_1 1_2 using fractions denominator_2 lesson denominator_1 unit3_ models visual y_300 answer review fraction_circle_containment_ x_700 students x_250 learn numerator_2 unit0_ numerator_4 denominator_8 unit4_ numerator_3\nTopic #69:\npizza ate y_384 fractions 1_1 scale_1 eat fraction_circle_total_count_1 answer lesson did benchmarks friend half students context fraction_circle_counts_ x_475 sums estimate comparing using greater models 10 circles area_target_contents_ adding x_468 real\nTopic #70:\nreview word selected earlier material following lessons topics names model connecting symbol fraction bars answer shapes mixed input_0 sets building x_300 yellow fraction_circle_total_count_1 pinks equivalents 1_1 fourth grays equal black\nTopic #71:\nfraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 fraction_cblock_counts_ denominator_8 left_175 fraction_cblock_containment_ right_865 unit3_ fraction right_347 using fractions answer bar left_347 1_3 1_2 right_232 denominator_4 right_261 models denominator_2 lesson right_519 students\nTopic #72:\n1_ fraction denominator_2 left_125 1_2 enter right_815 fractions answer equivalent lesson denominator_1 names model introduces beginning equal 2_1 right_297 shown equation_1 input_a_6 denominator equation_2 right_470 right_469 left_297 input_b_6 60 input_b_8\nTopic #73:\nbreak range radio_group_mc2_ radio_text_1 singles 30 left_243 radio_text_is patterns numerator friend left_95 fifteenth x_750 asked ladybugs equally came shark start_marker denominator_5 fraction_cblock_total_count_14 fraction_cblock_total_count_11 compose 8_6 fraction_input_value_ right_406 support line bubble_trail\nTopic #74:\nreview fractions ordering selected earlier material following lessons topics modeling common answer intro numerator bars context relating circles adding equivalents subtracting equal introduction generating smallest unit fraction denominators equivalence model\nTopic #75:\nwords result right_178 y_415 right_347 right_625 fraction_circle_total_count_12 bar right_302 right_228 equivalent shows cat intro denominator_15 obj_value_1 fraction_input_value_ right_855 quotients used simplify problem_text_2 right_204 giraffe placement num_7 drag_and_drop fraction_cblock_total_count_4 complex num_6\nTopic #76:\nimproper number fractions greater unit lesson mile answer students learn add distance line ran biked using total non 16 sum stopped obj_value_a obj_name_a_text pos_value_1 input_8 second drag numberline_associations_ fraction 75\nTopic #77:\nden_12 walked mult_d_1_ mult_n_1_ symbolically den_15 did second missing 14 express num_7 add mile 17 equation_1 far fractions denominators simplified fraction_input_value_7 proper solution whole_ support 13 den_9 improper apply students\nTopic #78:\nobj_name_obj input_a_2 simple 8_7 provided 10x10 right_arrow circle1_ objects true denominator_20 orange fraction_input_value_3 reptile_trail total_8 formed right_519 fraction_circle_total_count_8 statement left_210 right_317 fraction_cblock_total_count_7 fraction_cblock_total_count_5 input_a_6 conventions aways giraffe measure model_lbl_0 colors\nTopic #79:\nworld real contexts learn simplify students answer equation_text_2 situations simplified fraction fraction_cblock_total_count_10 equation_text_1 number represent use pitcher orange line fractions equal input_a_2 plain_image_groups_ determine swf url_assets number_line total_1 cms wootmath_fractions\nTopic #80:\narea_target_contents_ 
drag_and_drop group night_sky identify set experience area drag total_1 models answer objects plain_image_groups_ students fraction_circle_total_count_13 fraction_circle_total_count_16 using url_assets y_ input_9 left_510 y_400 addition 14 9_ swam zero contexts fraction_circle_total_count_5\nTopic #81:\nwhole_ fraction answer num_1 fraction_input_value_1 enter students fraction_input_value_3 num_3 fractions den_8 num_4 fraction_input_value_4 den_2 den_5 den_6 den_4 num_6 fraction_input_value_6 den_7 num_7 fraction_input_value_7 right_475 den_9 light den_10 proper orange fraction_cblock_total_count_3 7_7\nTopic #82:\nanswer box num_5 fraction_input_value_5 familiar whole_ enter placing length den_6 meters fractions line number fraction right_786 label input_a_0 numbers locating review decimals mix earlier following selected topics material lessons tenths\nTopic #83:\nfraction pieces_1 fraction_circle_groups_ circle pieces 1_ scale_1 fraction_circle_counts_ x_300 introduced relationships familiar explore students y_300 black answer lcm_sum_ sum_ chains_ 1_1 equal __as3_type_fraction equals numerator_1 denominator_1 right_270 left_150 fraction_circle_containment_ 6_\nTopic #84:\nmultiplication circle0_ input_a_8 input_8 number total_4 circle2_ foundation y_530 cloud cloud_round sentence complete lesson multiply left_270 using students 8_8 total_3 review numerator_2 1_3 1_4 fraction_circle_total_count_12 swf models plain_image_groups_ url_assets wootmath_fractions\nTopic #85:\nunit_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 left_60 8_ 1_1 model answer y_350 lesson fraction_circle_containment_ denominator_8 fractions x_512 left denominators problem students x_800 party like subtracting eats snack cake review 8_1\nTopic #86:\nfraction_circle_groups_ pieces_1 scale_0 fraction_circle_counts_ answer 55 students lesson y_300 fraction_circle_total_count_4 fraction_circle_total_count_5 learn x_400 fraction_circle_total_count_6 x_500 fraction_circle_containment_ y_325 y_275 fraction_circle_total_count_7 7_1 1_2 quotients 5_2 9_1 2_2 fractions x_625 y_535 right_288 x_275\nTopic #87:\n10 pieces_1 denominator_10 sum_ lcm_sum_ denominator_5 numerator_5 purples fraction 10_ numerator_9 numerator_10 __as3_type_fraction den_10 10_5 10_10 purple 10_2 10_1 input_a_10 input_10 woot 10_3 tenth 10_6 10_9 10_4 10_8 y_370 numerator_4\nTopic #88:\nnumber numberline_associations_ line location students drag label correct mixed answer pos_value_1 obj_value_ yard miles context input_ fraction obj_name_answer_text work proper did partitioning run obj_value_a input_9 youranswer numbers input_7 obj_value_1 main\nTopic #89:\nwants division x_400 right_210 12_5 problems left_147 run demonstrated whole_2 6_2 party 10x10 area y_530 24 right_320 y_450 real right_30 difference input_10 obj_value_ simple 5_3 4_5 8_3 4_ rectangle 1_2\nTopic #90:\ndenominator_4 dark blue 4_1 4_2 blues 4_4 8_2 relative answer students fraction pieces grays lesson gray fraction_circle_total_count_2 piece yellows unit 2_1 8_1 black reds yellow circle x_300 scale_1 cover fraction_circle_counts_\nTopic #91:\nobjects plain_image_groups_ swf url_assets answer students lesson total_1 drag number identifying using multiply total_6 cookie x_468 fraction groups singles review total_8 area learn models visual area_target_contents_ drag_and_drop determine fractions understanding\nTopic #92:\nfrac_piece_ left_150 x_724 denominator_22 denominator_8 left_270 pos_value_1 y_350 radio_group_mc2_ develop denominator_3 12_6 singles 6_7 relative makes bar2_ 
explicitly 15_5 cloud_round fraction_circle_total_count_7 url_assets 16 right_815 left_425 right_268 right_433 equation_text_1 need input_7\nTopic #93:\nleft_90 right_780 equivalent fraction denominator_1 denominator_2 right_435 learn students lesson 2_1 right_550 right_320 means fraction_cblock_containment_ comparing bars fraction_cblock_total_count_5 lengths using right_286 answer meaning 2_2 right_262 fraction_cblock_total_count_13 1_4 4_2 unit2_ numerator_5\nTopic #94:\n15 pieces_1 denominator_15 fraction 5_ lcm_sum_ sum_ __as3_type_fraction denominator_5 15_1 greens 15_ 7_ numerator_5 15_3 numerator_14 15_5 numerator_13 5_1 left_342 numerator_15 orange pieces 3_ numerator_10 numerator_11 3_1 answer green numerator_1\nTopic #95:\nfractions numerator denominator drag boxes greatest students answer order arrange compare fraction using lesson models strategies sense aways variety review learn multiple 81 51 125 43 large 42 63 32\nTopic #96:\ncorrection obj_name_answer_text2 obj_name_answer_text1 num_3 denominator_8 naming right_216 denominator_28 main locate 67 input_a_7 zero unit2_ problem fraction_input_value_6 ladybugs 60 multiplication shading tenth work y_275 den_9 fraction_input_value_8 fraction_cblock_total_count_2 larger applications equivalence earlier\nTopic #97:\naddition complete sentence answer problem_text_1 missing students sentences input_5 addends input_a_5 problem_text_2 input_b_2 input_a_1 subtraction lesson input_b_1 input_b_3 leftover bitmap_text_interp_ bitmap_text_inputs_ input_a_6 input_a_2 large input_1 input_a_4 input_6 object math correct\nTopic #98:\ndenominator_6 numerator_5 left_165 6_6 numerator_6 pieces_1 right_855 fraction_cblock_total_count_6 shark right_337 6_7 left_337 right_509 right_682 left_510 right_251 review den_6 right_625 lcm_sum_ sum_ __as3_type_fraction denominator_1 answer fraction_cblock_chains_ numerator_1 input_6 add 17 number\nTopic #99:\nusing fractional parts symbols students lesson set shapes various fraction answer write whole_ simplifying used transition num_9 den_9 divide flower num_4 number enter num_1 num_3 fractions yard way den_2 line\n\n" ], [ "n_features = 1000\nn_samples = len(data_samples)\nn_topics = 50\nn_top_words = 20", "_____no_output_____" ], [ "print(\"Extracting tf-idf features for NMF...\")\ntfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,\n max_features=n_features,\n stop_words='english')\nt0 = time()\ntfidf = tfidf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\n\nprint(\"Extracting tf features for LDA...\")\ntf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,\n max_features=n_features,\n stop_words='english')\nt0 = time()\ntf = tf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\n", "Extracting tf-idf features for NMF...\ndone in 13.200s.\nExtracting tf features for LDA...\ndone in 9.813s.\n" ], [ "print(\"\\nTopics in NMF model:\")\ntfidf_feature_names = tfidf_vectorizer.get_feature_names()\nprint_top_words(nmf, tfidf_feature_names, n_top_words)\n\n", "\nTopics in NMF model:\nTopic #0:\nfraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 fraction_cblock_counts_ denominator_1 denominator_4 denominator_12 left_175 denominator_3 denominator_6 fraction_cblock_containment_ right_865 numerator_2 denominator_2 unit3_ left_176 right_347\nTopic #1:\nfraction_circle_groups_ fraction_circle_counts_ pieces_1 chains_ scale_1 scale_0 fraction_circle_containment_ y_350 55 fraction_circle_total_count_6 
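Careful with this re-run: `n_topics` was lowered to 50 and the feature matrices were re-extracted, but the `nmf` object being printed was fit earlier with 100 components, which is why these topic lists repeat the previous run's topics (only truncated to 20 words each). Getting a genuine 50-topic factorization requires refitting before printing; a minimal sketch using the notebook's existing names (on scikit-learn >= 1.2, `get_feature_names()` becomes `get_feature_names_out()` and `alpha` becomes `alpha_W`/`alpha_H`):

```python
# Refit on the freshly computed tf-idf matrix so the printout reflects
# n_topics = 50 rather than reusing the earlier 100-component model.
nmf = NMF(n_components=n_topics, random_state=1,
          alpha=.1, l1_ratio=.5).fit(tfidf)

tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
```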
fraction_circle_total_count_5 y_300 y_325 x_250 y_400 fraction_circle_total_count_2 x_ right_270 left_0 x_750\nTopic #2:\nplain_image_groups_ number_line total_1 cms wootmath_fractions swf url_assets markers objects_v2 start_marker objects traveled end_marker far distance left_96 snail ladybug start figure\nTopic #3:\ncommon match having shade fraction_input_value_ numerators input_a_ denominators choose comparison correct compare fraction fractions students circle bar polygon flower star\nTopic #4:\ngrid model build 1x10 visual models using students answer 10x10 amounts introducing tenth 20 tenths object 10 18 multiply 14\nTopic #5:\nnumberline_associations_ location label line number correct drag obj_value_ yard pos_value_1 interval obj_name_answer_text obj_name_eqn pos_value_0 identify input_ correction partitioning obj_value_1 students\nTopic #6:\npieces_1 chains_ lcm_sum_ sum_ numerator_5 numerator_4 numerator_6 __as3_type_fraction fraction_circle_containment_ numerator_7 numerator_3 numerator_8 fraction_circle_counts_ fraction_circle_total_count_10 right_0 left_270 left_0 numerator_9 fraction_circle_total_count_9 fraction_circle_total_count_8\nTopic #7:\n12 denominator_12 den_12 reds pieces_1 piece1_ 12_6 12_4 12_3 12_ fraction red input_6 twelfth input_a_6 numerator_11 6_ 12_8 pieces right_270\nTopic #8:\norder arrange greatest boxes drag fractions fraction common 43 42 32 63 students 81 numerator 83 multiple 51 generating 14\nTopic #9:\n10 denominator_10 den_10 pieces_1 purples 10_5 10_10 numerator_10 purple numerator_9 10_ fraction input_a_10 input_10 5_ 10_9 10_8 10_6 input_5 input_a_5\nTopic #10:\nimage_object_groups_ singles symbolic create set shading shade parts objects given swf url_assets model lesson off_2 off_1 14 fraction on_0 students\nTopic #11:\nreview topics selected material following earlier lessons fractions introducing modeling comparing adding line sets benchmark placing partitioning building multiplication labeling\nTopic #12:\nimage_object_groups_ objects complex singles sets shade swf url_assets modeling 14 total_8 identify multiple multiply total_14 number on_0 groups off_3 total_6\nTopic #13:\nbenchmark comparison correct radio_text_ asked noticing larger operator smaller identify using input_ choose compare models students lesson fractions fraction answer\nTopic #14:\nobject whole_1 model number answer lesson decimals included names shown x_750 fraction_input_value_1 pieces mixed fraction_circle_total_count_2 size add sized understand x_250\nTopic #15:\n15 denominator_15 den_15 5_ greens pieces_1 15_5 15_3 orange fraction 15_ 5_1 numerator_14 smallest 15_1 input_a_5 sum_ lcm_sum_ left_342 numerator_13\nTopic #16:\ngrid notation decimal shading introduced 1x10 model answer students model_lbl_0 decimals words using input_b_5 14 input_a_7 meters 24 fractional input_b_1\nTopic #17:\nparts various shapes fractional symbols set using lesson shaded whole_ fraction students den_8 answer equal piranhas den_7 den_10 fraction_input_value_7 den_9\nTopic #18:\nzero fourths fraction_cblock_total_count_16 fraction_cblock_total_count_15 fraction_cblock_total_count_14 fraction_cblock_total_count_13 fraction_cblock_total_count_12 fraction_cblock_total_count_11 fraction_cblock_total_count_10 fraction_cblock_total_count_1 fraction_cblock_counts_ fraction_cblock_containment_ fraction_cblock_chains_ fraction frac_piece_ fourth feet foundation formed form\nTopic #19:\nbar1_ bar left_130 names pieces black lcm_sum_ sum_ right_820 size model piece fraction dragging help used 
understand __as3_type_fraction sized denominator_1\nTopic #20:\nterms tothe ans1 denominator box numerator familiar goal ans0 drag answer lesson fractions students person meters den_15 shows main symbolically\nTopic #21:\nshown input_ input_a_ choose comparison decimals correct object model shading 1x10 magnitude understanding deepen compare students build estimate hundredths comparison1\nTopic #22:\naddition complete problem_text_1 sentences addends missing sentence problem_text_2 bitmap_text_inputs_ bitmap_text_interp_ input_b_1 answer students input_b_2 input_a_2 lesson subtraction input_a_3 adding input_a_1\nTopic #23:\ncommon mix numerators denominators order denominator numerator enter smallest fractions greatest whole_ students fraction answer den_12 den_15 comparison1 den_7 den_9\nTopic #24:\nlength line number yards divide determine yard bar lengths fraction_input_value_ enter equal use meters whole_ way fraction measure den_8 familiar\nTopic #25:\ncommon make true having numerators boxes fractions denominators comparison compare drag answer students greater use numbers decimals 24 fraction_cblock_total_count_1 different\nTopic #26:\nhundredths 10x10 grid decimal model build 19 14 answer notation students introduced 24 understanding magnitude 18 20 models 16 visual\nTopic #27:\ntenths 1x10 expressed meter input_12 start beetle work model_lbl_0 build magnitude locate understanding problems shark word answer start_marker obj_name_object swam\nTopic #28:\n2_ fraction denominator_2 lcm_sum_ sum_ chains_ pieces_1 __as3_type_fraction fraction_circle_counts_ 2_1 fraction_circle_containment_ numerator_1 yellow x_512 right_180 right_270 y_300 fraction_circle_groups_ left_0 pieces\nTopic #29:\nplain_image_groups_ juice problems number_line context total_1 wootmath_fractions cms reasoning swf url_assets choose real world comparison size numerator compare correct fractions\nTopic #30:\nequal_parts radio_group_mc1_ radio_group_mc2_ plain_image_groups_ parts text_yes choice_a fourths total_1 cms wootmath_fractions swf url_assets object shaded area text_no demonstrated partitioned evaluate\nTopic #31:\nform simplest enter included subtract answer whole_1 visual models simplifying mixed lesson add students whole_2 fractions fraction_input_value_1 simplify like difference\nTopic #32:\ncomparing knowledge extend benchmark input_a_ input_ enter correct fractions 20 lesson 60 30 comparison1 40 students 50 multiple 24 experience\nTopic #33:\naways strategies variety sense benchmark denominator order numerator number common using fractions 81 students 51 greater 83 63 input_a_3 input_a_2\nTopic #34:\nfractions denominator compare arrange boxes greatest models using lesson drag visual review answer learn students fraction person box left_175 reasoning\nTopic #35:\nmisc_objects plain_image_groups_ simplest ant_alt ladybug_alt wootmath_fractions cms form swf bugs ladybugs url_assets sets represent objects total_2 use total_3 total_4 enter\nTopic #36:\norder common denominator whole_ smaller greater fractions den_15 fraction fraction_input_value_7 num_7 students den_12 answer den_10 fraction_input_value_6 num_6 den_8 den_9 fraction_input_value_8\nTopic #37:\nunit_ lcm_sum_ sum_ __as3_type_fraction fraction_circle_counts_ x_512 1_1 fraction_circle_containment_ numerator_1 y_350 bigger scale_0 125 problem addition denominator_2 pieces_1 model left_176 fraction_circle_total_count_2\nTopic #38:\ncircle1_ fraction_circle_groups_ pieces_1 chains_ sum_ lcm_sum_ circle names pieces fraction_circle_counts_ y_325 
x_300 fraction_circle_containment_ __as3_type_fraction fraction size model dragging black understand\nTopic #39:\narea_target_contents_ night_sky plain_image_groups_ drag_and_drop objects cookie experience x_468 identifying total_1 swf group url_assets gain area drag work fourth identify set\nTopic #40:\nfraction_cblock_chains_ sum_ lcm_sum_ left_100 bar2_ bar1_ __as3_type_fraction denominator_2 numerator_1 right_790 pieces_1 denominator_1 fraction_cblock_containment_ fraction_cblock_counts_ right_445 bar0_ undefined 1_2 2_1 powerful\nTopic #41:\nproper order numerator compare large common fractions students numbers 32 fraction reasoning circle den_9 woot 18 16 42 den_8 20\nTopic #42:\ngiven relative unit names piece y_350 fraction_circle_groups_ say depending important limited flexible need naming pieces different fourth second half cover\nTopic #43:\naquarium image_object_groups_ area_target_contents_ drag_and_drop piranhas piranha objects x_468 swf url_assets plain_image_groups_ fish total_1 drag experience identifying gain on_0 group off_3\nTopic #44:\nui png plain_image_groups_ total_2 decimal point arrows left_arrow right_arrow wootmath_fractions cms url_assets patterns locations points multiplied placement divided use correct\nTopic #45:\nleft_90 right_780 denominator_1 unit2_ fraction_cblock_chains_ lcm_sum_ sum_ equivalent pieces_1 equal __as3_type_fraction right_435 1_2 build numerator_1 using relating fraction algorithm right_262\nTopic #46:\ndecimal break range enter given hundredths input_a_0 input_0 model tenths object students answer 16 17 18 14 19 25 notation\nTopic #47:\nasks true select make enter operator sentence number makes lesson statement students correct comparison compare fractions fraction comparison1 input_ input_1\nTopic #48:\nequivalent group choosing apply fraction understanding box given lesson fractions drag left_176 answer piece0_ use equivalence students simple deepen modeled\nTopic #49:\npizza ate estimate eat sums friend x_475 y_384 did benchmarks fraction_circle_total_count_1 half context comparing real world 1_1 learn fractions greater\nTopic #50:\nordering review lessons topics earlier following material selected intro fractions common numerator generating comparing bars greatest smallest modeling multiple equal\nTopic #51:\nnum_1 fraction_input_value_1 whole_ den_2 den_3 den_4 enter smallest den_5 whole_1 answer fractions den_6 den_8 simplified students having difference bar1_ run\nTopic #52:\ndistance partition measure line animal traveled student short deepens role number animation understanding bars using start yard objects_v2 animations swam\nTopic #53:\npiece_0_ pieces_1 chains_ piece_1_ fraction_circle_groups_ sum_ lcm_sum_ fraction_circle_counts_ left_0 __as3_type_fraction x_512 numerator_2 piece_2_ fraction_circle_containment_ cover pieces y_300 scale_1 right_120 1_1\nTopic #54:\ndecimals expressed introducing deepen understanding tenths magnitude hundredths use true make writing composing compose input_ compare order shark input_11 answer\nTopic #55:\ncircle1_1_ circle1_2_ fraction_circle_groups_ sum_ lcm_sum_ pieces_1 fraction_circle_counts_ numerator_1 __as3_type_fraction object fraction_circle_containment_ y_350 scale_1 x_750 x_250 names relative given piece unit\nTopic #56:\njuice pitcher plain_image_groups_ orange represent number_line line total_1 wootmath_fractions cms contexts real world swf fraction url_assets number learn whole_ students\nTopic #57:\ndenominator_7 fraction_cblock_chains_ sum_ lcm_sum_ seventh __as3_type_fraction 
left_80 pieces_1 7_3 7_1 light numerator_3 den_7 numerator_6 7_5 7_7 right_770 7_2 numerator_4 right_228\nTopic #58:\nnumber line location mixed miles work context youranswer did proper drag shown students whole_2 run correct main ran placing multiplication\nTopic #59:\nrectangle write identify symbols set shade fraction_input_value_ match review fraction circle models using fractions students complex shaded area shapes group\nTopic #60:\nnum_2 fraction_input_value_2 whole_ den_3 den_4 enter whole_2 den_6 fractions answer den_5 smallest fraction second students den_9 having means bars different\nTopic #61:\ninput_a_1 bitmap_text_interp_ bitmap_text_inputs_ simplify fraction_input_value_ help input_1 use fraction fractions simplest enter visual models equation_text_1 problem_text_1 equation_1 way simplifying students\nTopic #62:\ntransition write simplify support models using used use simplest greatest form visual denominator number common students numerator enter lesson fraction\nTopic #63:\nhalf lasso input_0 input_a_0 benchmark use decimal relative size enter models compare hundredths 14 bigger answer students 50 18 17\nTopic #64:\nimproper fraction far line proper location fractions did context work enter shown students simplifying den_5 simplest represent gain whole_ learn\nTopic #65:\nmile run divide final shark wants walked swam line lengths quotients zero obj_name_obj pos_value_0 objects_v2 input_11 place hippo equal panda\nTopic #66:\nunit1_ unit2_ sum_ lcm_sum_ __as3_type_fraction scale_0 numerator_1 fraction_circle_containment_ left_175 1_2 denominator_6 x_700 denominator_2 unit3_ y_300 pieces_1 denominator_3 numerator_2 denominator_1 right_865\nTopic #67:\nmug plain_image_groups_ hot chocolate number_line total_1 wootmath_fractions cms swf url_assets represent line real world fraction number contexts learn whole_ applications\nTopic #68:\ndenominator_9 sum_ lcm_sum_ fraction_cblock_chains_ den_9 9_ __as3_type_fraction white numerator_7 9_1 pieces_1 ninth 9_3 left_80 bar1_ whites 9_9 right_206 numerator_8 9_2\nTopic #69:\nnumbers mixed learn whole_1 lesson subtract number subtracting unit divided fraction_input_value_1 divide make whole_2 difference improper feet large included enter\nTopic #70:\nexpress words decimal form shown box model_lbl_0 visual models answer drag hundredths students locate use left_80 make true pieces result\nTopic #71:\n3_ denominator_3 fraction sum_ lcm_sum_ brown 3_1 __as3_type_fraction numerator_2 pieces_1 chains_ left_30 fraction_circle_counts_ denominator_6 right_270 fraction_circle_containment_ 6_2 3_2 numerator_1 9_3\nTopic #72:\nnumberline_associations_ pos_value_0 obj_name_object line drag number objects_v2 represent input_12 obj_value_a start meter elephant kangaroo fraction locate giraffe mile hippo input_9\nTopic #73:\nbiked mile ran total numberline_associations_ stopped obj_name_a_text obj_value_a distance pos_value_1 line add non sum number greater unit using input_12 fractions\nTopic #74:\nrenaming represented build hundredths tenths grid visual models enter answer students amounts drag represent box boxes numbers writing composing deepen\nTopic #75:\nradio_group_problem_ identify numerator denominator given radio_choice_a radio_choice_b choice_b choice_a fractions fraction fourth think half piece estimate conventions came text_1 radio_text_1\nTopic #76:\nnum_3 fraction_input_value_3 den_4 whole_ whole_3 den_6 enter den_7 answer den_5 greatest fractions numerator_3 included long students means bars fraction different\nTopic 
#77:\ndenominator_8 sum_ lcm_sum_ fraction_cblock_chains_ 8_ __as3_type_fraction numerator_3 8_1 numerator_7 eighth 8_4 8_3 gray 8_2 pieces_1 den_8 numerator_1 grays 8_5 8_6\nTopic #78:\nfraction_circle_groups_ scale_0 fraction_circle_counts_ piece x_675 x_200 x_811 pieces_1 magnitude box sense modeling fraction_circle_total_count_4 x_550 y_415 relative y_450 10_1 drag learn\nTopic #79:\nconnecting symbol word review names shapes earlier following lessons selected topics material bars model mixed fraction shaded say modeling den_8\nTopic #80:\nalgorithm creating needed equivalent apply learn support missing visual eqn_1 lesson models enter eqn_2 numerator input_6 fractions eqn_3 bitmap_text_interp_ bitmap_text_inputs_\nTopic #81:\n11 think came cut bigger greater den_15 piece pie cake piranha brownies piranhas notation on_0 introduced terms 20 whole_ support\nTopic #82:\nadd sum denominators unlike symbolically like greater multiply learn enter lesson fractions adding models y_350 missing visual solution x_705 students\nTopic #83:\nnum_4 fraction_input_value_4 den_5 whole_ den_6 answer greatest fraction enter den_9 numerator_4 fractions students den_7 complex circle0_ den_8 bar1_ piranhas bars\nTopic #84:\nmental gain magnitude hundredths model visual compare models answer students decimals experience shown identifying 60 simplifying person object box input_a_8\nTopic #85:\n1_ fraction lcm_sum_ sum_ __as3_type_fraction numerator_1 denominator_1 fraction_circle_containment_ denominator_2 1_1 left_125 black fraction_circle_counts_ circle denominator_3 right_815 3_3 browns 2_2 4_4\nTopic #86:\nexactly fraction relate covering rectangle shaded fraction_input_value_ learn pieces lesson input_a_3 answer students input_a_2 input_a_6 right_180 estimate cover shown piece1_\nTopic #87:\nfrac_piece_ fraction_circle_groups_ relative given unit names piece chains_ lcm_sum_ sum_ pieces flexible important limited need depending pieces_1 naming fraction_circle_counts_ different\nTopic #88:\nnum_5 fraction_input_value_5 den_6 whole_ den_9 numerator_5 answer fractions enter den_7 greatest fraction shaded den_8 denominator_6 students simplified cats greater different\nTopic #89:\nlong bar yards fraction_input_value_ fraction determine use number length line locate students represent express feet amounts whole_ using fractional den_8\nTopic #90:\nmultiple denominator provided symbols subtract mult_n_1_ mult_d_1_ second difference using fractions enter solution missing answer students equation_1 whole_ equation_2 simplest\nTopic #91:\nequal_parts_v2 plain_image_groups_ total_1 cms wootmath_fractions swf url_assets parts formed demonstrated evaluate partitioned area undefined understanding sized boxes equal size answer\nTopic #92:\nbitmap_text_inputs_ bitmap_text_interp_ input_a_2 model_lbl_0 input_a_4 input_a_6 input_a_3 input_b_6 input_b_4 beginning introduces input_b_8 input_a_5 fraction_input_value_ express input_2 input_a_8 equation_2 input_b_5 input_b_3\nTopic #93:\ncircle pieces relationships woot explore familiar math introduced x_300 size y_300 equals scale_1 black equal fraction fraction_circle_counts_ fraction_circle_groups_ students answer\nTopic #94:\n4_ denominator_4 fraction dark lcm_sum_ sum_ blue 4_1 __as3_type_fraction fraction_circle_counts_ chains_ left_0 numerator_1 simple numerator_3 right_270 x_512 8_2 4_2 numerator_2\nTopic #95:\ncomplete sentence math correct drag answer people equally subtraction models split amounts meaning groups division shading undefined visual multiplication 
students\nTopic #96:\ndenominator_5 fraction_cblock_chains_ lcm_sum_ sum_ numerator_3 pieces_1 __as3_type_fraction numerator_2 left_80 numerator_4 fifth 5_ 5_3 right_770 5_2 bar1_ 5_1 5_4 fraction_cblock_counts_ orange\nTopic #97:\ncontexts world real learn represent answer simplified groups multiply giraffe students elephant simplify obj_name_object kangaroo objects_v2 subtract hippo different panda\nTopic #98:\n13 den_15 terms 24 numerator denominator enter familiar whole_ goal problem radio_choice_b off_1 experience identifying 20 30 input_9 rectangle gain\nTopic #99:\n100 50 10x10 build use choosing comparison1 given den_2 grid size input_a_ input_ 25 zero correct place pieces introducing distance\n\n" ], [ "print(\"Fitting LDA models with tf features, \"\n \"n_samples=%d and n_features=%d...\"\n % (n_samples, n_features))\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,\n learning_method='online',\n learning_offset=50.,\n random_state=0)\nt0 = time()\nlda.fit(tf)\nprint(\"done in %0.3fs.\" % (time() - t0))\nprint(\"\\nTopics in LDA model:\")\ntf_feature_names = tf_vectorizer.get_feature_names()\nprint_top_words(lda, tf_feature_names, n_top_words)", "Fitting LDA models with tf features, n_samples=100000 and n_features=1000...\n" ], [ "\nfrom sklearn.cluster import KMeans, MiniBatchKMeans\ntrue_k = 100\n\nkm = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1,\n init_size=1000, batch_size=1000)", "_____no_output_____" ], [ "print(\"Clustering sparse data with %s\" % km)\nt0 = time()\nkm.fit(tf)\nprint(\"done in %0.3fs\" % (time() - t0))\nprint()", "Clustering sparse data with MiniBatchKMeans(batch_size=1000, compute_labels=True, init='k-means++',\n init_size=1000, max_iter=100, max_no_improvement=10,\n n_clusters=100, n_init=1, random_state=None,\n reassignment_ratio=0.01, tol=0.0, verbose=0)\ndone in 1.783s\n\n" ], [ "print(\"Top terms per cluster:\")\n\n\norder_centroids = km.cluster_centers_.argsort()[:, ::-1]\nterms = tf_vectorizer.get_feature_names()\nfor i in range(true_k):\n print(\"Cluster %d:\" % i, end='')\n for ind in order_centroids[i, :10]:\n print(' %s' % terms[ind], end='')\n print()", "Top terms per cluster:\nCluster 0: fraction_cblock_chains_ sum_ lcm_sum_ pieces_1 __as3_type_fraction fraction_cblock_counts_ numerator_1 denominator_1 fraction answer\nCluster 1: fraction_cblock_chains_ lcm_sum_ sum_ 12 __as3_type_fraction fractions pieces_1 numerator_1 problem_text_1 review\nCluster 2: fractions students answer fraction lesson using review enter number models\nCluster 3: fraction_cblock_chains_ lcm_sum_ sum_ pieces_1 __as3_type_fraction numerator_1 denominator_7 bar1_ denominator_1 fraction_cblock_counts_\nCluster 4: pieces_1 fraction_circle_groups_ lcm_sum_ sum_ chains_ __as3_type_fraction unit1_ scale_0 unit2_ fraction_circle_containment_\nCluster 5: numberline_associations_ biked mile sum number lesson line greater pos_value_1 obj_name_a_text\nCluster 6: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 fraction denominator_12 fraction_cblock_counts_\nCluster 7: numberline_associations_ plain_image_groups_ tenths number_line swf total_1 url_assets cms wootmath_fractions students\nCluster 8: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 denominator_4 denominator_8 fraction_cblock_counts_\nCluster 9: pieces_1 fraction_circle_groups_ fraction chains_ sum_ lcm_sum_ fraction_circle_counts_ __as3_type_fraction scale_1 numerator_1\nCluster 10: 
fraction_cblock_chains_ lcm_sum_ sum_ pieces_1 __as3_type_fraction numerator_1 12 denominator_1 fraction fraction_cblock_counts_\nCluster 11: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 left_100 bar2_ bar1_\nCluster 12: 10 pieces_1 fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction bar1_ numerator_1 denominator_1 denominator_10\nCluster 13: line number fraction review fraction_input_value_ earlier placing input_a_1 following material\nCluster 14: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_12 denominator_4 denominator_1 fraction_cblock_counts_\nCluster 15: fraction_circle_groups_ pieces_1 students fraction_circle_counts_ fraction names pieces unit lesson relative\nCluster 16: pieces_1 10 fraction_circle_groups_ sum_ lcm_sum_ piece_0_ chains_ fraction_circle_counts_ pizza __as3_type_fraction\nCluster 17: plain_image_groups_ numberline_associations_ number_line cms swf wootmath_fractions total_1 url_assets number line\nCluster 18: fraction_circle_groups_ pieces_1 sum_ lcm_sum_ __as3_type_fraction numerator_1 fraction fraction_circle_counts_ unit_ fraction_circle_containment_\nCluster 19: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 denominator_8 denominator_1 bar1_ bar0_\nCluster 20: pieces_1 15 fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 denominator_15 denominator_1 fraction_cblock_counts_\nCluster 21: pieces_1 12 fraction_circle_groups_ chains_ fraction lcm_sum_ sum_ fraction_circle_counts_ __as3_type_fraction 2_\nCluster 22: pieces_1 12 fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 denominator_12 fraction fraction_cblock_counts_\nCluster 23: fraction_circle_groups_ pieces_1 order fractions fraction scale_1 drag students number aways\nCluster 24: numberline_associations_ plain_image_groups_ snail url_assets number fraction total_1 students number_line line\nCluster 25: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 bar1_ fraction_cblock_counts_ denominator_1 fraction\nCluster 26: pieces_1 fraction_circle_groups_ fraction sum_ lcm_sum_ chains_ fraction_circle_counts_ __as3_type_fraction pieces 15\nCluster 27: fraction_circle_groups_ pieces_1 fraction_circle_counts_ sum_ lcm_sum_ unit_ scale_0 __as3_type_fraction answer scale_1\nCluster 28: fraction_cblock_chains_ lcm_sum_ sum_ numerator_1 __as3_type_fraction pieces_1 denominator_1 left_175 unit2_ fraction_cblock_counts_\nCluster 29: pieces_1 fraction_circle_groups_ lcm_sum_ sum_ chains_ __as3_type_fraction unit2_ unit1_ fraction_circle_counts_ scale_0\nCluster 30: fraction_cblock_chains_ lcm_sum_ sum_ pieces_1 __as3_type_fraction 10 numerator_1 denominator_1 fraction_cblock_counts_ denominator_10\nCluster 31: pieces_1 fraction_circle_groups_ chains_ fraction lcm_sum_ sum_ __as3_type_fraction fraction_circle_counts_ scale_0 scale_1\nCluster 32: plain_image_groups_ fraction_cblock_chains_ total_1 swf number_line wootmath_fractions url_assets cms pieces_1 lcm_sum_\nCluster 33: plain_image_groups_ number_line total_1 swf cms wootmath_fractions url_assets juice fractions numerator\nCluster 34: pieces_1 fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction bar1_ numerator_1 denominator_1 fraction_cblock_counts_ left_130\nCluster 35: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 bar1_ fraction_cblock_counts_ bar2_\nCluster 36: fraction_cblock_chains_ lcm_sum_ sum_ pieces_1 
__as3_type_fraction numerator_1 denominator_1 bar1_ denominator_2 bar2_\nCluster 37: answer students model image_object_groups_ grid object tenths hundredths decimal fraction\nCluster 38: pieces_1 10 fraction_circle_groups_ fraction chains_ sum_ lcm_sum_ fraction_circle_counts_ __as3_type_fraction 1_\nCluster 39: fraction_cblock_chains_ lcm_sum_ sum_ pieces_1 __as3_type_fraction 12 numerator_1 denominator_2 denominator_1 left_100\nCluster 40: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction pieces_1 numerator_1 denominator_1 fraction_cblock_counts_ fraction bar1_\nCluster 41: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 denominator_12 pieces_1 12 denominator_1 fraction_cblock_counts_\nCluster 42: plain_image_groups_ object swf radio_group_mc2_ wootmath_fractions equal_parts total_1 cms url_assets radio_group_mc1_\nCluster 43: pieces_1 fraction_circle_groups_ chains_ lcm_sum_ sum_ fraction_circle_counts_ __as3_type_fraction fraction unit_ fraction_circle_containment_\nCluster 44: pieces_1 fraction_circle_groups_ lcm_sum_ sum_ chains_ __as3_type_fraction fraction scale_0 numerator_1 fraction_circle_counts_\nCluster 45: numberline_associations_ number line drag location students answer correct fraction pos_value_0\nCluster 46: pieces_1 fraction_cblock_chains_ 12 lcm_sum_ sum_ __as3_type_fraction numerator_1 denominator_12 denominator_1 fraction_cblock_counts_\nCluster 47: plain_image_groups_ cms wootmath_fractions url_assets swf total_1 answer students fraction number_line\nCluster 48: mile fractions biked object answer fraction_input_value_1 situations whole_1 improper num_1\nCluster 49: plain_image_groups_ pieces_1 15 radio_group_mc2_ cms parts equal_parts sixths radio_group_mc1_ swf\nCluster 50: pieces_1 fraction_circle_groups_ lcm_sum_ sum_ __as3_type_fraction chains_ unit2_ unit1_ numerator_1 fraction_circle_containment_\nCluster 51: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 bar1_ fraction bar pieces\nCluster 52: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 denominator_8 denominator_12 denominator_1 12\nCluster 53: pieces_1 12 fraction_circle_groups_ fraction chains_ lcm_sum_ sum_ fraction_circle_counts_ __as3_type_fraction students\nCluster 54: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 unit1_ unit2_ denominator_2\nCluster 55: fraction_circle_groups_ pieces_1 fraction_cblock_chains_ lcm_sum_ sum_ 10 fraction_circle_counts_ __as3_type_fraction numerator_1 answer\nCluster 56: model grid tenths models visual using answer build students 1x10\nCluster 57: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 denominator_8 pieces_1 denominator_1 fraction_cblock_counts_ left_90\nCluster 58: pieces_1 12 fraction_circle_groups_ chains_ lcm_sum_ sum_ fraction __as3_type_fraction fraction_circle_counts_ denominator_12\nCluster 59: fraction_cblock_chains_ pieces_1 sum_ lcm_sum_ __as3_type_fraction numerator_1 denominator_1 unit2_ left_90 right_780\nCluster 60: pieces_1 fraction_circle_groups_ fraction chains_ lcm_sum_ sum_ fraction_circle_counts_ __as3_type_fraction pieces students\nCluster 61: fraction_cblock_chains_ pieces_1 lcm_sum_ sum_ __as3_type_fraction numerator_1 fraction fraction_cblock_counts_ denominator_1 bar1_\nCluster 62: fraction_cblock_chains_ sum_ lcm_sum_ plain_image_groups_ __as3_type_fraction pieces_1 numerator_1 number_line wootmath_fractions swf\nCluster 63: fraction_cblock_chains_ lcm_sum_ sum_ 
__as3_type_fraction pieces_1 numerator_1 denominator_9 denominator_1 fraction_cblock_counts_ denominator_7\nCluster 64: fraction_cblock_chains_ lcm_sum_ sum_ pieces_1 __as3_type_fraction numerator_1 12 10 fraction denominator_1\nCluster 65: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction pieces_1 numerator_1 denominator_9 fraction fraction_cblock_counts_ denominator_8\nCluster 66: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_4 denominator_1 fraction_cblock_counts_ denominator_8\nCluster 67: pieces_1 fraction_cblock_chains_ lcm_sum_ sum_ 10 plain_image_groups_ __as3_type_fraction cms wootmath_fractions form\nCluster 68: pieces_1 15 fraction_circle_groups_ fraction chains_ sum_ lcm_sum_ fraction_circle_counts_ __as3_type_fraction pieces\nCluster 69: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction pieces_1 numerator_1 denominator_1 fraction_cblock_counts_ fraction denominator_8\nCluster 70: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction pieces_1 numerator_1 fraction_cblock_counts_ fraction denominator_1 denominator_2\nCluster 71: pieces_1 fraction_circle_groups_ lcm_sum_ sum_ chains_ fraction_circle_counts_ __as3_type_fraction circle1_1_ circle1_2_ 12\nCluster 72: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction pieces_1 numerator_1 denominator_1 numerator_2 fraction_cblock_counts_ left_80\nCluster 73: pieces_1 fraction_circle_groups_ chains_ fraction sum_ lcm_sum_ __as3_type_fraction 1_ fraction_circle_counts_ circle1_\nCluster 74: fraction_circle_groups_ pieces_1 12 chains_ fraction scale_1 lcm_sum_ sum_ fraction_circle_counts_ __as3_type_fraction\nCluster 75: number line fraction students improper length bar answer mixed enter\nCluster 76: common fractions fraction students order denominators numerators numerator answer compare\nCluster 77: fraction_cblock_chains_ plain_image_groups_ pieces_1 lcm_sum_ sum_ total_1 number_line url_assets cms wootmath_fractions\nCluster 78: fraction_cblock_chains_ lcm_sum_ sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 left_175 fraction_cblock_counts_ denominator_8\nCluster 79: fraction_cblock_chains_ sum_ lcm_sum_ pieces_1 __as3_type_fraction numerator_1 denominator_3 denominator_1 numerator_2 denominator_6\nCluster 80: pieces_1 10 fraction_circle_groups_ chains_ sum_ lcm_sum_ fraction fraction_circle_counts_ __as3_type_fraction 2_\nCluster 81: pieces_1 12 fraction_circle_groups_ lcm_sum_ sum_ chains_ __as3_type_fraction unit1_ unit2_ scale_0\nCluster 82: fraction_circle_groups_ pieces_1 chains_ scale_0 fraction_circle_counts_ lcm_sum_ sum_ __as3_type_fraction scale_1 fraction\nCluster 83: pieces_1 fraction_circle_groups_ piece_0_ sum_ chains_ lcm_sum_ fraction_circle_counts_ __as3_type_fraction pieces fraction\nCluster 84: fraction_circle_groups_ pieces_1 fraction input_a_1 scale_1 answer chains_ students models fractions\nCluster 85: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 denominator_5 denominator_1 fraction fraction_cblock_counts_\nCluster 86: pieces_1 fraction_circle_groups_ sum_ lcm_sum_ chains_ __as3_type_fraction fraction_circle_containment_ scale_0 numerator_1 unit2_\nCluster 87: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 left_60 fraction_cblock_counts_ bar2_\nCluster 88: fraction_circle_groups_ pieces_1 fraction_circle_counts_ scale_0 answer piece scale_1 fraction pieces students\nCluster 89: pieces_1 fraction_circle_groups_ fraction sum_ lcm_sum_ chains_ 1_ 
fraction_circle_counts_ __as3_type_fraction pieces\nCluster 90: fraction_cblock_chains_ 10 pieces_1 sum_ lcm_sum_ __as3_type_fraction denominator_10 numerator_1 bar1_ denominator_1\nCluster 91: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 denominator_1 denominator_12 left_175 fraction_cblock_counts_\nCluster 92: fraction_cblock_chains_ sum_ lcm_sum_ pieces_1 __as3_type_fraction numerator_1 denominator_1 denominator_3 numerator_2 unit2_\nCluster 93: fraction_circle_groups_ pieces_1 chains_ fraction_circle_counts_ scale_1 pieces fraction answer circle students\nCluster 94: fraction_cblock_chains_ pieces_1 lcm_sum_ sum_ __as3_type_fraction numerator_1 denominator_1 bar1_ fraction_cblock_counts_ denominator_7\nCluster 95: fraction_cblock_chains_ pieces_1 sum_ lcm_sum_ __as3_type_fraction bar1_ numerator_1 denominator_1 fraction bar\nCluster 96: fraction_cblock_chains_ sum_ lcm_sum_ __as3_type_fraction numerator_1 pieces_1 fraction denominator_1 fraction_cblock_counts_ 1_\nCluster 97: pieces_1 sum_ lcm_sum_ fraction_circle_groups_ chains_ __as3_type_fraction piece_0_ piece_1_ fraction_circle_counts_ numerator_1\nCluster 98: plain_image_groups_ cms wootmath_fractions swf total_1 url_assets number_line number markers objects_v2\nCluster 99: fraction_cblock_chains_ lcm_sum_ sum_ pieces_1 __as3_type_fraction numerator_1 bar1_ fraction fraction_cblock_counts_ bar\n" ], [ "len(km.labels_)", "_____no_output_____" ], [ "np.bincount(km.labels_)", "_____no_output_____" ], [ "df3['cluster_100'] = km.labels_\n", "_____no_output_____" ], [ "df3['trait_1'] = df3['behavioral_traits'].apply(lambda x : x[0] if len(x) > 0 else 'None' )", "_____no_output_____" ], [ "df3['trait_2'] = df3['behavioral_traits'].apply(lambda x : x[1] if len(x) > 1 else 'None' ) ", "_____no_output_____" ], [ "df3['trait_1'].value_counts()", "_____no_output_____" ], [ "df3['trait_2'].value_counts()", "_____no_output_____" ], [ "df_cluster_100 = df3.groupby('cluster_100')", "_____no_output_____" ], [ "df_cluster_100.head()", "_____no_output_____" ], [ "df3['percent_correct'].groupby(df3['cluster_100']).describe()", "_____no_output_____" ], [ "df_trait_1 = df3.groupby(['cluster_100', 'trait_1']).size().unstack(fill_value=0)", "_____no_output_____" ], [ "df_trait_2 = df3.groupby(['cluster_100', 'trait_2']).size().unstack(fill_value=0)", "_____no_output_____" ], [ "df_trait_2", "_____no_output_____" ], [ "df_trait_2.columns", "_____no_output_____" ], [ "df_trait_1.columns", "_____no_output_____" ], [ "[x for x in df_trait_2.columns if x not in df_trait_1.columns ]", "_____no_output_____" ], [ "[x for x in df_trait_1.columns if x not in df_trait_2.columns ]", "_____no_output_____" ], [ "#df_trait_1 = df_trait_1.drop('None', axis=1)\n#df_trait_2 = df_trait_2.drop('None', axis=1)", "_____no_output_____" ], [ "df_traits = pd.merge(left=df_trait_1,right=df_trait_2, how='left' )\n", "_____no_output_____" ], [ "df_trait_1.index.rename('cluster_100', inplace=True)", "_____no_output_____" ], [ "df_trait_2.index.rename('cluster_100', inplace=True)", "_____no_output_____" ], [ "df_traits.columns", "_____no_output_____" ], [ "df_traits = pd.concat([df_trait_1, df_trait_2], axis=1)", "_____no_output_____" ], [ "df_traits.columns", "_____no_output_____" ], [ "df_traits", "_____no_output_____" ], [ "df_traits = df_traits.drop('None', axis=1)", "_____no_output_____" ], [ "df_traits.to_csv('cluster_100.csv')", "_____no_output_____" ], [ "df_traits2 = pd.concat([df3['percent_correct'].groupby(df3['cluster_100']).describe(), 
df_traits], axis=1)", "_____no_output_____" ], [ "df_traits2.to_csv('cluster_100_plus_correct.csv')", "_____no_output_____" ], [ "df_traits_dict = df_traits.to_dict(orient='dict')", "/Users/brianmckean/anaconda2/envs/hwenv/lib/python3.6/site-packages/pandas/core/frame.py:881: UserWarning: DataFrame columns are not unique, some columns will be omitted.\n \"columns will be omitted.\", UserWarning)\n" ], [ "df_traits_dict", "_____no_output_____" ], [ "df_traits_dict2 = {}\ncluster_with_no_trait = list(np.arange(100))\ncluster_with_lt_10_trait = list(np.arange(100))", "_____no_output_____" ], [ "for trait in df_traits_dict:\n #print (idx, trait)\n df_traits_dict2[trait] = {}\n for cluster in df_traits_dict[trait]:\n #print (trait, cluster, df_traits_dict[trait][cluster])\n if df_traits_dict[trait][cluster] > 0:\n df_traits_dict2[trait][cluster] = df_traits_dict[trait][cluster]\n if cluster in cluster_with_no_trait:\n cluster_with_no_trait.remove(cluster)\n if df_traits_dict[trait][cluster] > 9:\n if cluster in cluster_with_lt_10_trait:\n cluster_with_lt_10_trait.remove(cluster)", "_____no_output_____" ], [ "print (df_traits_dict2)", "{'area_model': {2: 16}, 'benchmark_1_2': {98: 8}, 'benchmark_quarters': {98: 66}, 'comparing_frac_gt_lt': {0: 18, 1: 7, 2: 482, 22: 1, 25: 13, 30: 1, 34: 1, 40: 1, 53: 2, 67: 1, 69: 2, 70: 4, 80: 1, 90: 1, 96: 7}, 'counting_hops_for_division': {2: 36}, 'counting_hops_not_ticks': {2: 6, 47: 886}, 'deci_add_to_model': {37: 113}, 'deci_break_tenths_hundredths': {37: 236}, 'deci_building_tenths': {37: 46}, 'deci_compare_no_models': {31: 1, 37: 156, 82: 1, 89: 1}, 'deci_forgot_decimal_point': {37: 274}, 'deci_hops_instead_ticks': {32: 84, 37: 2, 62: 2, 64: 1, 65: 1, 98: 184}, 'deci_hundredths_vs_tenths': {37: 244}, 'deci_incorrect_inequality': {37: 1282}, 'deci_placing_decimal_points': {37: 3, 47: 405}, 'deci_point_location_correct': {30: 1, 32: 86, 37: 2, 62: 3, 63: 1, 65: 1, 98: 11}, 'deci_tens_vs_tenths': {37: 317}, 'deci_understanding_gt_lt': {56: 1425}, 'determine_the_frac_part': {18: 1, 60: 1, 88: 2, 93: 13}, 'dragging_to_add': {2: 32, 4: 29, 9: 1, 16: 2, 18: 94, 20: 1, 21: 15, 27: 5, 29: 244, 31: 7, 38: 55, 43: 33, 44: 15, 50: 214, 53: 15, 58: 58, 60: 5, 68: 22, 74: 3, 80: 25, 81: 68, 82: 10, 86: 3}, 'fraction_of_set': {37: 222}, 'hops_vs_ticks': {2: 2, 75: 218}, 'how_to_model': {9: 99, 16: 2, 18: 125, 20: 1, 21: 283, 26: 1390, 27: 19, 29: 6, 31: 27, 37: 377, 38: 260, 43: 50, 44: 9, 53: 388, 58: 64, 60: 371, 68: 62, 73: 49, 74: 20, 80: 240, 81: 14, 82: 51, 84: 4, 86: 49, 88: 28, 89: 899, 93: 461}, 'identifying_gt_lt': {2: 556, 76: 2250}, 'inequality_symbol': {0: 9, 1: 9, 6: 1, 16: 2, 22: 1, 25: 18, 30: 1, 39: 2, 40: 32, 43: 1, 58: 1, 63: 1, 64: 1, 65: 5, 66: 4, 69: 8, 72: 2, 76: 675, 78: 2, 79: 1, 81: 2, 91: 2, 93: 4, 94: 1, 96: 3}, 'inverts_numerator_denominator': {0: 53, 1: 35, 3: 1, 6: 36, 8: 3, 9: 4, 10: 9, 14: 16, 20: 4, 22: 5, 23: 1, 25: 69, 26: 1, 27: 1, 30: 3, 31: 8, 37: 1481, 38: 1, 39: 1, 40: 86, 43: 2, 44: 2, 46: 1, 47: 7, 52: 5, 53: 2, 57: 17, 58: 1, 60: 3, 61: 15, 63: 46, 65: 10, 66: 8, 67: 2, 68: 4, 69: 32, 70: 20, 72: 3, 73: 2, 76: 2517, 78: 10, 79: 3, 80: 1, 82: 5, 85: 9, 88: 1, 90: 15, 91: 12, 92: 2, 93: 2, 94: 7, 96: 54}, 'measuring_tools': {6: 1, 17: 43, 34: 1, 40: 4, 62: 18, 65: 1, 66: 3, 70: 3, 77: 384, 96: 1}, 'misplaced_fraction_part_nline': {45: 194, 75: 286}, 'mixed_number_quotient': {98: 5}, 'mixed_numbers_on_number_line': {75: 83}, 'modeled_incorrect_comparison': {2: 7, 8: 21, 10: 20, 14: 16, 20: 1, 30: 41, 35: 2, 36: 54, 40: 3, 46: 1, 
52: 2, 63: 33, 64: 19, 65: 22, 66: 24, 69: 127, 72: 226, 78: 13, 79: 7, 85: 18, 91: 4, 94: 12}, 'modeling_fraction_division': {0: 2, 70: 1, 96: 1}, 'multiplication': {2: 121}, 'multiplying_whole_by_proper': {2: 3, 4: 25, 9: 5, 29: 47, 43: 30, 44: 24, 50: 6, 73: 17, 82: 3, 86: 20}, 'nline_as_whole': {6: 1, 32: 134, 62: 27, 65: 1, 66: 1, 69: 1, 75: 15, 78: 2, 94: 1, 96: 1, 98: 141}, 'numerator_off_by_one_nline': {75: 8}, 'only_tenths_entered': {30: 2, 32: 118, 37: 7, 62: 3, 98: 1}, 'partially_drawn_parts': {37: 1, 42: 679}, 'partitioning_number_line': {75: 398}, 'recognizing_the_whole': {2: 24, 9: 19, 15: 920, 18: 21, 21: 5, 26: 67, 27: 26, 29: 2, 31: 4, 43: 4, 53: 46, 58: 4, 60: 123, 68: 3, 71: 1, 73: 4, 74: 2, 82: 12, 86: 5, 88: 1, 89: 3, 93: 11}, 'simplify_with_common_denom': {0: 9, 1: 1, 2: 258, 3: 1, 9: 1, 25: 1, 27: 2, 40: 4, 69: 3, 70: 5, 72: 1, 93: 1, 96: 2}, 'simplifying_bars_2': {0: 1, 2: 11, 8: 2, 14: 25, 41: 2, 52: 10, 57: 35, 63: 7, 65: 23, 66: 46, 69: 5, 78: 7, 85: 1, 90: 4, 91: 16, 92: 51, 94: 34}, 'simplifying_fractions': {2: 82, 9: 3, 31: 1, 64: 1, 73: 1, 74: 1, 81: 1, 82: 6, 84: 2, 88: 1, 93: 3}, 'simplifying_mixed_numbers': {2: 28, 4: 29, 9: 1, 16: 2, 18: 71, 21: 2, 27: 5, 29: 237, 31: 7, 38: 54, 43: 31, 44: 15, 50: 173, 53: 15, 58: 56, 60: 4, 74: 3, 80: 22, 81: 68, 82: 9, 86: 3}, 'simplifying_mixed_numbers_2': {2: 90, 18: 1, 29: 2, 31: 3, 34: 1, 43: 5, 60: 1, 73: 5, 82: 3, 84: 1, 89: 1, 93: 1}, 'simplifying_subtraction': {2: 6, 9: 8, 29: 34, 31: 1, 43: 9, 44: 11, 73: 117, 82: 3, 86: 3}, 'simplifying_with_bars': {8: 14, 10: 17, 30: 10, 52: 12, 57: 32, 65: 15, 66: 35, 78: 17, 79: 1, 91: 1, 92: 53, 94: 15}, 'starting_from_0_nline': {2: 28, 24: 953, 47: 59, 75: 7}, 'using_bars_in_division': {10: 2, 30: 2, 40: 3, 52: 1, 65: 6, 66: 3, 69: 2, 90: 1}, 'using_correct_piece': {6: 17, 8: 2, 10: 3, 14: 4, 30: 10, 41: 9, 46: 10, 54: 26, 59: 1, 63: 1, 64: 2, 65: 3, 66: 2, 69: 1, 78: 1, 92: 24, 96: 13}, 'dragging_to_add_3_circles': {2: 8, 4: 64, 29: 2, 38: 22, 58: 2, 81: 55, 86: 139}, 'equally_sized_parts': {37: 1, 42: 679}, 'modeled_incorrect_numerator': {2: 7, 8: 21, 10: 20, 14: 16, 20: 1, 30: 41, 35: 2, 36: 54, 40: 3, 46: 1, 52: 2, 63: 33, 64: 19, 65: 22, 66: 24, 69: 127, 72: 226, 78: 13, 79: 7, 85: 18, 91: 4, 94: 12}, 'modulo_ans': {75: 8}, 'nline_restart_one': {6: 1, 32: 134, 62: 27, 65: 1, 66: 1, 69: 1, 75: 10, 78: 2, 94: 1, 96: 1, 98: 35}, 'orange_tick': {6: 1, 17: 43, 34: 1, 40: 4, 62: 18, 65: 1, 66: 3, 70: 3, 77: 384, 96: 1}, 'simplifying_answers_nline': {45: 135, 75: 286}, 'wrong_number_parts': {75: 398}}\n" ], [ "cluster_with_no_trait, ", "_____no_output_____" ], [ "len(cluster_with_no_trait)", "_____no_output_____" ], [ "len(cluster_with_lt_10_trait)", "_____no_output_____" ], [ "x = list(df_traits.index)", "_____no_output_____" ], [ "y = df_traits.sum(axis=1)", "_____no_output_____" ], [ "y", "_____no_output_____" ], [ "\nplt.bar( x, y)\n", "_____no_output_____" ], [ "fig, ax = plt.subplots()\n\nrects1 = ax.bar(x, y, color='b')\nax.set_xlabel('Cluster number')\nax.set_ylabel('Lessons with trait at this cluster')\nax.set_title('Traits per cluster')\n\n", "_____no_output_____" ] ] ]
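The cells above call a print_top_words helper whose definition sits earlier in the notebook, outside this excerpt. A minimal sketch of such a helper, with the signature assumed from the call sites rather than taken from the source, reproduces the "Topic #k: word word ..." output shown:

    # Assumed reconstruction of the helper: print the n_top_words highest-weighted
    # terms per topic of a fitted NMF/LDA model (model.components_ has one row per topic).
    def print_top_words(model, feature_names, n_top_words):
        for topic_idx, topic in enumerate(model.components_):
            print("Topic #%d:" % topic_idx)
            print(" ".join(feature_names[i]
                           for i in topic.argsort()[:-n_top_words - 1:-1]))
        print()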
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4af71ed73a7da6a662b536adf9fb477ac6a710
132,999
ipynb
Jupyter Notebook
Working Notebooks/.ipynb_checkpoints/NBA_SQL_LOCAL_VERSION_nb4-checkpoint.ipynb
JGarrechtMetzger/Project3
f6a75fba4658264741b520f71875df18b675bd63
[ "MIT" ]
null
null
null
Working Notebooks/.ipynb_checkpoints/NBA_SQL_LOCAL_VERSION_nb4-checkpoint.ipynb
JGarrechtMetzger/Project3
f6a75fba4658264741b520f71875df18b675bd63
[ "MIT" ]
null
null
null
Working Notebooks/.ipynb_checkpoints/NBA_SQL_LOCAL_VERSION_nb4-checkpoint.ipynb
JGarrechtMetzger/Project3
f6a75fba4658264741b520f71875df18b675bd63
[ "MIT" ]
null
null
null
39.27909
1,513
0.347634
[ [ [ "# Imports", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom sqlalchemy import create_engine\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.style.use('ggplot')\n%matplotlib inline\nnp.set_printoptions(suppress=True)", "_____no_output_____" ] ], [ [ "Goal: Use use SQLAlchemy to investigate the NBA data set.\n\n", "_____no_output_____" ] ], [ [ "#This setting allows us to see every column in the output cell\n\npd.set_option('display.max_columns', None)", "_____no_output_____" ], [ "#Import data\n\nall_seasons_df = pd.read_csv('<File_location>/<File_Name>')\n\n#Example: all_seasons_df = pd.read_csv('/Users/<hackerperson>/Desktop/Coding/Projects/Project3/all_seasons_df.csv')", "_____no_output_____" ], [ "#Check dataframe\n\nall_seasons_df", "_____no_output_____" ], [ "engine = create_engine('postgresql://johnmetzger:localhost@localhost:5432/nba_ht')\n\nnba_all_data = pd.read_csv('/Users/johnmetzger/Desktop/Coding/Projects/Project3/all_seasons_df.csv')\n\n# I'm choosing to name this table \"nba_all_data\" for \"NBA Halftime\"\nnba_ht.to_sql('nba_all_data', engine, index=True)", "_____no_output_____" ] ], [ [ "GET Data on a team", "_____no_output_____" ] ], [ [ "#Here, 'DET' is used as an example\n# This uses the WHERE Command\n\nTeam ='''\nSELECT * FROM nba_all_data WHERE \"TEAM_ABBREVIATION\"='DET';\n '''\n \ntanks = pd.read_sql(Team, engine)\n\ntanks", "_____no_output_____" ], [ "#Here, 'DET' is used as an example\n# Pick attribtues to order. Here you are selecting from the df=nba_all_data\n# and finding out what values in columns (after first SELECT)\n# are greater than the average of a variable. Here it was 'FG_PCT'\n\nWinning ='''\n WITH temporaryTable(averageValue) as (SELECT avg(\"FG_PCT\")\n from nba_all_data)\n SELECT \"SEASON_YEAR\",\"TEAM_NAME\", \"FG_PCT\" \n FROM nba_all_data, temporaryTable \n WHERE nba_all_data.\"FG_PCT\" > temporaryTable.averageValue;\n '''\n\nTeam_winning = pd.read_sql(Winning, engine)\n\nTeam_winning", "_____no_output_____" ] ], [ [ "**PERSONAL FOULS**", "_____no_output_____" ] ], [ [ "PFs ='''\n SELECT \"TEAM_ABBREVIATION\"\n AS \"nba_all_data.TEAM_ABBREVIATION\" FROM nba_all_data GROUP BY \"nba_all_data.TEAM_ABBREVIATION\" HAVING \"nba_all_data.PF\" > 500;\n '''\n\nPFs = pd.read_sql(PFs, engine)\n\nPFs", "_____no_output_____" ], [ "# This one uses HAVING and GROUPBY. It shows that Boston Celtics was the only\n# team with more than an average of 10 personal fouls per game.\n\nPFs ='''\n\nSELECT AVG(\"PF\"), \"TEAM_ABBREVIATION\"\nFROM nba_all_data\nGROUP BY \"TEAM_ABBREVIATION\"\nHAVING AVG(\"PF\") > 10;\n'''\n\n\nPFs = pd.read_sql(PFs, engine)\n\nPFs", "_____no_output_____" ], [ "## Sorted where first half score MAX was higher than 50.\n\nScore ='''\n\nSELECT MAX(\"PTS\"), \"TEAM_ABBREVIATION\"\nFROM nba_all_data\nGROUP BY \"TEAM_ABBREVIATION\"\nHAVING MAX(\"PTS\") > 50;\n'''\n\n\nScore = pd.read_sql(Score, engine)\n\nScore", "_____no_output_____" ], [ "c.execute('SELECT * FROM stocks WHERE symbol=?', t)", "_____no_output_____" ] ], [ [ "# Intro", "_____no_output_____" ] ], [ [ "query = '''\n SELECT * FROM nba_ht;\n '''\nsimpsons=pd.read_sql(query, engine)", "_____no_output_____" ], [ "simpsons", "_____no_output_____" ], [ "CREATE TABLE namename (column1)", "_____no_output_____" ], [ "query = '''BULK INSERT namename\n FROM '/Users/johnmetzger/Desktop/Coding/Projects/Project3/all_seasons_df.csv',\n WITH( FIRSTROW = 2,\n FIELDTERMINATOR = ',',\n ROWTERMINATOR = '\\n')'''", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a4afe96571cf775ea726d39b6d776033e1b1ee0
141,133
ipynb
Jupyter Notebook
Python-For-Data-Analysis/Chapter 8 Data Wrangling/8.2 Combining and Merging Datasets.ipynb
insomniac-klutz/Data-Science-Test-Code-Archives
463abdc112387bdff2672a616321937fb6ae832c
[ "MIT" ]
null
null
null
Python-For-Data-Analysis/Chapter 8 Data Wrangling/8.2 Combining and Merging Datasets.ipynb
insomniac-klutz/Data-Science-Test-Code-Archives
463abdc112387bdff2672a616321937fb6ae832c
[ "MIT" ]
null
null
null
Python-For-Data-Analysis/Chapter 8 Data Wrangling/8.2 Combining and Merging Datasets.ipynb
insomniac-klutz/Data-Science-Test-Code-Archives
463abdc112387bdff2672a616321937fb6ae832c
[ "MIT" ]
null
null
null
62.697912
46,880
0.709026
[ [ [ "<h1 align='center'> 8.2 Combining and Merging Datasets\n ", "_____no_output_____" ], [ "<b>Database-Style DataFrame Joins", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\ndf1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],\n 'data1': range(7)})", "_____no_output_____" ], [ "df1", "_____no_output_____" ], [ "df2 = pd.DataFrame({'key': ['a', 'b', 'd'],\n 'data2': range(3)})", "_____no_output_____" ], [ "df2", "_____no_output_____" ], [ "pd.merge(df1,df2)", "_____no_output_____" ] ], [ [ "Note that I didn’t specify which column to join on. If that information is not speci‐fied, merge uses the overlapping column names as the keys. It’s a good practice tospecify explicitly, though:", "_____no_output_____" ] ], [ [ "pd.merge(df1,df2,on='key')", "_____no_output_____" ] ], [ [ "If the column names are different in each object, you can specify them separately:", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({'1_key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],\n 'data1': range(7)})\ndf2 = pd.DataFrame({'2_key': ['a', 'b', 'd'],\n 'data2': range(3)})", "_____no_output_____" ], [ "pd.merge(df1,df2,left_on='1_key',right_on='2_key')", "_____no_output_____" ] ], [ [ "You may notice that the 'c' and 'd' values and associated data are missing from theresult. By default merge does an 'inner' join; the keys in the result are the intersec‐tion, or the common set found in both tables. Other possible options are 'left','right', and 'outer'. The outer join takes the union of the keys, combining theeffect of applying both left and right joins:", "_____no_output_____" ], [ "Its just like Sql.Many-to-many joins form the Cartesian product of the rows. ", "_____no_output_____" ] ], [ [ "pd.merge(df1,df2,left_on='1_key',right_on='2_key',how='outer')", "_____no_output_____" ], [ "pd.merge(df1,df2,left_on='1_key',right_on='2_key',how='right')", "_____no_output_____" ] ], [ [ "To determine which key combinations will appear in the result depending on thechoice of merge method, think of the multiple keys as forming an array of tuples tobe used as a single join key (even though it’s not actually implemented that way)", "_____no_output_____" ], [ "When you’re joining columns-on-columns, the indexes on thepassed DataFrame objects are discarded", "_____no_output_____" ], [ "A last issue to consider in merge operations is the treatment of overlapping columnnames. While you can address the overlap manually (see the earlier section onrenaming axis labels), merge has a suffixes option for specifying strings to appendto overlapping names in the left and right DataFrame objects\n\n pd.merge(left, right, on='key1', suffixes=('_left', '_right'))", "_____no_output_____" ], [ "<b>Merging on Index", "_____no_output_____" ], [ "In some cases, the merge key(s) in a DataFrame will be found in its index. 
In this case, you can pass left_index=True or right_index=True (or both) to indicate that the index should be used as the merge key:", "_____no_output_____" ] ], [ [ "left1 = pd.DataFrame({'key': ['a', 'b', 'a', 'a', 'b', 'c'],\n 'value': range(6)})\n\nright1 = pd.DataFrame({'group_val': [3.5, 7]},\n index=['a', 'b'])\n\nleft1", "_____no_output_____" ], [ "right1", "_____no_output_____" ], [ "pd.merge(left1, right1, left_on='key', right_index=True)", "_____no_output_____" ] ], [ [ "With hierarchically indexed data, things are more complicated, as joining on index is implicitly a multiple-key merge:", "_____no_output_____" ] ], [ [ "lefth = pd.DataFrame({'key1': ['Ohio', 'Ohio', 'Ohio',\n 'Nevada', 'Nevada'], \n 'key2': [2000, 2001, 2002, 2001, 2002],\n 'data': np.arange(5.)})\n\nrighth = pd.DataFrame(np.arange(12).reshape((6, 2)),\n index=[['Nevada', 'Nevada', 'Ohio', 'Ohio','Ohio', 'Ohio'],\n [2001, 2000, 2000, 2000, 2001, 2002]]\n ,columns=['event1', 'event2'])\n\nlefth", "_____no_output_____" ], [ "righth", "_____no_output_____" ] ], [ [ "In this case, you have to indicate multiple columns to merge on as a list (note the handling of duplicate index values with how='outer'):", "_____no_output_____" ] ], [ [ "pd.merge(lefth, righth, left_on=['key1', 'key2'], right_index=True)", "_____no_output_____" ] ], [ [ "DataFrame has a convenient join instance for merging by index. It can also be used to combine together many DataFrame objects having the same or similar indexes but non-overlapping columns.\n\n    left.join(right, how='outer')\n    ", "_____no_output_____" ], [ "DataFrame’s join method performs a left join on the join keys, exactly preserving the left frame’s row index. It also supports joining the index of the passed DataFrame on one of the columns of the calling DataFrame:\n\n    left.join(right, on='key')", "_____no_output_____" ], [ "Lastly, for simple index-on-index merges, you can pass a list of DataFrames to join as an alternative to using the more general concat function described in the next section.\n\n    left2.join([right2, another])\n\n", "_____no_output_____" ], [ "<b>Concatenating Along an Axis", "_____no_output_____" ] ], [ [ "arr = np.arange(12).reshape((3, 4))", "_____no_output_____" ], [ "np.concatenate([arr, arr], axis=1)", "_____no_output_____" ] ], [ [ "In the context of pandas objects such as Series and DataFrame, having labeled axes enables you to further generalize array concatenation.\n\nIn particular, you have a number of additional things to think about:\n\n    • If the objects are indexed differently on the other axes, should we combine the \n    distinct elements in these axes or use only the shared values (the intersection)?\n\n    • Do the concatenated chunks of data need to be identifiable in the resulting object?\n\n    • Does the “concatenation axis” contain data that needs to be preserved? In many cases, \n    the default integer labels in a DataFrame are best discarded during concatenation.", "_____no_output_____" ] ], [ [ "s1 = pd.Series([0, 1], index=['a', 'b'])\ns2 = pd.Series([2, 3, 4], index=['c', 'd', 'e'])\ns3 = pd.Series([5, 6], index=['f', 'g'])", "_____no_output_____" ], [ "pd.concat([s1, s2, s3])", "_____no_output_____" ], [ "pd.concat([s1, s2, s3],axis=1)", "_____no_output_____" ], [ "s4=pd.concat([s1,s3])\ns4", "_____no_output_____" ], [ "pd.concat([s4, s3],axis=1,join='inner')", "_____no_output_____" ] ], [ [ "A potential issue is that the concatenated pieces are not identifiable in the result.
Suppose instead you wanted to create a hierarchical index on the concatenation axis. To do this, use the keys argument.\n\n", "_____no_output_____" ] ], [ [ "result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'])\nresult", "_____no_output_____" ] ], [ [ "In the case of combining Series along axis=1, the keys become the DataFrame column headers:", "_____no_output_____" ] ], [ [ "result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'],axis=1)\nresult", "_____no_output_____" ] ], [ [ "In the case of combining Series along axis=1, the keys become the DataFrame column headers:", "_____no_output_____" ], [ "The same logic extends to DataFrame objects:", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],\n columns=['one', 'two'])\ndf2 = pd.DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],\n columns=['three', 'four'])", "_____no_output_____" ], [ "pd.concat([df1, df2], axis=1, keys=['level1', 'level2'])", "_____no_output_____" ] ], [ [ "A last consideration concerns DataFrames in which the row index does not contain any relevant data:\n\nIn this case, you can pass ignore_index=True:\n\n    pd.concat([df1, df2], ignore_index=True)", "_____no_output_____" ], [ "![Screenshot_2020-07-06%20Python%20for%20Data%20Analysis%20-%20Python%20for%20Data%20Analysis,%202nd%20Edition%20pdf.png](attachment:Screenshot_2020-07-06%20Python%20for%20Data%20Analysis%20-%20Python%20for%20Data%20Analysis,%202nd%20Edition%20pdf.png)", "_____no_output_____" ], [ "![Screenshot_2020-07-06%20Python%20for%20Data%20Analysis%20-%20Python%20for%20Data%20Analysis,%202nd%20Edition%20pdf%281%29.png](attachment:Screenshot_2020-07-06%20Python%20for%20Data%20Analysis%20-%20Python%20for%20Data%20Analysis,%202nd%20Edition%20pdf%281%29.png)", "_____no_output_____" ], [ "<b>Combining Data with Overlap", "_____no_output_____" ], [ "There is another data combination situation that can’t be expressed as either a merge or concatenation operation. You may have two datasets whose indexes overlap in full or part. As a motivating example, consider NumPy’s where function, which performs the array-oriented equivalent of an if-else expression:", "_____no_output_____" ] ], [ [ "a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],\n index=['f', 'e', 'd', 'c', 'b', 'a'])\nb = pd.Series(np.arange(len(a), dtype=np.float64),\n index=['f', 'e', 'd', 'c', 'b', 'a'])\na", "_____no_output_____" ], [ "b[-1] = np.nan\nb", "_____no_output_____" ] ], [ [ "Series has a combine_first method, which performs the equivalent of this operation along with pandas’s usual data alignment logic:", "_____no_output_____" ] ], [ [ "b[:-2].combine_first(a[2:])", "_____no_output_____" ] ], [ [ "With DataFrames, combine_first does the same thing column by column, so you can think of it as “patching” missing data in the calling object with data from the object you pass:", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({'a': [1., np.nan, 5., np.nan],\n 'b': [np.nan, 2., np.nan, 6.],\n 'c': range(2, 18, 4)})\n\ndf2 = pd.DataFrame({'a': [5., 4., np.nan, 3., 7.],\n 'b': [np.nan, 3., 4., 6., 8.]})\n\ndf1", "_____no_output_____" ], [ "df2", "_____no_output_____" ], [ "df1.combine_first(df2)", "_____no_output_____" ] ] ]
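The suffixes option in the notebook above appears only as an inline snippet; a self-contained sketch (the left and right frames are invented here) of what it produces:

    import pandas as pd

    left = pd.DataFrame({'key1': ['foo', 'bar'], 'value': [1, 2]})
    right = pd.DataFrame({'key1': ['foo', 'bar'], 'value': [3, 4]})
    # Overlapping 'value' columns get the given suffixes instead of the default _x/_y:
    pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
    # -> columns: key1, value_left, value_right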
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a4b05be6eda192e3f4a2c61ad44f8fc6c425539
39,503
ipynb
Jupyter Notebook
Python Dictionaries.ipynb
ValRCS/RCS_Data_Analysis_Python_09_2019
70a69480ccabc7276ab787d509d2f039a49716c7
[ "MIT" ]
2
2019-09-13T11:48:33.000Z
2019-09-20T12:36:51.000Z
Python_Core/Python Dictionaries.ipynb
ValRCS/LU_PySem_2019
404026503130c6b26be97693326d3ce51c5a4469
[ "MIT" ]
9
2020-01-28T23:01:39.000Z
2022-01-13T01:40:38.000Z
Python_Core/Python Dictionaries.ipynb
ValRCS/RCS_Data_Analysis_Python_11_2019
cca85c9726b3dfe7de1efa374bb87f6efadd4150
[ "MIT" ]
2
2019-09-20T12:33:03.000Z
2020-01-20T08:58:18.000Z
25.065355
689
0.427284
[ [ [ "# Python Dictionaries\n", "_____no_output_____" ], [ "## Dictionaries\n\n* Collection of Key - Value pairs\n* also known as associative array\n* unordered\n* keys unique in one dictionary\n* storing, extracting\n", "_____no_output_____" ] ], [ [ "emptyd = {}\nlen(emptyd)", "_____no_output_____" ], [ "type(emptyd)", "_____no_output_____" ], [ "tel = {'jack': 4098, 'sape': 4139}\nprint(tel)\ntel['guido'] = 4127\nprint(tel.keys())\nprint(tel.values())", "{'jack': 4098, 'sape': 4139}\ndict_keys(['jack', 'sape', 'guido'])\ndict_values([4098, 4139, 4127])\n" ], [ "# add key 'valdis' with value 4127 to our tel dictionary\ntel['valdis'] = 4127\ntel", "_____no_output_____" ], [ "#get value from key in dictionary\n# very fast even in large dictionaries! O(1)\ntel['jack']", "_____no_output_____" ], [ "tel['sape'] = 54545", "_____no_output_____" ], [ "# remove key value pair\ndel tel['sape']", "_____no_output_____" ], [ "tel['sape']", "_____no_output_____" ], [ "'valdis' in tel.keys()", "_____no_output_____" ], [ "'karlis' in tel.keys()", "_____no_output_____" ], [ "# this will be slower going through all the key:value pairs\n4127 in tel.values()", "_____no_output_____" ], [ "type(tel.values())", "_____no_output_____" ], [ "dir(tel.values())", "_____no_output_____" ], [ "tel['irv'] = 4127", "_____no_output_____" ], [ "tel", "_____no_output_____" ], [ "list(tel.keys())", "_____no_output_____" ], [ "list(tel.values())", "_____no_output_____" ], [ "sorted([5,7,1,66], reverse=True)", "_____no_output_____" ], [ "?sorted", "_____no_output_____" ], [ "tel.keys()", "_____no_output_____" ], [ "sorted(tel.keys())", "_____no_output_____" ], [ "'guido' in tel", "_____no_output_____" ], [ "'Valdis' in tel", "_____no_output_____" ], [ "'valdis' in tel", "_____no_output_____" ], [ "# alternative way of creating a dictionary using tuples ()\nt2=dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])\nprint(t2)", "{'sape': 4139, 'guido': 4127, 'jack': 4098}\n" ], [ "names = ['Valdis', 'valdis', 'Antons', 'Anna', 'Kārlis', 'karlis']\nnames", "_____no_output_____" ], [ "sorted(names)", "_____no_output_____" ] ], [ [ "* `globals()` always returns the dictionary of the module namespace\n* `locals()` always returns a dictionary of the current namespace\n* `vars()` returns either a dictionary of the current namespace (if called with no argument) or the dictionary of the argument.", "_____no_output_____" ] ], [ [ "globals()", "_____no_output_____" ], [ "'print(a,b)' in globals()['In']", "_____no_output_____" ], [ "vars().keys()", "_____no_output_____" ], [ "sorted(vars().keys())", "_____no_output_____" ], [ "# return value of the key AND destroy the key:value\n# if key does not exist, then KeyError will appear\ntel.pop('valdis')", "_____no_output_____" ], [ "# return value of the key AND destroy the key:value\n# if key does not exist, then KeyError will appear\ntel.pop('valdis')", "_____no_output_____" ], [ "# we can store anything in dictionaries \n# including other dictionaries and lists\nmydict = {'mylist':[1,2,6,6,\"Badac\"], 55:165, 'innerd':{'a':100,'b':[1,2,6]}}\nmydict", "_____no_output_____" ], [ "mydict.keys()", "_____no_output_____" ], [ "# we can use numeric keys as well!\nmydict[55]", "_____no_output_____" ], [ "mydict['55'] = 330", "_____no_output_____" ], [ "mydict", "_____no_output_____" ], [ "mlist = mydict['mylist']\nmlist", "_____no_output_____" ], [ "mytext = mlist[-1]\nmytext", "_____no_output_____" ], [ "mychar = mytext[-3]\nmychar", "_____no_output_____" ], [ "# get letter 
d\nmydict['mylist'][-1][-3]", "_____no_output_____" ], [ "mydict['mylist'][-1][2]", "_____no_output_____" ], [ "mlist[-1][2]", "_____no_output_____" ], [ "mydict['real55'] = mydict[55]", "_____no_output_____" ], [ "del mydict[55]", "_____no_output_____" ], [ "mydict", "_____no_output_____" ], [ "sorted(mydict.keys())", "_____no_output_____" ], [ "mydict.get('55')", "_____no_output_____" ], [ "# we get None on nonexisting key instead of KeyError\nmydict.get('53253242452')", "_____no_output_____" ], [ "# here we will get KeyError on nonexisting key\nmydict['53253242452']", "_____no_output_____" ], [ "mydict.get(\"badkey\") == None", "_____no_output_____" ], [ "k,v = mydict.popitem()\nk,v", "_____no_output_____" ], [ "# update for updating multiple dictionary values at once\nmydict.update({'a':[1,3,'valdis',5],'anotherkey':567})\nmydict", "_____no_output_____" ], [ "mydict.setdefault('b', 3333)\nmydict", "_____no_output_____" ], [ "# change dictionary key value pair ONLY if key does not exist\nmydict.setdefault('a', 'aaaaaaaa')\nmydict", "_____no_output_____" ], [ "# here we overwite no matter what\nmydict['a'] = 'changed a value'\nmydict", "_____no_output_____" ], [ "# and we clear our dictionary\nmydict.clear()\nmydict", "_____no_output_____" ], [ "type(mydict)", "_____no_output_____" ], [ "mydict = 5\ntype(mydict)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4b06ea866bc86265f55089a29053bf9a257eed
2,971
ipynb
Jupyter Notebook
XML Parsing.ipynb
jjsloboda/notebooks
ee17f9a3ac1603bf1aafc6ac1d8c95dd17b6027b
[ "MIT" ]
null
null
null
XML Parsing.ipynb
jjsloboda/notebooks
ee17f9a3ac1603bf1aafc6ac1d8c95dd17b6027b
[ "MIT" ]
null
null
null
XML Parsing.ipynb
jjsloboda/notebooks
ee17f9a3ac1603bf1aafc6ac1d8c95dd17b6027b
[ "MIT" ]
null
null
null
24.553719
103
0.498485
[ [ [ "xml_doc = \"\"\"<Document>\n<SelfClosingTag\n attr1=\"val1\"\n attr2=\"val2\"\n attr3=\"val3\"\n />\n</Document>\n\"\"\"", "_____no_output_____" ], [ "other_xml_doc = \"\"\"<Officers>\n <RepNo>1234</RepNo>\n <OrgPermID>23415546</OrgPermID>\n <CompanyName>Fake Corp</CompanyName>\n <Production Date=\"2021-09-06T10:00:00\" />\n <OfficersInfo>\n <Officer ID=\"1234\" Status=\"Either\" Rank=\"6\" Active=\"1\" OfficerPermID=\"5432\">\n <Person ID=\"56432\" Active=\"1\" PersonPermID=\"664534256\" />\n <Submission Type=\"ASDF 53246\" Year=\"2020\" Month=\"07\" Day=\"12\" />\n <Submission Type=\"GFD 8765\" Year=\"2020\" Month=\"08\" Day=\"18\" />\n </Officer>\n </OfficersInfo>\n</Officers>\n\"\"\"", "_____no_output_____" ], [ "#from lxml import etree\nimport xml.etree.ElementTree as ET\nfrom io import StringIO\n\nparser = ET.XMLParser()\ntree = ET.XML(other_xml_doc, parser)\n#etree.tostring(tree.getroot(), pretty_print=True, method=\"xml\")\n\n[e.items() for e in tree.findall(\"OfficersInfo/Officer/Submission\")]\n\nfrom dataclasses import dataclass\n\n@dataclass\nclass Submission:\n Type: str\n Year: int\n Month: int\n Day: int\n \n[Submission(**dict(e.items())) for e in tree.findall(\"OfficersInfo/Officer/Submission\")]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
4a4b0ec19d38e1f00a4e8beff4bcc8a129d8243c
946,588
ipynb
Jupyter Notebook
resources/notebooks/softmax/.ipynb_checkpoints/01_intro-checkpoint.ipynb
COHRINT/cops_and_robots
1df99caa1e38bde1b5ce2d04389bc232a68938d6
[ "Apache-2.0" ]
3
2016-01-19T17:54:51.000Z
2019-10-21T12:09:03.000Z
resources/notebooks/softmax/.ipynb_checkpoints/01_intro-checkpoint.ipynb
COHRINT/cops_and_robots
1df99caa1e38bde1b5ce2d04389bc232a68938d6
[ "Apache-2.0" ]
null
null
null
resources/notebooks/softmax/.ipynb_checkpoints/01_intro-checkpoint.ipynb
COHRINT/cops_and_robots
1df99caa1e38bde1b5ce2d04389bc232a68938d6
[ "Apache-2.0" ]
5
2015-02-19T02:53:24.000Z
2019-03-05T20:29:12.000Z
86.572892
830
0.829442
[ [ [ "# Chapter 1 - Softmax from First Principles", "_____no_output_____" ], [ "## Language barriers between humans and autonomous systems\n\nIf our goal is to help humans and autnomous systems communicate, we need to speak in a common language. Just as humans have verbal and written languages to communicate ideas, so have we developed mathematical languages to communicate information. Probability is one of those languages and, thankfully for us, autonomous systems are pretty good at describing probabilities, even if humans aren't. This document shows one technique for translating a human language (English) into a language known by autonomous systems (probability).\n\nOur translator is something called the **SoftMax classifier**, which is one type of probability distribution that takes discrete labels and translates them to probabilities. We'll show you the details on how to create a softmax model, but let's get to the punchline first: we can decompose elements of human language to represent a partitioning of arbitrary state spaces. \n\nSay, for instance, we'd like to specify the location of an object in two dimensional cartesian coordinates. Our state space is all combinations of *x* and *y*, and we'd like to translate human language into some probability that our target is at a given combination of *x* and *y*. One common tactic humans use to communicate position is range (near, far, next to, etc.) and bearing (North, South, SouthEast, etc.). This already completely partitions our *xy* space: if something is north, it's not south; if it's east, it's not west; and so on.\n\nA softmax model that translates range and bearing into probability in a state space is shown below:\n\n<img src=\"https://raw.githubusercontent.com/COHRINT/cops_and_robots/master/notebooks/softmax/img/softmax_range_bearing.png\" alt=\"Softmax range and bearing\" width=500px>\n\nAssuming that *next to* doesn't require a range, we see seventeen different word combinations we can use to describe something's position: two ranges (*nearby* and *far*) for each cardinal and intercardinal direction (eight total), and then one extra label for *next to*. This completely partitions our entire state space $\\mathbb{R}^2$.\n\nThis range and bearing language is, by its nature, inexact. If I say, \"That boat is far north.\", you don't have a deterministic notion of exactly where the boat is -- but you have a good sense of where it is, and where it is not. We can represent that sense probabilistically, such that the probability of a target existing at a location described by a range and bearing label is nonzero over the entire state space, but that probability is very small if not in the area most associated with that label.\n\nWhat do we get from this probabilistic interpretation of the state space? We get a two-way translation between humans and autonomous systems to describe anything we'd like. If our state space is one-dimensional relative velocity (i.e. the derivative of range without bearing), I can say, \"She's moving really fast!\", to give the autonomous system a probability distribution over my target's velocity with an expected value of, say, 4 m/s. 
[ "## Softmax model construction\nThe [SoftMax function](http://en.wikipedia.org/wiki/Softmax_function) goes by many names: normalized exponential, multinomial logistic function, log-linear model, sigmoidal function. We use the SoftMax function to develop a classification model for our state space:\n\n$$\n\begin{equation}\nP(L=i \vert \mathbf{x}) = \frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}}\n\end{equation}\n$$\n\nWhere $L = i$ is our random variable of class labels instantiated as class $i$, $\mathbf{x}$ is our state vector, $\mathbf{w}_i$ is a vector of parameters (or weights) associated with our class $i$, $b_i$ is a bias term for class $i$, and $M$ is the total number of classes.\n\nThe terms *label* and *class* require some distinction: a label is a set of words associated with a class (i.e. *far northwest*) whereas a class is a probability distribution over the entire state space. They are sometimes used interchangeably, and the specific meaning should be clear from context.\n\nSeveral key factors come out of the SoftMax equation:\n- The probabilities of all classes for any given point $\mathbf{x}$ sum to 1.\n- The probability of any single class for any given point $\mathbf{x}$ is bounded by 0 and 1.\n- The space can be partitioned into an arbitrary number of classes (with some restrictions about those classes - more on this later).\n- The probability of one class for a given point $\mathbf{x}$ is determined by that class' weighted exponential sum of the state vector *relative* to the weighted exponential sums of *all* classes.\n- Since the probability of a class is conditioned on $\mathbf{x}$, we can apply estimators such as [Maximum Likelihood](http://en.wikipedia.org/wiki/Maximum_likelihood) to learn SoftMax models.\n- $P(L=i \vert \mathbf{x})$ is convex in $\mathbf{w_i}$ for any $\mathbf{x}$.\n\nLet's try to get some intuition about this setup. For a two-dimensional case with state $\mathbf{x} = \begin{bmatrix}x & y\end{bmatrix}^T$, each class $i$ has weights $\mathbf{w}_i = \begin{bmatrix}w_{i,x} & w_{i,y}\end{bmatrix}^T$. Along with the constant bias term $b_i$, we have one weighted linear function of $x$ and one weighted linear function of $y$. Each class's probability is normalized with respect to the sum of all other classes, so the weights can be seen as a relative scaling of one class over another in any given state. The bias weight increases a class's probability in all cases, the $x$ weight increases the class's probability for greater values of $x$ (and positive weights), and the $y$ weight, naturally, increases the class's probability for greater values of $y$ (and positive weights).\n\nWe can get fancy with our state space, having states of the form $\mathbf{x} = \begin{bmatrix}x & y & x^2 & y^2 & 2xy\end{bmatrix}^T$, but we'll build up to states like that. Let's look at some simpler concepts first.", "_____no_output_____" ],
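[ "Before moving on, here is a minimal NumPy sketch of the equation above (the function and variable names are ours, and subtracting the maximum activation is a standard numerical-stability trick, not part of the math):\n\n```python\nimport numpy as np\n\ndef softmax_probs(W, b, x):\n    # one weighted linear activation per class: w_k^T x + b_k\n    a = np.dot(W, x) + b\n    e = np.exp(a - a.max())  # shift by the max for numerical stability\n    return e / e.sum()       # class probabilities sum to 1\n\n# e.g. four classes over a 2-D state, evaluated at the point (1, 2)\nW = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])\nb = np.zeros(4)\nprint(softmax_probs(W, b, np.array([1., 2.])))\n```\n", "_____no_output_____" ],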
[ "## Class boundaries\n\nFor any two classes, we can take the ratio of their probabilities to determine the **odds** of one class instead of the other:\n\n$$\nL(i,j) =\frac{P(L=i \vert \mathbf{x})}{P(L=j \vert \mathbf{x})} = \n\frac{\frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}}}{\frac{e^{\mathbf{w}_j^T \mathbf{x} + b_{j}}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}}} = \frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{e^{\mathbf{w}_j^T\mathbf{x} + b_j}}\n$$\n\nWhen $L(i,j)=1$, the two classes have equal probability. This doesn't give us a whole lot of insight until we take the **log-odds** (the logarithm of the odds):\n\n$$\n\begin{align}\nL_{log}(i,j) &=\n\log{\frac{P(L=i \vert \mathbf{x})}{P(L=j \vert \mathbf{x})}} \n= \log{\frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{e^{\mathbf{w}_j^T\mathbf{x} + b_j}}} \n= (\mathbf{w}_i^T\mathbf{x} + b_i)- (\mathbf{w}_j^T\mathbf{x} + b_j) \\\n&= (\mathbf{w}_i - \mathbf{w}_j)^T\mathbf{x} + (b_i - b_j)\n\end{align}\n$$\n\nWhen $L_{log}(i,j) = \log{L(i,j)} = \log{1} = 0$, we have equal probability between the two classes, and we've also stumbled upon the equation for an n-dimensional affine hyperplane dividing the two classes:\n\n$$\n\begin{align}\n0 &= (\mathbf{w}_i - \mathbf{w}_j)^T\mathbf{x} + (b_i - b_j) \\\n &= (w_{i,x_1} - w_{j,x_1})x_1 + (w_{i,x_2} - w_{j,x_2})x_2 + \dots + (w_{i,x_n} - w_{j,x_n})x_n + (b_i - b_j)\n\end{align}\n$$\n\nThis follows from the general definition of an <a href=\"http://en.wikipedia.org/wiki/Plane_(geometry)#Point-normal_form_and_general_form_of_the_equation_of_a_plane\">Affine Hyperplane</a> (that is, an n-dimensional flat plane):\n\n$$\na_1x_1 + a_2x_2 + \dots + a_nx_n + b = 0\n$$\n\nWhere $a_1 = w_{i,x_1} - w_{j,x_1}$, $a_2 = w_{i,x_2} - w_{j,x_2}$, and so on. This gives us a general formula for the division of class boundaries -- that is, we can specify the class boundaries directly, rather than specifying the weights leading to those class boundaries.", "_____no_output_____" ],
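[ "A short hedged sketch of that boundary computation (the two weight vectors below are illustrative placeholders, not taken from a fitted model):\n\n```python\nimport numpy as np\n\n# Boundary between classes i and j: (w_i - w_j)^T x + (b_i - b_j) = 0\nw_i, b_i = np.array([1., 1.]), 0.0\nw_j, b_j = np.array([-1., 1.]), 0.0\na = w_i - w_j  # normal vector of the separating hyperplane\nc = b_i - b_j  # offset\n# Here a = [2, 0] and c = 0, so the two classes are equally\n# probable exactly on the line x = 0.\nprint(a, c)\n```\n", "_____no_output_____" ], [ "### Example\nLet's take a step back and look at an example. Suppose I'm playing Pac-Man, and I want to warn our eponymous hero of a ghost approaching him. Let's restrict my language to the four intercardinal directions: NE, SE, SW and NW. 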
My state space is $\\mathbf{x} = \\begin{bmatrix}x & y\\end{bmatrix}^T$ (one term for each cartesian direction in $\\mathbb{R}^2$).\n\n<img src=\"https://raw.githubusercontent.com/COHRINT/cops_and_robots/master/notebooks/softmax/img/pacman.png\" alt=\"Pacman with intercardinal bearings\" width=\"500px\">\n\nIn this simple problem, we can expect our weights to be something along the lines of:\n\n$$\n\\begin{align}\n\\mathbf{w}_{SW} &= \\begin{bmatrix}-1 & -1 \\end{bmatrix}^T \\\\\n\\mathbf{w}_{NW} &= \\begin{bmatrix}-1 & 1 \\end{bmatrix}^T \\\\\n\\mathbf{w}_{SE} &= \\begin{bmatrix}1 & -1 \\end{bmatrix}^T \\\\\n\\mathbf{w}_{NE} &= \\begin{bmatrix}1 & 1 \\end{bmatrix}^T \\\\\n\\end{align}\n$$\n\nIf we run these weights in our SoftMax model, we get the following results:", "_____no_output_____" ] ], [ [ "# See source at: https://github.com/COHRINT/cops_and_robots/blob/master/src/cops_and_robots/robo_tools/fusion/softmax.py\nimport numpy as np\nfrom cops_and_robots.robo_tools.fusion.softmax import SoftMax\n%matplotlib inline\n\nlabels = ['SW', 'NW', 'SE',' NE']\nweights = np.array([[-1, -1],\n [-1, 1],\n [1, -1],\n [1, 1],\n ])\npacman = SoftMax(weights, class_labels=labels)\npacman.plot(title='Unshifted Pac-Man Bearing Model')", "_____no_output_____" ] ], [ [ "Which is along the right path, but needs to be shifted down to Pac-Man's location. Say Pac-Man is approximately one quarter of the map south from the center point, we can bias our model accordingly (assuming a $10m \\times 10m$ space):\n\n$$\n\\begin{align}\nb_{SW} &= -2.5\\\\\nb_{NW} &= 2.5\\\\\nb_{SE} &= -2.5\\\\\nb_{NE} &= 2.5\\\\\n\\end{align}\n$$\n", "_____no_output_____" ] ], [ [ "biases = np.array([-2.5, 2.5, -2.5, 2.5,])\npacman = SoftMax(weights, biases, class_labels=labels)\npacman.plot(title='Y-Shifted Pac-Man Bearing Model')", "_____no_output_____" ] ], [ [ "Looking good! Note that we'd get the same answer had we used the following weights:\n\n$$\n\\begin{align}\nb_{SW} &= -5\\\\\nb_{NW} &= 0\\\\\nb_{SE} &= -5\\\\\nb_{NE} &= 0\\\\\n\\end{align}\n$$\nBecause the class boundaries and probability distributions are defined by the *relative differences*.\n\nBut this simply shifts the weights in the $y$ direction. How do we go about shifting weights in any state dimension? \n\nRemember that our biases will essentially scale an entire class, so, what we did was scale up the two classes that have a positive scaling for negative $y$ values. If we want to place the center of the four classes in the top-left, for instance, we'll want to bias the NW class less than the other classes. \n\nLet's think of what happens if we use another coordinate system:\n\n$$\n\\mathbf{x}' = \\mathbf{x} + \\mathbf{b}\n$$\n\nWhere $\\mathbf{x}'$ is our new state vector and $\\mathbf{b}$ are offsets to each state in our original coordinate frame (assume the new coordinate system is unbiased). For example, something like:\n\n$$\n\\mathbf{x}' = \\begin{bmatrix}x & y\\end{bmatrix}^T + \\begin{bmatrix}2 & -3\\end{bmatrix}^T = \\begin{bmatrix}x + 2 & y -3\\end{bmatrix}^T\n$$ \n\nCan we represent this shift simply by adjusting our biases, instead of having to redefine our state vector? Assuming we're just shifting the distributions, the probabilities, and thus, the hyperplanes, will simply be shifted as well, so we have:\n\n$$\n0 = (\\mathbf{w}_i - \\mathbf{w}_j)^T \\mathbf{x}' = (\\mathbf{w}_i - \\mathbf{w}_j)^T \\mathbf{x} + (\\mathbf{w}_i - \\mathbf{w}_j)^T \\mathbf{b}\n$$\n\nWhich retains our original state and shifts only our biases. 
If we distribute the offset $\\mathbf{b}$, we can define each class's bias term:\n\n$$\n\\begin{align}\nb_i - b_j &= (\\mathbf{w}_i - \\mathbf{w}_j)^T \\mathbf{b} \\\\\n&= \\mathbf{w}_i^T \\mathbf{b} - \\mathbf{w}_j^T \\mathbf{b}\n\\end{align}\n$$\n\nOur bias for each class $i$ in our original coordinate frame is simply $\\mathbf{w}_i^T \\mathbf{b}$.\n\nLet's try this out with $\\mathbf{b} = \\begin{bmatrix}2 & -3\\end{bmatrix}^T$ (remembering that this will push the shifted origin negatively along the x-axis and positively along the y-axis):\n\n$$\n\\begin{align}\nb_{SW} &= \\begin{bmatrix}-1 & -1 \\end{bmatrix} \\begin{bmatrix}2 \\\\ -3\\end{bmatrix} = 1\\\\\nb_{NW} &= \\begin{bmatrix}-1 & 1 \\end{bmatrix} \\begin{bmatrix}2 \\\\ -3\\end{bmatrix} =-5 \\\\\nb_{SE} &= \\begin{bmatrix}1 & -1 \\end{bmatrix} \\begin{bmatrix}2 \\\\ -3\\end{bmatrix} = 5\\\\\nb_{NE} &= \\begin{bmatrix}1 & 1 \\end{bmatrix} \\begin{bmatrix}2 \\\\ -3\\end{bmatrix} = -1 \\\\\n\\end{align}\n$$", "_____no_output_____" ] ], [ [ "biases = np.array([1, -5, 5, -1,])\npacman = SoftMax(weights, biases, class_labels=labels)\npacman.plot(title='Shifted Pac-Man Bearing Model')", "_____no_output_____" ] ], [ [ "One other thing we can illustrate with this example: how would the SoftMax model change if we multiplied all our weights and biases by 10? \nWe get:", "_____no_output_____" ] ], [ [ "weights = np.array([[-10, -10],\n [-10, 10],\n [10, -10],\n [10, 10],\n ])\nbiases = np.array([10, -50, 50, -10,])\npacman = SoftMax(weights, biases, class_labels=labels)\npacman.plot(title='Steep Pac-Man Bearing Model')", "_____no_output_____" ] ], [ [ "Why does this increase in slope happen? Let's investigate.", "_____no_output_____" ], [ "## SoftMax slope for linear states\n\nThe [gradient](http://en.wikipedia.org/wiki/Gradient) of $P(L=i \\vert \\mathbf{x})$ will give us a function for the slope of our SoftMax model of class $i$. For a linear state space, such as our go-to $\\mathbf{x} = \\begin{bmatrix}x & y\\end{bmatrix}$, our gradient is defined as:\n\n$$\n\\nabla P(L=i \\vert \\mathbf{x}) = \\nabla \\frac{e^{\\mathbf{w}_i^T \\mathbf{x} + b_i}}{\\sum_{k=1}^M e^{\\mathbf{w}_k^T\\mathbf{x} + b_k}} = \n\\frac{\\partial}{\\partial x} \\frac{e^{\\mathbf{w}_i^T \\mathbf{x}}}{\\sum_{k=1}^M e^{\\mathbf{w}_k^T\\mathbf{x}}} \\mathbf{\\hat{i}} +\n\\frac{\\partial}{\\partial y} \\frac{e^{\\mathbf{w}_i^T \\mathbf{x}}}{\\sum_{k=1}^M e^{\\mathbf{w}_k^T\\mathbf{x}}} \\mathbf{\\hat{j}}\n$$\n\nWhere $\\mathbf{\\hat{i}}$ and $\\mathbf{\\hat{j}}$ are unit vectors in the $x$ and $y$ dimensions, respectively. 
Given the structure of our equation, the form of either partial derivative will be the same as the other, so let's look at the partial with respect to $x$, using some abused notation:\n\n$$\n\\begin{align}\n\\frac{\\partial P(L = i \\vert \\mathbf{x})} {\\partial x} &= \\frac{d P(L = i \\vert x)} {dx} = \n\\frac{\\partial}{\\partial x} \\frac{e^{w_{i,x}x}}{\\sum_{k=1}^M e^{w_{k,x}x}} \\\\\n&= \\frac{w_{i,x}e^{w_{i,x}x}\\sum_{k=1}^M e^{w_{k,x}x} - e^{w_{i,x}x}(\\sum_{k=1}^M w_{k,x}e^{w_{k,x}x})}{(\\sum_{k=1}^M e^{w_{k,x}x})^2} \\\\\n&= \\frac{w_{i,x}e^{w_{i,x}x}\\sum_{k=1}^M e^{w_{k,x}x}}{(\\sum_{k=1}^M e^{w_{k,x}x})^2} - \n\\frac{e^{w_{i,x}x}(\\sum_{k=1}^M w_{k,x}e^{w_{k,x}x})}{(\\sum_{k=1}^M e^{w_{k,x}x})^2}\\\\\n&= w_{i,x} \\left( \\frac{e^{w_{i,x}x}}{\\sum_{k=1}^M e^{w_{k,x}x}}\\right) - \n\\left( \\frac{e^{w_{i,x}x}}{\\sum_{k=1}^M e^{w_{k,x}x}}\\right)\\frac{\\sum_{k=1}^M w_{k,x}e^{w_{k,x}x}}{\\sum_{k=1}^M e^{w_{k,x}x}}\\\\\n& = P(L = i \\vert x) \\left(w_{i,x} - \\frac{\\sum_{k=1}^M w_{k,x}e^{w_{k,x}x}}{\\sum_{k=1}^M e^{w_{k,x}x}}\\right) \\\\ \n& = P(L = i \\vert x) \\left(w_{i,x} - \\sum_{k=1}^M w_{k,x}P(L = k \\vert x) \\right) \\\\ \n\\end{align}\n$$\n\nWhere line 2 was found using the quotient rule. This is still hard to interpret, so let's break it down into multiple cases:\n\nIf $P(L = i \\vert x) \\approx 1$, the remaining probabilities are near zero, thus reducing the impact of their weights, leaving:\n\n$$\n\\frac{\\partial P(L = i \\vert \\mathbf{x})} {\\partial x}\n\\approx P(L = i \\vert x) \\left(w_{i,x} - w_{i,x}P(L = i \\vert x) \\right)\n= 0\n$$\n\nThis makes sense: a dominating probability will be flat.\n\nIf $P(L = i \\vert x) \\approx 0$, we get:\n\n$$\n\\frac{\\partial P(L = i \\vert \\mathbf{x})} {\\partial x}\n\\approx 0 \\left(w_{i,x} - w_{i,x}P(L = i \\vert x) \\right)\n= 0\n$$\n\nThis also makes sense: a diminished probability will be flat.\n\nWe can expect the greatest slope of a [logistic function](http://en.wikipedia.org/wiki/Logistic_function) (which is simply a univariate SoftMax function) to appear at its midpoint $P(L = i \\vert x) = 0.5$. Our maximum slope, then, is:\n\n$$\n\\frac{\\partial P(L = i \\vert \\mathbf{x})} {\\partial x}\n= 0.5 \\left(w_{i,x} - \\sum_{k=1}^M w_{k,x}P(L = k \\vert x) \\right) \\\\ \n= 0.5 \\left(w_{i,x} - \\sum^M _{\\substack{k = 1, \\\\ k \\neq i}} w_{k,x}P(L = k \\vert x) - 0.5w_{i,x}\\right) \\\\\n= 0.25w_{i,x} - 0.5\\sum^M _{\\substack{k = 1, \\\\ k \\neq i}} w_{k,x}P(L = k \\vert x) \\\\\n$$\n\nNOTE: This section feels really rough, and possibly unnecessary. I need to work on it some more.\n", "_____no_output_____" ], [ "## Rotations\n\nJust as we were able to shift our SoftMax distributions to a new coordinate origin, we can apply a [rotation](http://en.wikipedia.org/wiki/Rotation_matrix) to our weights and biases. Let's once again update our weights and biases through a new, rotated, coordinate scheme:\n\n$$\nR(\\theta)\\mathbf{x}' = R(\\theta)(\\mathbf{x} + \\mathbf{b})\n$$\n\nAs before, we examine the case at the linear hyperplane boundaries:\n\n$$\n0 = (\\mathbf{w}_i - \\mathbf{w}_j)^T \\mathbf{x}' = (\\mathbf{w}_i - \\mathbf{w}_j)^T R(\\theta)\\mathbf{x} + (\\mathbf{w}_i - \\mathbf{w}_j)^T R(\\theta) \\mathbf{b}\n$$\n\nOur weights are already defined, so we simply need to multiply them by $R(\\theta)$ to find our rotated weights. 
Let's find our biases:\n\n$$\n\begin{align}\nb_i - b_j &= (\mathbf{w}_i - \mathbf{w}_j)^T R(\theta) \mathbf{b} \\\n&= \mathbf{w}_i^T R(\theta) \mathbf{b} - \mathbf{w}_j^T R(\theta) \mathbf{b}\n\end{align}\n$$\n\nSo, under rotation, $b_i = \mathbf{w}_i^T R(\theta) \mathbf{b}$.\n\nLet's try this with a two-dimensional rotation matrix using $\theta = \frac{\pi}{4} rad$ and $\mathbf{b} = \begin{bmatrix}2 & -3\end{bmatrix}^T$:\n\n$$\n\begin{align}\nb_{SW} &= \begin{bmatrix}-1 & -1 \end{bmatrix}\n\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix} \n\begin{bmatrix}2 \\ -3\end{bmatrix} = -2\sqrt{2} \\\nb_{NW} &= \begin{bmatrix}-1 & 1 \end{bmatrix}\n\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix} \n\begin{bmatrix}2 \\ -3\end{bmatrix} = -3\sqrt{2} \\\nb_{SE} &= \begin{bmatrix}1 & -1 \end{bmatrix}\n\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix} \n\begin{bmatrix}2 \\ -3\end{bmatrix} = 3\sqrt{2} \\\nb_{NE} &= \begin{bmatrix}1 & 1 \end{bmatrix}\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix} \n \begin{bmatrix}2 \\ -3\end{bmatrix} = 2\sqrt{2} \\\n\end{align}\n$$\n", "_____no_output_____" ] ], [ [ "# Define rotation matrix\ntheta = np.pi/4\nR = np.array([[np.cos(theta), -np.sin(theta)],\n              [np.sin(theta), np.cos(theta)]])\n\n# Rotate weights\nweights = np.array([[-1, -1],\n                    [-1, 1],\n                    [1, -1],\n                    [1, 1],\n                    ])\nweights = np.dot(weights,R)\n\n# Apply rotated biases\nbiases = np.array([-2 * np.sqrt(2),\n                   -3 * np.sqrt(2),\n                   3 * np.sqrt(2),\n                   2 * np.sqrt(2),])\n\npacman = SoftMax(weights, biases, class_labels=labels)\npacman.plot(title='Rotated and Shifted Pac-Man Bearing Model')", "_____no_output_____" ] ], [ [ "## Summary\n\nThat should be a basic introduction to the SoftMax model. We've only barely scraped the surface of why you might want to use SoftMax models as a tool for aspects of HRI. \n\nLet's move on to [Chapter 2](02_from_normals.ipynb) where we examine a more practical way of constructing SoftMax distributions.", "_____no_output_____" ] ], [ [ "from IPython.core.display import HTML\n\n# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers\ndef css_styling():\n    styles = open(\"../styles/custom.css\", \"r\").read()\n    return HTML(styles)\ncss_styling()\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a4b20eccf2c072fc5f7bbad8bd41777c7eb7aa0
5,117
ipynb
Jupyter Notebook
models/.ipynb_checkpoints/Model-checkpoint.ipynb
amc-econ/nlp-emerging-technologies
0f584d5e8d290fb4a24f644269722e902bc4a7fd
[ "MIT" ]
null
null
null
models/.ipynb_checkpoints/Model-checkpoint.ipynb
amc-econ/nlp-emerging-technologies
0f584d5e8d290fb4a24f644269722e902bc4a7fd
[ "MIT" ]
null
null
null
models/.ipynb_checkpoints/Model-checkpoint.ipynb
amc-econ/nlp-emerging-technologies
0f584d5e8d290fb4a24f644269722e902bc4a7fd
[ "MIT" ]
null
null
null
22.247826
117
0.512214
[ [ [ "# Model\n\n- **Antoine MATHIEU COLLIN**\n- *KU Leuven*", "_____no_output_____" ], [ "## Introduction", "_____no_output_____" ], [ "In this example, we are interested in wind technologies (CPC code Y02E 10/7) over the period 1990-2000.", "_____no_output_____" ], [ "## 1. Imports", "_____no_output_____" ], [ "### 1.1 Standard libraries", "_____no_output_____" ] ], [ [ "from sqlalchemy import create_engine", "_____no_output_____" ] ], [ [ "### 1.2. Custom modules", "_____no_output_____" ] ], [ [ "import Setup\nimport Parameters as param\nfrom CustomEngineForPatstat import CustomEngineForPATSTAT\nfrom Model import Model", "_____no_output_____" ] ], [ [ "## 2. Loading the data\nTo load the data, we use a custom Engine for PATSTAT.", "_____no_output_____" ], [ "### 2.1. Creation of the custom engine for PATSTAT on a PostgrSQL base", "_____no_output_____" ] ], [ [ "eng = create_engine('postgresql://postgres:[email protected]:5432/patstat2018a')", "_____no_output_____" ], [ "PATSTAT_eng = CustomEngineForPATSTAT(eng)", "---------------------------------------\nCustomEngineForPATSTAT instanciated.\n---------------------------------------\n" ] ], [ [ "## 2.2. Creation of a model", "_____no_output_____" ] ], [ [ "technology_classes = ['Y02E 10/7']\nstart_date = 1990\nend_date = 2020", "_____no_output_____" ], [ "model = Model(custom_engine_for_PATSTAT = PATSTAT_eng,\n technology_classes = technology_classes,\n start_date = start_date,\n end_date = end_date,\n percentage_top_patents = 0.01)", "----------------------------\nInitialisation of the model.\n----------------------------\n" ] ], [ [ "### Setting up the parameters of our query", "_____no_output_____" ] ], [ [ "%%time\nmodel._fit()", "---------------------------------------\nFitting the model to the data available\n---------------------------------------\n-> Retriving primary data about the patents linked to the selected technologies\n-> Retrieving the patent ids corresponding to the technology class Y02E 10/7 filled between 1990 and 2020\n=> Number of patents linked to selected technologies: 106271\n-> Selection of the earliest patent for each patent family\n-> Selection of the top X% most cited patent (by year)\n=> Number of breakthrough patents selected: 614\n-> Retrieving PATSTAT data using the CustomEngineForPatstat\n-> Creating a temporary table in the SQL database contaning the patent ids\n-> Retrieving general information about the selected patents\n-> Retrieving CPC technology classes of the selected patents\n-> Retrieving information about the patentees (individuals) of the selected patents\n" ], [ "len(model.CC)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a4b578168c0b01f7a486147f35f37b9bf627677
61,800
ipynb
Jupyter Notebook
Applied-Text-Mining-in-Python/Week1/Assignment1.ipynb
rahul263-stack/PROJECT-Dump
d8b1cfe0da8cad9fe2f3bbd427334b979c7d2c09
[ "MIT" ]
1
2020-04-06T04:41:56.000Z
2020-04-06T04:41:56.000Z
Applied-Text-Mining-in-Python/Week1/Assignment1.ipynb
rahul263-stack/quarantine
d8b1cfe0da8cad9fe2f3bbd427334b979c7d2c09
[ "MIT" ]
null
null
null
Applied-Text-Mining-in-Python/Week1/Assignment1.ipynb
rahul263-stack/quarantine
d8b1cfe0da8cad9fe2f3bbd427334b979c7d2c09
[ "MIT" ]
null
null
null
25.72856
294
0.44966
[ [ [ "---\n\n_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._\n\n---", "_____no_output_____" ], [ "# Assignment 1\n\nIn this assignment, you'll be working with messy medical data and using regex to extract relevant infromation from the data. \n\nEach line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.\n\nThe goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates. \n\nHere is a list of some of the variants you might encounter in this dataset:\n* 04/20/2009; 04/20/09; 4/20/09; 4/3/09\n* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;\n* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009\n* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009\n* Feb 2009; Sep 2009; Oct 2010\n* 6/2008; 12/2009\n* 2009; 2010\n\nOnce you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order accoring to the following rules:\n* Assume all dates in xx/xx/xx format are mm/dd/yy\n* Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)\n* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).\n* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).\n* Watch out for potential typos as this is a raw, real-life derived dataset.\n\nWith these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.\n\nFor example if the original series was this:\n\n 0 1999\n 1 2010\n 2 1978\n 3 2015\n 4 1985\n\nYour function should return this:\n\n 0 2\n 1 4\n 2 0\n 3 1\n 4 3\n\nYour score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.\n\n*This function should return a Series of length 500 and dtype int.*", "_____no_output_____" ] ], [ [ "# Load the data\n# Reference: https://necromuralist.github.io/data_science/posts/extracting-dates-from-medical-data/\nimport pandas\n\ndoc = []\nwith open('dates.txt') as file:\n for line in file:\n doc.append(line)\n\ndata = pandas.Series(doc)\ndata.head(10)", "_____no_output_____" ], [ "data.describe()", "_____no_output_____" ], [ "# 4 The Grammar\n# 4.1 Cardinality\nZERO_OR_MORE = '*'\nONE_OR_MORE = \"+\"\nZERO_OR_ONE = '?'\nEXACTLY_TWO = \"{2}\"\nONE_OR_TWO = \"{1,2}\"\nEXACTLY_ONE = '{1}'", "_____no_output_____" ], [ "# 4.2 Groups and Classes\nGROUP = r\"({})\"\nNAMED = r\"(?P<{}>{})\"\nCLASS = \"[{}]\"\nNEGATIVE_LOOKAHEAD = \"(?!{})\"\nNEGATIVE_LOOKBEHIND = \"(?<!{})\"\nPOSITIVE_LOOKAHEAD = \"(?={})\"\nPOSITIVE_LOOKBEHIND = \"(?<={})\"\nESCAPE = \"\\{}\"", "_____no_output_____" ], [ "# 4.3 Numbers\nDIGIT = r\"\\d\"\nONE_DIGIT = DIGIT + EXACTLY_ONE\nONE_OR_TWO_DIGITS = DIGIT + ONE_OR_TWO\nNON_DIGIT = NEGATIVE_LOOKAHEAD.format(DIGIT)\nTWO_DIGITS = DIGIT + EXACTLY_TWO\nTHREE_DIGITS = DIGIT + \"{3}\"\nEXACTLY_TWO_DIGITS = DIGIT + EXACTLY_TWO + NON_DIGIT\nFOUR_DIGITS = DIGIT + r\"{4}\" + NON_DIGIT", "_____no_output_____" ], [ "# 4.4 String 
Literals\nSLASH = r\"/\"\nOR = r'|'\nLOWER_CASE = \"a-z\"\nSPACE = \"\\s\"\nDOT = \".\"\nDASH = \"-\"\nCOMMA = \",\"\nPUNCTUATION = CLASS.format(DOT + COMMA + DASH)\nEMPTY_STRING = \"\"", "_____no_output_____" ], [ "# 4.5 Dates\n# These are parts to build up the date-expressions.\nMONTH_SUFFIX = (CLASS.format(LOWER_CASE) + ZERO_OR_MORE\n + CLASS.format(SPACE + DOT + COMMA + DASH) + ONE_OR_TWO)\nMONTH_PREFIXES = \"Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec\".split()\nMONTHS = [month + MONTH_SUFFIX for month in MONTH_PREFIXES]\nMONTHS = GROUP.format(OR.join(MONTHS))\nDAY_SUFFIX = CLASS.format(DASH + COMMA + SPACE) + ONE_OR_TWO\nDAYS = ONE_OR_TWO_DIGITS + DAY_SUFFIX\nYEAR = FOUR_DIGITS", "_____no_output_____" ], [ "# This is for dates like Mar 21st, 2009, those with suffixes on the days.\nCONTRACTED = (ONE_OR_TWO_DIGITS\n + LOWER_CASE\n + EXACTLY_TWO\n )\nCONTRACTION = NAMED.format(\"contraction\",\n MONTHS\n + CONTRACTED\n + DAY_SUFFIX\n + YEAR)", "_____no_output_____" ], [ "# This is for dates that have no days in them, like May 2009.\nNO_DAY_BEHIND = NEGATIVE_LOOKBEHIND.format(DIGIT + SPACE)\nNO_DAY = NAMED.format(\"no_day\", NO_DAY_BEHIND + MONTHS + YEAR)", "_____no_output_____" ], [ "# This is for the most common form (that I use) - May 21, 2017.\nWORDS = NAMED.format(\"words\", MONTHS + DAYS + YEAR)", "_____no_output_____" ], [ "BACKWARDS = NAMED.format(\"backwards\", ONE_OR_TWO_DIGITS + SPACE + MONTHS + YEAR)", "_____no_output_____" ], [ "slashed = SLASH.join([ONE_OR_TWO_DIGITS,\n ONE_OR_TWO_DIGITS,\n EXACTLY_TWO_DIGITS])\ndashed = DASH.join([ONE_OR_TWO_DIGITS,\n ONE_OR_TWO_DIGITS,\n EXACTLY_TWO_DIGITS])\nTWENTIETH_CENTURY = NAMED.format(\"twentieth\",\n OR.join([slashed, dashed]))", "_____no_output_____" ], [ "NUMERIC = NAMED.format(\"numeric\",\n SLASH.join([ONE_OR_TWO_DIGITS,\n ONE_OR_TWO_DIGITS,\n FOUR_DIGITS]))", "_____no_output_____" ], [ "NO_PRECEDING_SLASH = NEGATIVE_LOOKBEHIND.format(SLASH)\nNO_PRECEDING_SLASH_DIGIT = NEGATIVE_LOOKBEHIND.format(CLASS.format(SLASH + DIGIT))\nNO_ONE_DAY = (NO_PRECEDING_SLASH_DIGIT\n + ONE_DIGIT\n + SLASH\n + FOUR_DIGITS)\nNO_TWO_DAYS = (NO_PRECEDING_SLASH\n + TWO_DIGITS\n + SLASH\n + FOUR_DIGITS)\nNO_DAY_NUMERIC = NAMED.format(\"no_day_numeric\",\n NO_ONE_DAY\n + OR\n + NO_TWO_DAYS\n )", "_____no_output_____" ], [ "CENTURY = GROUP.format('19' + OR + \"20\") + TWO_DIGITS\nDIGIT_SLASH = DIGIT + SLASH\nDIGIT_DASH = DIGIT + DASH\nDIGIT_SPACE = DIGIT + SPACE\nLETTER_SPACE = CLASS.format(LOWER_CASE) + SPACE\nCOMMA_SPACE = COMMA + SPACE\nYEAR_PREFIX = NEGATIVE_LOOKBEHIND.format(OR.join([\n DIGIT_SLASH,\n DIGIT_DASH,\n DIGIT_SPACE,\n LETTER_SPACE,\n COMMA_SPACE,\n]))\n\nYEAR_ONLY = NAMED.format(\"year_only\",\n YEAR_PREFIX + CENTURY\n)", "_____no_output_____" ], [ "IN_PREFIX = POSITIVE_LOOKBEHIND.format(CLASS.format('iI') + 'n' + SPACE) + CENTURY\nSINCE_PREFIX = POSITIVE_LOOKBEHIND.format(CLASS.format(\"Ss\") + 'ince' + SPACE) + CENTURY\nAGE = POSITIVE_LOOKBEHIND.format(\"Age\" + SPACE + TWO_DIGITS + COMMA + SPACE) + CENTURY\nAGE_COMMA = POSITIVE_LOOKBEHIND.format(\"Age\" + COMMA + SPACE + TWO_DIGITS + COMMA + SPACE) + CENTURY\nOTHERS = ['delivery', \"quit\", \"attempt\", \"nephrectomy\", THREE_DIGITS]\nOTHERS = [POSITIVE_LOOKBEHIND.format(label + SPACE) + CENTURY for label in OTHERS]\nOTHERS = OR.join(OTHERS)\nLEFTOVERS_PREFIX = OR.join([IN_PREFIX, SINCE_PREFIX, AGE, AGE_COMMA]) + OR + OTHERS\nLEFTOVERS = NAMED.format(\"leftovers\", LEFTOVERS_PREFIX)", "_____no_output_____" ], [ "DATE = NAMED.format(\"date\", OR.join([NUMERIC,\n 
TWENTIETH_CENTURY,\n WORDS,\n BACKWARDS,\n CONTRACTION,\n NO_DAY,\n NO_DAY_NUMERIC,\n YEAR_ONLY,\n LEFTOVERS]))", "_____no_output_____" ], [ "def twentieth_century(date):\n \"\"\"adds a 19 to the year\n\n Args:\n date (re.Regex): Extracted date\n \"\"\"\n month, day, year = date.group(1).split(SLASH)\n year = \"19{}\".format(year)\n return SLASH.join([month, day, year])", "_____no_output_____" ], [ "def take_two(line):\n match = re.search(TWENTIETH_CENTURY, line)\n if match:\n return twentieth_century(match)\n return line", "_____no_output_____" ], [ "def extract_and_count(expression, data, name):\n \"\"\"extract all matches and report the count\n\n Args:\n expression (str): regular expression to match\n data (pandas.Series): data with dates to extratc\n name (str): name of the group for the expression\n\n Returns:\n tuple (pandas.Series, int): extracted dates, count\n \"\"\"\n extracted = data.str.extractall(expression)[name]\n count = len(extracted)\n print(\"'{}' matched {} rows\".format(name, count))\n return extracted, count", "_____no_output_____" ], [ "numeric, numeric_count = extract_and_count(NUMERIC, data, 'numeric')\n# 'numeric' matched 25 rows\ntwentieth, twentieth_count = extract_and_count(TWENTIETH_CENTURY, data, 'twentieth')\n# 'twentieth' matched 100 rows\nwords, words_count = extract_and_count(WORDS, data, 'words')\n# 'words' matched 34 rows\nbackwards, backwards_count = extract_and_count(BACKWARDS, data, 'backwards')\n# 'backwards' matched 69 rows\ncontraction_data, contraction = extract_and_count(CONTRACTION, data, 'contraction')\n# 'contraction' matched 0 rows\nno_day, no_day_count = extract_and_count(NO_DAY, data, 'no_day')\n# 'no_day' matched 115 rows\nno_day_numeric, no_day_numeric_count = extract_and_count(NO_DAY_NUMERIC, data,\n \"no_day_numeric\")\n# 'no_day_numeric' matched 112 rows\nyear_only, year_only_count = extract_and_count(YEAR_ONLY, data, \"year_only\")\n# 'year_only' matched 15 rows\nleftovers, leftovers_count = extract_and_count(LEFTOVERS, data, \"leftovers\")\n# 'leftovers' matched 30 rows", "'numeric' matched 25 rows\n'twentieth' matched 100 rows\n'words' matched 34 rows\n'backwards' matched 69 rows\n'contraction' matched 0 rows\n'no_day' matched 115 rows\n'no_day_numeric' matched 112 rows\n'year_only' matched 15 rows\n'leftovers' matched 30 rows\n" ], [ "found = data.str.extractall(DATE)\ntotal_found = len(found.date)\n\nprint(\"Total Found: {}\".format(total_found))\nprint(\"Remaining: {}\".format(len(data) - total_found))\nprint(\"Discrepancy: {}\".format(total_found - (numeric_count\n + twentieth_count\n + words_count\n + backwards_count\n + contraction\n + no_day_count\n + no_day_numeric_count\n + year_only_count\n + leftovers_count)))", "Total Found: 500\nRemaining: 0\nDiscrepancy: 0\n" ], [ "# Total Found: 500\n# Remaining: 0\n# Discrepancy: 0", "_____no_output_____" ], [ "missing = [label for label in data.index if label not in found.index.levels[0]]\ntry:\n print(missing[0], data.loc[missing[0]])\nexcept IndexError:\n print(\"all rows matched\")", "all rows matched\n" ], [ "# all rows matched", "_____no_output_____" ], [ "def clean(source, expression, replacement, sample=5):\n \"\"\"applies the replacement to the source\n\n as a side-effect shows sample rows before and after\n\n Args:\n source (pandas.Series): source of the strings\n expression (str): regular expression to match what to replace\n replacement: function or expression to replace the matching expression\n sample (int): number of randomly chosen examples to show\n\n Returns:\n 
pandas.Series: the source with the replacement applied to it\n \"\"\"\n print(\"Random Sample Before:\")\n print(source.sample(sample))\n cleaned = source.str.replace(expression, replacement)\n print(\"\\nRandom Sample After:\")\n print(cleaned.sample(sample))\n print(\"\\nCount of cleaned: {}\".format(len(cleaned)))\n assert len(source) == len(cleaned)\n return cleaned", "_____no_output_____" ], [ "def clean_punctuation(source, sample=5):\n \"\"\"removes punctuation\n\n Args:\n source (pandas.Series): data to clean\n sample (int): size of sample to show\n\n Returns:\n pandas.Series: source with punctuation removed\n \"\"\"\n print(\"Cleaning Punctuation\")\n if any(source.str.contains(PUNCTUATION)):\n source = clean(source, PUNCTUATION, EMPTY_STRING)\n return source", "_____no_output_____" ], [ "LONG_TO_SHORT = dict(January=\"Jan\",\n February=\"Feb\",\n March=\"Mar\",\n April=\"Apr\",\n May=\"May\",\n June=\"Jun\",\n July=\"Jul\",\n August=\"Aug\",\n September=\"Sep\",\n October=\"Oct\",\n November=\"Nov\",\n December=\"Dec\")\n\n# it turns out there are spelling errors in the data so this has to be fuzzy\nLONG_TO_SHORT_EXPRESSION = OR.join([GROUP.format(month)\n + CLASS.format(LOWER_CASE)\n + ZERO_OR_MORE\n for month in LONG_TO_SHORT.values()])\n\ndef long_month_to_short(match):\n \"\"\"convert long month to short\n\n Args:\n match (re.Match): object matching a long month\n\n Returns:\n str: shortened version of the month\n \"\"\"\n return match.group(match.lastindex)", "_____no_output_____" ], [ "def convert_long_months_to_short(source, sample=5):\n \"\"\"convert long month names to short\n\n Args:\n source (pandas.Series): data with months\n sample (int): size of sample to show\n\n Returns:\n pandas.Series: data with short months\n \"\"\"\n return clean(source,\n LONG_TO_SHORT_EXPRESSION,\n long_month_to_short)", "_____no_output_____" ], [ "def add_month_date(match):\n \"\"\"adds 01/01 to years\n\n Args:\n match (re.Match): object that only matched a 4-digit year\n\n Returns:\n str: 01/01/YYYY\n \"\"\"\n return \"01/01/\" + match.group()\n", "_____no_output_____" ], [ "def add_january_one(source):\n \"\"\"adds /01/01/ to year-only dates\n\n Args:\n source (pandas.Series): data with the dates\n\n Returns:\n pandas.Series: years in source with /01/01/ added\n \"\"\"\n return clean(source, YEAR_ONLY, add_month_date)", "_____no_output_____" ], [ "two_digit_expression = GROUP.format(ONE_OR_TWO_DIGITS) + POSITIVE_LOOKAHEAD.format(SLASH)\n\ndef two_digits(match):\n \"\"\"add a leading zero if needed\n\n Args:\n match (re.Match): match with one or two digits\n\n Returns:\n str: the matched string with leading zero if needed\n \"\"\"\n # for some reason the string-formatting raises an error if it's a string\n # so cast it to an int\n return \"{:02}\".format(int(match.group()))", "_____no_output_____" ], [ "def clean_two_digits(source, sample=5):\n \"\"\"makes sure source has two-digits\n\n Args:\n source (pandas.Series): data with digit followed by slash\n sample (int): number of samples to show\n\n Returns:\n pandas.Series: source with digits coerced to two digits\n \"\"\"\n return clean(source, two_digit_expression, two_digits, sample)", "_____no_output_____" ], [ "def clean_two_digits_isolated(source, sample=5):\n \"\"\"cleans two digits that are standalone\n\n Args:\n source (pandas.Series): source of the data\n sample (int): number of samples to show\n\n Returns:\n pandas.Series: converted data\n \"\"\"\n return clean(source, ONE_OR_TWO_DIGITS, two_digits, sample)", "_____no_output_____" 
], [ "digits = (\"{:02}\".format(month) for month in range(1, 13))\nMONTH_TO_DIGITS = dict(zip(MONTH_PREFIXES, digits))\nSHORT_MONTHS_EXPRESSION = OR.join((GROUP.format(month) for month in MONTH_TO_DIGITS))\ndef month_to_digits(match):\n \"\"\"converts short month to digits\n\n Args:\n match (re.Match): object with short-month\n\n Returns:\n str: month as two-digit number (e.g. Jan -> 01)\n \"\"\"\n return MONTH_TO_DIGITS[match.group()]", "_____no_output_____" ], [ "def convert_short_month_to_digits(source, sample=5):\n \"\"\"converts three-letter months to two-digits\n\n Args:\n source (pandas.Series): data with three-letter months\n sample (int): number of samples to show\n\n Returns:\n pandas.Series: source with short-months coverted to digits\n \"\"\"\n return clean(source,\n SHORT_MONTHS_EXPRESSION,\n month_to_digits,\n sample)", "_____no_output_____" ], [ "def clean_months(source, sample=5):\n \"\"\"clean up months (which start as words)\n\n Args:\n source (pandas.Series): source of the months\n sample (int): number of random samples to show\n \"\"\"\n cleaned = clean_punctuation(source)\n\n print(\"Converting long months to short\")\n cleaned = clean(cleaned,\n LONG_TO_SHORT_EXPRESSION,\n long_month_to_short, sample)\n\n print(\"Converting short months to digits\")\n cleaned = clean(cleaned,\n SHORT_MONTHS_EXPRESSION,\n month_to_digits, sample)\n return cleaned", "_____no_output_____" ], [ "def frame_to_series(frame, index_source, samples=5):\n \"\"\"re-combines data-frame into a series\n\n Args:\n frame (pandas.DataFrame): frame with month, day, year columns\n index_source (pandas.series): source to copy index from\n samples (index): number of random entries to print when done\n\n Returns:\n pandas.Series: series with dates as month/day/year\n \"\"\"\n combined = frame.month + SLASH + frame.day + SLASH + frame.year\n combined.index = index_source.index\n print(combined.sample(samples))\n return combined", "_____no_output_____" ], [ "year_only_cleaned = add_january_one(year_only)", "Random Sample Before:\n match\n496 0 2006\n472 0 2010\n486 0 1973\n495 0 1979\n497 0 2008\nName: year_only, dtype: object\n\nRandom Sample After:\n match\n481 0 01/01/1974\n486 0 01/01/1973\n473 0 01/01/1975\n466 0 01/01/1981\n472 0 01/01/2010\nName: year_only, dtype: object\n\nCount of cleaned: 15\n" ], [ "# Random Sample Before:\n# match\n# 472 0 2010\n# 495 0 1979\n# 497 0 2008\n# 481 0 1974\n# 486 0 1973\n# Name: year_only, dtype: object\n\n# Random Sample After:\n# match\n# 495 0 01/01/1979\n# 470 0 01/01/1983\n# 462 0 01/01/1988\n# 481 0 01/01/1974\n# 480 0 01/01/2013\n# Name: year_only, dtype: object\n\n# Count of cleaned: 15", "_____no_output_____" ], [ "leftovers_cleaned = add_january_one(leftovers)", "Random Sample Before:\n match\n488 0 1977\n463 0 2014\n494 0 2002\n477 0 1994\n474 0 1972\nName: leftovers, dtype: object\n\nRandom Sample After:\n match\n475 0 01/01/2015\n477 0 01/01/1994\n488 0 01/01/1977\n456 0 01/01/2000\n494 0 01/01/2002\nName: leftovers, dtype: object\n\nCount of cleaned: 30\n" ], [ "# Random Sample Before:\n# match\n# 487 0 1992\n# 477 0 1994\n# 498 0 2005\n# 488 0 1977\n# 484 0 2004\n# Name: leftovers, dtype: object\n\n# Random Sample After:\n# match\n# 464 0 01/01/2016\n# 455 0 01/01/1984\n# 465 0 01/01/1976\n# 475 0 01/01/2015\n# 498 0 01/01/2005\n# Name: leftovers, dtype: object\n\n# Count of cleaned: 30", "_____no_output_____" ], [ "cleaned = pandas.concat([year_only_cleaned, leftovers_cleaned])\nprint(len(cleaned))", "45\n" ], [ "no_day_numeric_cleaned = 
clean_two_digits(no_day_numeric)", "Random Sample Before:\n match\n426 0 11/1984\n417 0 8/2000\n407 0 8/1999\n389 0 2/2009\n354 0 3/1993\nName: no_day_numeric, dtype: object\n\nRandom Sample After:\n match\n363 0 12/1975\n447 0 07/1985\n351 0 08/1974\n414 0 04/2004\n344 0 06/2005\nName: no_day_numeric, dtype: object\n\nCount of cleaned: 112\n" ], [ "no_day_numeric_cleaned = clean(no_day_numeric_cleaned,\n SLASH,\n lambda m: \"/01/\")", "Random Sample Before:\n match\n366 0 07/2014\n401 0 12/2014\n452 0 03/2003\n346 0 09/2005\n380 0 07/1973\nName: no_day_numeric, dtype: object\n\nRandom Sample After:\n match\n404 0 10/01/1986\n405 0 03/01/1973\n346 0 09/01/2005\n407 0 08/01/1999\n353 0 10/01/1997\nName: no_day_numeric, dtype: object\n\nCount of cleaned: 112\n" ], [ "original = len(cleaned)\ncleaned = pandas.concat([cleaned, no_day_numeric_cleaned])\nassert len(cleaned) == no_day_numeric_count + original", "_____no_output_____" ], [ "print(len(cleaned))", "157\n" ], [ "no_day_cleaned = clean_months(no_day)", "Cleaning Punctuation\nRandom Sample Before:\n match\n269 0 July 1992\n284 0 Jan 1987\n327 0 April 1999\n283 0 Feb 1977\n256 0 Aug 1988\nName: no_day, dtype: object\n\nRandom Sample After:\n match\n257 0 Sep 2015\n295 0 March 1983\n321 0 June 1999\n276 0 April 1986\n231 0 May 2016\nName: no_day, dtype: object\n\nCount of cleaned: 115\nConverting long months to short\nRandom Sample Before:\n match\n260 0 February 2000\n314 0 January 2007\n332 0 June 1974\n231 0 May 2016\n302 0 November 2004\nName: no_day, dtype: object\n\nRandom Sample After:\n match\n294 0 Feb 1983\n271 0 Aug 2008\n248 0 Jul 1995\n254 0 Aug 1979\n327 0 Apr 1999\nName: no_day, dtype: object\n\nCount of cleaned: 115\nConverting short months to digits\nRandom Sample Before:\n match\n253 0 Feb 2016\n274 0 Apr 1985\n317 0 Mar 1975\n326 0 Oct 1995\n323 0 Mar 1973\nName: no_day, dtype: object\n\nRandom Sample After:\n match\n242 0 11 2010\n283 0 02 1977\n239 0 02 1978\n247 0 05 1983\n325 0 06 2007\nName: no_day, dtype: object\n\nCount of cleaned: 115\n" ], [ "no_day_cleaned = clean(no_day_cleaned,\n SPACE + ONE_OR_MORE,\n lambda match: \"/01/\")", "Random Sample Before:\n match\n319 0 07 1975\n329 0 03 2000\n229 0 06 2011\n312 0 02 1989\n294 0 02 1983\nName: no_day, dtype: object\n\nRandom Sample After:\n match\n294 0 02/01/1983\n280 0 07/01/1985\n304 0 03/01/2002\n261 0 10/01/1986\n266 0 09/01/1999\nName: no_day, dtype: object\n\nCount of cleaned: 115\n" ], [ "original = len(cleaned)\ncleaned = pandas.concat([cleaned, no_day_cleaned])\nprint(len(cleaned))", "272\n" ], [ "assert len(cleaned) == no_day_count + original", "_____no_output_____" ], [ "frame = pandas.DataFrame(backwards.str.split().tolist(),\n columns=\"day month year\".split())\nframe.head()", "_____no_output_____" ], [ "frame.day = clean_two_digits(frame.day)", "Random Sample Before:\n27 28\n24 09\n49 04\n54 10\n14 21\nName: day, dtype: object\n\nRandom Sample After:\n7 30\n46 04\n22 30\n35 21\n20 18\nName: day, dtype: object\n\nCount of cleaned: 69\n" ], [ "frame.month = clean_months(frame.month)", "Cleaning Punctuation\nConverting long months to short\nRandom Sample Before:\n22 May\n51 May\n10 Oct\n64 Oct\n38 Jan\nName: month, dtype: object\n\nRandom Sample After:\n16 May\n33 Aug\n5 Oct\n0 Jan\n35 Oct\nName: month, dtype: object\n\nCount of cleaned: 69\nConverting short months to digits\nRandom Sample Before:\n51 May\n26 Jun\n33 Aug\n20 Oct\n42 Oct\nName: month, dtype: object\n\nRandom Sample After:\n18 10\n53 06\n44 10\n45 01\n10 10\nName: month, dtype: 
object\n\nCount of cleaned: 69\n" ], [ "backwards_cleaned = frame_to_series(frame, backwards)", " match\n188 0 03/12/2004\n137 0 02/10/1983\n191 0 11/30/1972\n136 0 02/11/1985\n125 0 01/24/2001\ndtype: object\n" ], [ "original = len(cleaned)\ncleaned = pandas.concat([cleaned, backwards_cleaned])\nassert len(cleaned) == original + backwards_count", "_____no_output_____" ], [ "print(len(cleaned))", "341\n" ], [ "frame = pandas.DataFrame(words.str.split().tolist(), columns=\"month day year\".split())\nprint(frame.head())", " month day year\n0 April 11, 1990\n1 May 30, 2001\n2 Feb 18, 1994\n3 February 18, 1981\n4 October. 11, 2013\n" ], [ "frame.month = clean_months(frame.month)", "Cleaning Punctuation\nRandom Sample Before:\n6 July\n30 July\n4 October.\n22 Nov\n15 July\nName: month, dtype: object\n\nRandom Sample After:\n22 Nov\n20 Sep\n26 June\n16 August\n7 December\nName: month, dtype: object\n\nCount of cleaned: 34\nConverting long months to short\nRandom Sample Before:\n28 May\n22 Nov\n17 April\n23 June\n26 June\nName: month, dtype: object\n\nRandom Sample After:\n15 Jul\n19 Jul\n18 Jul\n20 Sep\n29 Oct\nName: month, dtype: object\n\nCount of cleaned: 34\nConverting short months to digits\nRandom Sample Before:\n2 Feb\n29 Oct\n31 Jun\n24 May\n26 Jun\nName: month, dtype: object\n\nRandom Sample After:\n21 08\n11 01\n25 12\n17 04\n20 09\nName: month, dtype: object\n\nCount of cleaned: 34\n" ], [ "frame.day = clean_punctuation(frame.day)", "Cleaning Punctuation\nRandom Sample Before:\n28 15,\n18 24,\n12 23\n26 25,\n19 11,\nName: day, dtype: object\n\nRandom Sample After:\n18 24\n14 01\n10 10\n3 18\n2 18\nName: day, dtype: object\n\nCount of cleaned: 34\n" ], [ "frame.head()", "_____no_output_____" ], [ "words_cleaned = frame_to_series(frame, words)", " match\n201 0 12/23/1999\n215 0 08/14/1981\n199 0 01/24/1986\n211 0 04/17/1992\n200 0 07/26/1978\ndtype: object\n" ], [ "original = len(cleaned)\ncleaned = pandas.concat([cleaned, words_cleaned])\nassert len(cleaned) == original + words_count\nprint(len(cleaned))", "375\n" ], [ "print(twentieth.iloc[21])\ntwentieth_cleaned = twentieth.str.replace(DASH, SLASH)\nprint(cleaned.iloc[21])", "4-13-82\n01/01/1991\n" ], [ "frame = pandas.DataFrame(twentieth_cleaned.str.split(SLASH).tolist(),\n columns=[\"month\", \"day\", \"year\"])\nprint(frame.head())", " month day year\n0 03 25 93\n1 6 18 85\n2 7 8 71\n3 9 27 75\n4 2 6 96\n" ], [ "frame.month = clean_two_digits_isolated(frame.month)", "Random Sample Before:\n83 2\n75 7\n62 12\n34 6\n77 5\nName: month, dtype: object\n\nRandom Sample After:\n5 07\n20 02\n63 04\n56 07\n2 07\nName: month, dtype: object\n\nCount of cleaned: 100\n" ], [ "frame.day = clean_two_digits_isolated(frame.day)", "Random Sample Before:\n31 24\n80 05\n8 7\n49 06\n15 24\nName: day, dtype: object\n\nRandom Sample After:\n93 20\n82 24\n31 24\n2 08\n4 06\nName: day, dtype: object\n\nCount of cleaned: 100\n" ], [ "frame.head()", "_____no_output_____" ], [ "frame.year = clean(frame.year, TWO_DIGITS, lambda match: \"19\" + match.group())", "Random Sample Before:\n44 71\n72 95\n55 87\n38 94\n41 75\nName: year, dtype: object\n\nRandom Sample After:\n62 1987\n6 1978\n9 1971\n35 1994\n16 1977\nName: year, dtype: object\n\nCount of cleaned: 100\n" ], [ "twentieth_cleaned = frame_to_series(frame, twentieth)", " match\n96 0 07/18/1986\n81 0 08/04/1978\n36 0 02/14/1973\n114 0 12/08/1997\n72 0 07/11/1977\ndtype: object\n" ], [ "original = len(cleaned)\ncleaned = pandas.concat([cleaned, twentieth_cleaned])", "_____no_output_____" ], [ "assert 
len(cleaned) == original + twentieth_count", "_____no_output_____" ], [ "print(numeric.head())", " match\n14 0 5/24/1990\n15 0 1/25/2011\n17 0 10/13/1976\n24 0 07/25/1984\n30 0 03/31/1985\nName: numeric, dtype: object\n" ], [ "has_dashes = numeric.str.contains(DASH)\nprint(numeric[has_dashes])", "Series([], Name: numeric, dtype: object)\n" ], [ "frame = pandas.DataFrame(numeric.str.split(SLASH).tolist(),\n columns=\"month day year\".split())\nprint(frame.head())", " month day year\n0 5 24 1990\n1 1 25 2011\n2 10 13 1976\n3 07 25 1984\n4 03 31 1985\n" ], [ "frame.month = clean_two_digits_isolated(frame.month)", "Random Sample Before:\n18 04\n10 12\n20 4\n16 09\n3 07\nName: month, dtype: object\n\nRandom Sample After:\n1 01\n14 01\n9 10\n19 12\n24 04\nName: month, dtype: object\n\nCount of cleaned: 25\n" ], [ "frame.day = clean_two_digits_isolated(frame.day)", "Random Sample Before:\n4 31\n7 13\n6 27\n19 08\n12 29\nName: day, dtype: object\n\nRandom Sample After:\n6 27\n10 05\n3 25\n16 14\n8 15\nName: day, dtype: object\n\nCount of cleaned: 25\n" ], [ "numeric_cleaned = frame_to_series(frame, numeric)", " match\n109 0 07/20/2011\n38 0 07/27/1986\n92 0 04/08/2004\n34 0 05/12/2012\n94 0 12/08/1990\ndtype: object\n" ], [ "original = len(cleaned)\ncleaned = pandas.concat([cleaned, numeric_cleaned])\nassert len(cleaned) == original + numeric_count\nprint(len(cleaned))", "500\n" ], [ "cleaned = pandas.concat([numeric_cleaned,\n twentieth_cleaned,\n words_cleaned,\n backwards_cleaned,\n no_day_cleaned,\n no_day_numeric_cleaned,\n year_only_cleaned,\n leftovers_cleaned,\n])\nprint(len(cleaned))\nprint(cleaned.head())\nassert len(cleaned) == len(data)", "500\n match\n14 0 05/24/1990\n15 0 01/25/2011\n17 0 10/13/1976\n24 0 07/25/1984\n30 0 03/31/1985\ndtype: object\n" ], [ "print(cleaned.head())\ndatetimes = pandas.to_datetime(cleaned, format=\"%m/%d/%Y\")\nprint(datetimes.head())", " match\n14 0 05/24/1990\n15 0 01/25/2011\n17 0 10/13/1976\n24 0 07/25/1984\n30 0 03/31/1985\ndtype: object\n match\n14 0 1990-05-24\n15 0 2011-01-25\n17 0 1976-10-13\n24 0 1984-07-25\n30 0 1985-03-31\ndtype: datetime64[ns]\n" ], [ "sorted_dates = datetimes.sort_values()\nprint(sorted_dates.head())", " match\n9 0 1971-04-10\n84 0 1971-05-18\n2 0 1971-07-08\n53 0 1971-07-11\n28 0 1971-09-12\ndtype: datetime64[ns]\n" ], [ "print(sorted_dates.tail())", " match\n231 0 2016-05-01\n141 0 2016-05-30\n186 0 2016-10-13\n161 0 2016-10-19\n413 0 2016-11-01\ndtype: datetime64[ns]\n" ], [ "answer = pandas.Series(sorted_dates.index.labels[0])\nprint(answer.head())", "0 9\n1 84\n2 2\n3 53\n4 28\ndtype: int16\n" ], [ "def date_sorter():\n return answer", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a4b69f0bd4173db1fa126a29603eea5be250092
797,490
ipynb
Jupyter Notebook
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
j-zeg/sagemaker-deployment
36faf2711fc6e516ff9abf148c08f8fa9f7658d4
[ "MIT" ]
null
null
null
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
j-zeg/sagemaker-deployment
36faf2711fc6e516ff9abf148c08f8fa9f7658d4
[ "MIT" ]
null
null
null
Tutorials/Boston Housing - XGBoost (Batch Transform) - High Level.ipynb
j-zeg/sagemaker-deployment
36faf2711fc6e516ff9abf148c08f8fa9f7658d4
[ "MIT" ]
null
null
null
87.511248
14,796
0.68484
[ [ [ "# Predicting Boston Housing Prices\n\n## Using XGBoost in SageMaker (Batch Transform)\n\n_Deep Learning Nanodegree Program | Deployment_\n\n---\n\nAs an introduction to using SageMaker's High Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.\n\nThe documentation for the high level API can be found on the [ReadTheDocs page](http://sagemaker.readthedocs.io/en/latest/)\n\n## General Outline\n\nTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.\n\n1. Download or otherwise retrieve the data.\n2. Process / Prepare the data.\n3. Upload the processed data to S3.\n4. Train a chosen model.\n5. Test the trained model (typically using a batch transform job).\n6. Deploy the trained model.\n7. Use the deployed model.\n\nIn this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.", "_____no_output_____" ], [ "## Step 0: Setting up the notebook\n\nWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport os\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_boston\nimport sklearn.model_selection", "_____no_output_____" ] ], [ [ "In addition to the modules above, we need to import the various bits of SageMaker that we will be using. ", "_____no_output_____" ] ], [ [ "import sagemaker\nfrom sagemaker import get_execution_role\nfrom sagemaker.amazon.amazon_estimator import get_image_uri\nfrom sagemaker.predictor import csv_serializer\n\n# This is an object that represents the SageMaker session that we are currently operating in. This\n# object contains some useful information that we will need to access later such as our region.\nsession = sagemaker.Session()\n\n# This is an object that represents the IAM role that we are currently assigned. When we construct\n# and launch the training job later we will need to tell it what IAM role it should have. Since our\n# use case is relatively simple we will simply assign the training job the role we currently have.\nrole = get_execution_role()", "_____no_output_____" ], [ "# dir(role)", "_____no_output_____" ] ], [ [ "## Step 1: Downloading the data\n\nFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.", "_____no_output_____" ] ], [ [ "boston = load_boston()", "_____no_output_____" ], [ "print(boston.DESCR)", ".. _boston_dataset:\n\nBoston house prices dataset\n---------------------------\n\n**Data Set Characteristics:** \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive. 
Median Value (attribute 14) is usually the target.\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/housing/\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n.. topic:: References\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.\n\n" ] ], [ [ "## Step 2: Preparing and splitting the data\n\nGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.", "_____no_output_____" ] ], [ [ "# First we package up the input data and the target variable (the median value) as pandas dataframes. This\n# will make saving the data to a file a little easier later on.\n\nX_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)\nY_bos_pd = pd.DataFrame(boston.target)\n\n# We split the dataset into 2/3 training and 1/3 testing sets.\nX_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)\n\n# Then we split the training set further into 2/3 training and 1/3 validation sets.\nX_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)", "_____no_output_____" ] ], [ [ "## Step 3: Uploading the data files to S3\n\nWhen a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. 
We can use the SageMaker API to do this and hide some of the details.\n\n### Save the data locally\n\nFirst we need to create the test, train and validation csv files which we will then upload to S3.", "_____no_output_____" ] ], [ [ "# This is our local data directory. We need to make sure that it exists.\ndata_dir = '../data/boston'\nif not os.path.exists(data_dir):\n os.makedirs(data_dir)", "_____no_output_____" ], [ "# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header\n# information or an index as this is required by the built-in algorithms provided by Amazon. Also, for the train and\n# validation data, it is assumed that the first entry in each row is the target variable.\n\nX_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)\n\npd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)\npd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)", "_____no_output_____" ] ], [ [ "### Upload to S3\n\nSince we are currently running inside a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.", "_____no_output_____" ] ], [ [ "prefix = 'boston-xgboost-HL'\n\ntest_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)\nval_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)\ntrain_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)", "_____no_output_____" ] ], [ [ "## Step 4: Train the XGBoost model\n\nNow that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. We will be making use of the high level SageMaker API to do this, which will make the resulting code a little easier to read at the cost of some flexibility.\n\nTo construct an estimator, the object which we wish to train, we need to provide the location of a container which contains the training code. Since we are using a built-in algorithm, this container is provided by Amazon. However, the full name of the container is a bit lengthy and depends on the region that we are operating in. Fortunately, SageMaker provides a useful utility method called `get_image_uri` that constructs the image name for us.\n\nTo use the `get_image_uri` method we need to provide it with our current region, which can be obtained from the session object, and the name of the algorithm we wish to use. In this notebook we will be using XGBoost; however, you could try another algorithm if you wish. 
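For instance, swapping in a different built-in algorithm only changes the container lookup; everything downstream (the estimator, `fit`, the transformer) stays the same. A minimal sketch, assuming the `'linear-learner'` repo name from the SageMaker built-in algorithm list (it is not used anywhere in this notebook):

```python
# Sketch: the same utility call, pointed at a different built-in algorithm.
ll_container = get_image_uri(session.boto_region_name, 'linear-learner')
```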
The list of built-in algorithms can be found on the [Common Parameters](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html) page.", "_____no_output_____" ] ], [ [ "# As stated above, we use this utility method to construct the image name for the training container.\ncontainer = get_image_uri(session.boto_region_name, 'xgboost')\n\n# Now that we know which container to use, we can construct the estimator object.\nxgb = sagemaker.estimator.Estimator(container, # The image name of the training container\n role, # The IAM role to use (our current role in this case)\n train_instance_count=1, # The number of instances to use for training\n train_instance_type='ml.m4.xlarge', # The type of instance to use for training\n output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),\n # Where to save the output (the model artifacts)\n sagemaker_session=session) # The current SageMaker session", "WARNING:root:There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='0.90-1'. For example:\n\tget_image_uri(region, 'xgboost', '0.90-1').\n" ] ], [ [ "Before asking SageMaker to begin the training job, we should probably set any model-specific hyperparameters. There are quite a few that can be set when using the XGBoost algorithm; below are just a few of them. If you would like to change the hyperparameters below or modify additional ones, you can find more information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html).", "_____no_output_____" ] ], [ [ "xgb.set_hyperparameters(max_depth=5,\n eta=0.2,\n gamma=4,\n min_child_weight=6,\n subsample=0.8,\n objective='reg:linear',\n early_stopping_rounds=10,\n num_round=200)", "_____no_output_____" ] ], [ [ "Now that we have our estimator object completely set up, it is time to train it. To do this, we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method.", "_____no_output_____" ] ], [ [ "# This is a wrapper around the location of our train and validation data, to make sure that SageMaker\n# knows our data is in csv format.\ns3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')\ns3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')\n\nxgb.fit({'train': s3_input_train, 'validation': s3_input_validation})", "2020-06-19 10:02:19 Starting - Starting the training job...\n2020-06-19 10:02:20 Starting - Launching requested ML instances......\n2020-06-19 10:03:29 Starting - Preparing the instances for training......\n2020-06-19 10:04:40 Downloading - Downloading input data\n2020-06-19 10:04:40 Training - Downloading the training image..\u001b[34mArguments: train\u001b[0m\n\u001b[34m[2020-06-19:10:05:01:INFO] Running standalone xgboost training.\u001b[0m\n\u001b[34m[2020-06-19:10:05:01:INFO] File size need to be processed in the node: 0.02mb. 
Available memory size in the node: 8488.58mb\u001b[0m\n\u001b[34m[2020-06-19:10:05:01:INFO] Determined delimiter of CSV input is ','\u001b[0m\n\u001b[34m[10:05:01] S3DistributionType set as FullyReplicated\u001b[0m\n\u001b[34m[10:05:01] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,\u001b[0m\n\u001b[34m[2020-06-19:10:05:01:INFO] Determined delimiter of CSV input is ','\u001b[0m\n\u001b[34m[10:05:01] S3DistributionType set as FullyReplicated\u001b[0m\n\u001b[34m[10:05:01] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[0]#011train-rmse:19.763#011validation-rmse:19.7121\u001b[0m\n\u001b[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.\n\u001b[0m\n\u001b[34mWill train until validation-rmse hasn't improved in 10 rounds.\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[1]#011train-rmse:16.1564#011validation-rmse:16.1243\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 0 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[2]#011train-rmse:13.3373#011validation-rmse:13.4093\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[3]#011train-rmse:11.0022#011validation-rmse:11.2882\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[4]#011train-rmse:9.18487#011validation-rmse:9.53739\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[5]#011train-rmse:7.70571#011validation-rmse:8.18771\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[6]#011train-rmse:6.52281#011validation-rmse:7.18525\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[7]#011train-rmse:5.54241#011validation-rmse:6.35235\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[8]#011train-rmse:4.81398#011validation-rmse:5.91751\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[9]#011train-rmse:4.18159#011validation-rmse:5.41686\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[10]#011train-rmse:3.72271#011validation-rmse:5.05303\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[11]#011train-rmse:3.37531#011validation-rmse:4.83314\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[12]#011train-rmse:3.10011#011validation-rmse:4.66749\u001b[0m\n\u001b[34m[10:05:01] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[13]#011train-rmse:2.8336#011validation-rmse:4.44194\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[14]#011train-rmse:2.67771#011validation-rmse:4.37748\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[15]#011train-rmse:2.56842#011validation-rmse:4.2789\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[16]#011train-rmse:2.49046#011validation-rmse:4.211\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[17]#011train-rmse:2.38789#011validation-rmse:4.18879\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[18]#011train-rmse:2.29867#011validation-rmse:4.18533\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[19]#011train-rmse:2.24218#011validation-rmse:4.15377\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[20]#011train-rmse:2.19876#011validation-rmse:4.14155\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[21]#011train-rmse:2.15465#011validation-rmse:4.14608\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[22]#011train-rmse:2.07456#011validation-rmse:4.11843\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[23]#011train-rmse:2.00483#011validation-rmse:4.07925\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[24]#011train-rmse:1.98375#011validation-rmse:4.10024\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[25]#011train-rmse:1.95631#011validation-rmse:4.11296\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[26]#011train-rmse:1.87344#011validation-rmse:4.1\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[27]#011train-rmse:1.81025#011validation-rmse:4.08312\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[28]#011train-rmse:1.77307#011validation-rmse:4.0394\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[29]#011train-rmse:1.71674#011validation-rmse:4.03952\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 
extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[30]#011train-rmse:1.65888#011validation-rmse:4.05758\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[31]#011train-rmse:1.62055#011validation-rmse:4.1322\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[32]#011train-rmse:1.58819#011validation-rmse:4.09761\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[33]#011train-rmse:1.55843#011validation-rmse:4.07826\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[34]#011train-rmse:1.50777#011validation-rmse:4.05382\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[35]#011train-rmse:1.48228#011validation-rmse:4.06931\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[36]#011train-rmse:1.46698#011validation-rmse:4.04621\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[37]#011train-rmse:1.44047#011validation-rmse:4.04105\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[38]#011train-rmse:1.41678#011validation-rmse:4.03642\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[39]#011train-rmse:1.38011#011validation-rmse:4.03613\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[40]#011train-rmse:1.37321#011validation-rmse:4.0115\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[41]#011train-rmse:1.35911#011validation-rmse:4.01131\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[42]#011train-rmse:1.34092#011validation-rmse:4.02243\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[43]#011train-rmse:1.32265#011validation-rmse:4.03209\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[44]#011train-rmse:1.30134#011validation-rmse:4.02671\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 8 pruned nodes, max_depth=1\u001b[0m\n\u001b[34m[45]#011train-rmse:1.30243#011validation-rmse:4.01145\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[46]#011train-rmse:1.29794#011validation-rmse:3.99821\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, 
max_depth=5\u001b[0m\n\u001b[34m[47]#011train-rmse:1.28705#011validation-rmse:3.98084\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[48]#011train-rmse:1.24627#011validation-rmse:3.93809\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[49]#011train-rmse:1.21419#011validation-rmse:3.97412\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 14 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[50]#011train-rmse:1.21339#011validation-rmse:3.96582\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[51]#011train-rmse:1.19798#011validation-rmse:3.97013\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[52]#011train-rmse:1.17525#011validation-rmse:3.98646\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[53]#011train-rmse:1.16732#011validation-rmse:4.00046\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[54]#011train-rmse:1.1439#011validation-rmse:3.98644\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[55]#011train-rmse:1.13518#011validation-rmse:3.99813\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 12 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[56]#011train-rmse:1.11412#011validation-rmse:4.01411\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 18 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[57]#011train-rmse:1.08082#011validation-rmse:4.018\u001b[0m\n\u001b[34m[10:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[58]#011train-rmse:1.07163#011validation-rmse:4.01367\u001b[0m\n\u001b[34mStopping. Best iteration:\u001b[0m\n\u001b[34m[48]#011train-rmse:1.24627#011validation-rmse:3.93809\n\u001b[0m\n" ] ], [ [ "## Step 5: Test the model\n\nNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. To start with, we need to build a transformer object from our fit model.", "_____no_output_____" ] ], [ [ "xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')", "_____no_output_____" ] ], [ [ "Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, we need to make sure to let SageMaker know how to split our data up into chunks if the entire data set happens to be too large to send to our model all at once.\n\nNote that when we ask SageMaker to do this it will execute the batch transform job in the background. 
Since we need to wait for the results of this job before we can continue, we use the `wait()` method. An added benefit of this is that we get some output from our batch transform job which lets us know if anything went wrong.", "_____no_output_____" ] ], [ [ "xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')", "_____no_output_____" ], [ "xgb_transformer.wait()", "....................\u001b[34mArguments: serve\u001b[0m\n\u001b[34m[2020-06-19 10:11:15 +0000] [1] [INFO] Starting gunicorn 19.7.1\u001b[0m\n\u001b[34m[2020-06-19 10:11:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)\u001b[0m\n\u001b[34m[2020-06-19 10:11:15 +0000] [1] [INFO] Using worker: gevent\u001b[0m\n\u001b[34m[2020-06-19 10:11:15 +0000] [41] [INFO] Booting worker with pid: 41\u001b[0m\n\u001b[34m[2020-06-19 10:11:15 +0000] [42] [INFO] Booting worker with pid: 42\u001b[0m\n\u001b[34m[2020-06-19 10:11:15 +0000] [43] [INFO] Booting worker with pid: 43\u001b[0m\n\u001b[34m[2020-06-19 10:11:15 +0000] [44] [INFO] Booting worker with pid: 44\u001b[0m\n\u001b[34m[2020-06-19:10:11:15:INFO] Model loaded successfully for worker : 41\u001b[0m\n\u001b[34m[2020-06-19:10:11:15:INFO] Model loaded successfully for worker : 42\u001b[0m\n\u001b[34m[2020-06-19:10:11:16:INFO] Model loaded successfully for worker : 44\u001b[0m\n\u001b[34m[2020-06-19:10:11:16:INFO] Model loaded successfully for worker : 43\u001b[0m\n\n\u001b[34m[2020-06-19:10:11:34:INFO] Sniff delimiter as ','\u001b[0m\n\u001b[34m[2020-06-19:10:11:34:INFO] Determined delimiter of CSV input is ','\u001b[0m\n\u001b[32m2020-06-19T10:11:34.609:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD\u001b[0m\n" ] ], [ [ "Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally.", "_____no_output_____" ] ], [ [ "!aws s3 cp --recursive $xgb_transformer.output_path $data_dir", "Completed 2.3 KiB/2.3 KiB (35.6 KiB/s) with 1 file(s) remaining\rdownload: s3://sagemaker-eu-central-1-245452871727/xgboost-2020-06-19-10-08-09-582/test.csv.out to ../data/boston/test.csv.out\r\n" ] ], [ [ "To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.", "_____no_output_____" ] ], [ [ "Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)", "_____no_output_____" ], [ "plt.scatter(Y_test, Y_pred)\nplt.xlabel(\"Median Price\")\nplt.ylabel(\"Predicted Price\")\nplt.title(\"Median Price vs Predicted Price\")", "_____no_output_____" ] ], [ [ "## Optional: Clean up\n\nThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. 
The cell below contains some commands to clean up the created files from within the notebook.", "_____no_output_____" ] ], [ [ "# First we will remove all of the files contained in the data_dir directory\n!rm $data_dir/*\n\n# And then we delete the directory itself\n!rmdir $data_dir", "_____no_output_____" ], [ "data_dir = '../data/aclImdb/train/unsup'\n", "_____no_output_____" ], [ "!rm $data_dir/*\n\n# !rmdir $data_dir", "/bin/sh: /bin/rm: Argument list too long\r\n" ], [ "!ls $data_dir", "0_0.txt 17499_0.txt 25000_0.txt 32500_0.txt 40001_0.txt 47501_0.txt\r\n10000_0.txt 17500_0.txt 2500_0.txt 3250_0.txt 40002_0.txt 47502_0.txt\r\n1000_0.txt 1750_0.txt 25001_0.txt 32501_0.txt 40003_0.txt 47503_0.txt\r\n10001_0.txt 17501_0.txt 25002_0.txt 32502_0.txt 40004_0.txt 47504_0.txt\r\n10002_0.txt 17502_0.txt 25003_0.txt 32503_0.txt 40005_0.txt 47505_0.txt\r\n10003_0.txt 17503_0.txt 25004_0.txt 32504_0.txt 40006_0.txt 47506_0.txt\r\n10004_0.txt 17504_0.txt 25005_0.txt 32505_0.txt 40007_0.txt 47507_0.txt\r\n10005_0.txt 17505_0.txt 25006_0.txt 32506_0.txt 40008_0.txt 47508_0.txt\r\n10006_0.txt 17506_0.txt 25007_0.txt 32507_0.txt 40009_0.txt 47509_0.txt\r\n10007_0.txt 17507_0.txt 25008_0.txt 32508_0.txt 400_0.txt\t 475_0.txt\r\n10008_0.txt 17508_0.txt 25009_0.txt 32509_0.txt 40010_0.txt 47510_0.txt\r\n10009_0.txt 17509_0.txt 250_0.txt 325_0.txt 4001_0.txt\t 4751_0.txt\r\n100_0.txt 175_0.txt\t 25010_0.txt 32510_0.txt 40011_0.txt 47511_0.txt\r\n10010_0.txt 17510_0.txt 2501_0.txt 3251_0.txt 40012_0.txt 47512_0.txt\r\n1001_0.txt 1751_0.txt 25011_0.txt 32511_0.txt 40013_0.txt 47513_0.txt\r\n10011_0.txt 17511_0.txt 25012_0.txt 32512_0.txt 40014_0.txt 47514_0.txt\r\n10012_0.txt 17512_0.txt 25013_0.txt 32513_0.txt 40015_0.txt 47515_0.txt\r\n10013_0.txt 17513_0.txt 25014_0.txt 32514_0.txt 40016_0.txt 47516_0.txt\r\n10014_0.txt 17514_0.txt 25015_0.txt 32515_0.txt 40017_0.txt 47517_0.txt\r\n10015_0.txt 17515_0.txt 25016_0.txt 32516_0.txt 40018_0.txt 47518_0.txt\r\n10016_0.txt 17516_0.txt 25017_0.txt 32517_0.txt 40019_0.txt 47519_0.txt\r\n10017_0.txt 17517_0.txt 25018_0.txt 32518_0.txt 40020_0.txt 47520_0.txt\r\n10018_0.txt 17518_0.txt 25019_0.txt 32519_0.txt 4002_0.txt\t 4752_0.txt\r\n10019_0.txt 17519_0.txt 25020_0.txt 32520_0.txt 40021_0.txt 47521_0.txt\r\n10020_0.txt 17520_0.txt 2502_0.txt 3252_0.txt 40022_0.txt 47522_0.txt\r\n1002_0.txt 1752_0.txt 25021_0.txt 32521_0.txt 40023_0.txt 47523_0.txt\r\n10021_0.txt 17521_0.txt 25022_0.txt 32522_0.txt 40024_0.txt 47524_0.txt\r\n10022_0.txt 17522_0.txt 25023_0.txt 32523_0.txt 40025_0.txt 47525_0.txt\r\n10023_0.txt 17523_0.txt 25024_0.txt 32524_0.txt 40026_0.txt 47526_0.txt\r\n10024_0.txt 17524_0.txt 25025_0.txt 32525_0.txt 40027_0.txt 47527_0.txt\r\n10025_0.txt 17525_0.txt 25026_0.txt 32526_0.txt 40028_0.txt 47528_0.txt\r\n10026_0.txt 17526_0.txt 25027_0.txt 32527_0.txt 40029_0.txt 47529_0.txt\r\n10027_0.txt 17527_0.txt 25028_0.txt 32528_0.txt 40030_0.txt 47530_0.txt\r\n10028_0.txt 17528_0.txt 25029_0.txt 32529_0.txt 4003_0.txt\t 4753_0.txt\r\n10029_0.txt 17529_0.txt 25030_0.txt 32530_0.txt 40031_0.txt 47531_0.txt\r\n10030_0.txt 17530_0.txt 2503_0.txt 3253_0.txt 40032_0.txt 47532_0.txt\r\n1003_0.txt 1753_0.txt 25031_0.txt 32531_0.txt 40033_0.txt 47533_0.txt\r\n10031_0.txt 17531_0.txt 25032_0.txt 32532_0.txt 40034_0.txt 47534_0.txt\r\n10032_0.txt 17532_0.txt 25033_0.txt 32533_0.txt 40035_0.txt 47535_0.txt\r\n10033_0.txt 17533_0.txt 25034_0.txt 32534_0.txt 40036_0.txt 47536_0.txt\r\n10034_0.txt 17534_0.txt 25035_0.txt 32535_0.txt 40037_0.txt 
47537_0.txt\r\n10035_0.txt 17535_0.txt 25036_0.txt 32536_0.txt 40038_0.txt 47538_0.txt\r\n10036_0.txt 17536_0.txt 25037_0.txt 32537_0.txt 40039_0.txt 47539_0.txt\r\n10037_0.txt 17537_0.txt 25038_0.txt 32538_0.txt 40040_0.txt 47540_0.txt\r\n10038_0.txt 17538_0.txt 25039_0.txt 32539_0.txt 4004_0.txt\t 4754_0.txt\r\n10039_0.txt 17539_0.txt 25040_0.txt 32540_0.txt 40041_0.txt 47541_0.txt\r\n10040_0.txt 17540_0.txt 2504_0.txt 3254_0.txt 40042_0.txt 47542_0.txt\r\n1004_0.txt 1754_0.txt 25041_0.txt 32541_0.txt 40043_0.txt 47543_0.txt\r\n10041_0.txt 17541_0.txt 25042_0.txt 32542_0.txt 40044_0.txt 47544_0.txt\r\n10042_0.txt 17542_0.txt 25043_0.txt 32543_0.txt 40045_0.txt 47545_0.txt\r\n10043_0.txt 17543_0.txt 25044_0.txt 32544_0.txt 40046_0.txt 47546_0.txt\r\n10044_0.txt 17544_0.txt 25045_0.txt 32545_0.txt 40047_0.txt 47547_0.txt\r\n10045_0.txt 17545_0.txt 25046_0.txt 32546_0.txt 40048_0.txt 47548_0.txt\r\n10046_0.txt 17546_0.txt 25047_0.txt 32547_0.txt 40049_0.txt 47549_0.txt\r\n10047_0.txt 17547_0.txt 25048_0.txt 32548_0.txt 40050_0.txt 47550_0.txt\r\n10048_0.txt 17548_0.txt 25049_0.txt 32549_0.txt 4005_0.txt\t 4755_0.txt\r\n10049_0.txt 17549_0.txt 25050_0.txt 32550_0.txt 40051_0.txt 47551_0.txt\r\n10050_0.txt 17550_0.txt 2505_0.txt 3255_0.txt 40052_0.txt 47552_0.txt\r\n1005_0.txt 1755_0.txt 25051_0.txt 32551_0.txt 40053_0.txt 47553_0.txt\r\n10051_0.txt 17551_0.txt 25052_0.txt 32552_0.txt 40054_0.txt 47554_0.txt\r\n10052_0.txt 17552_0.txt 25053_0.txt 32553_0.txt 40055_0.txt 47555_0.txt\r\n10053_0.txt 17553_0.txt 25054_0.txt 32554_0.txt 40056_0.txt 47556_0.txt\r\n10054_0.txt 17554_0.txt 25055_0.txt 32555_0.txt 40057_0.txt 47557_0.txt\r\n10055_0.txt 17555_0.txt 25056_0.txt 32556_0.txt 40058_0.txt 47558_0.txt\r\n10056_0.txt 17556_0.txt 25057_0.txt 32557_0.txt 40059_0.txt 47559_0.txt\r\n10057_0.txt 17557_0.txt 25058_0.txt 32558_0.txt 40060_0.txt 47560_0.txt\r\n10058_0.txt 17558_0.txt 25059_0.txt 32559_0.txt 4006_0.txt\t 4756_0.txt\r\n10059_0.txt 17559_0.txt 25060_0.txt 32560_0.txt 40061_0.txt 47561_0.txt\r\n10060_0.txt 17560_0.txt 2506_0.txt 3256_0.txt 40062_0.txt 47562_0.txt\r\n1006_0.txt 1756_0.txt 25061_0.txt 32561_0.txt 40063_0.txt 47563_0.txt\r\n10061_0.txt 17561_0.txt 25062_0.txt 32562_0.txt 40064_0.txt 47564_0.txt\r\n10062_0.txt 17562_0.txt 25063_0.txt 32563_0.txt 40065_0.txt 47565_0.txt\r\n10063_0.txt 17563_0.txt 25064_0.txt 32564_0.txt 40066_0.txt 47566_0.txt\r\n10064_0.txt 17564_0.txt 25065_0.txt 32565_0.txt 40067_0.txt 47567_0.txt\r\n10065_0.txt 17565_0.txt 25066_0.txt 32566_0.txt 40068_0.txt 47568_0.txt\r\n10066_0.txt 17566_0.txt 25067_0.txt 32567_0.txt 40069_0.txt 47569_0.txt\r\n10067_0.txt 17567_0.txt 25068_0.txt 32568_0.txt 40070_0.txt 47570_0.txt\r\n10068_0.txt 17568_0.txt 25069_0.txt 32569_0.txt 4007_0.txt\t 4757_0.txt\r\n10069_0.txt 17569_0.txt 25070_0.txt 32570_0.txt 40071_0.txt 47571_0.txt\r\n10070_0.txt 17570_0.txt 2507_0.txt 3257_0.txt 40072_0.txt 47572_0.txt\r\n1007_0.txt 1757_0.txt 25071_0.txt 32571_0.txt 40073_0.txt 47573_0.txt\r\n10071_0.txt 17571_0.txt 25072_0.txt 32572_0.txt 40074_0.txt 47574_0.txt\r\n10072_0.txt 17572_0.txt 25073_0.txt 32573_0.txt 40075_0.txt 47575_0.txt\r\n10073_0.txt 17573_0.txt 25074_0.txt 32574_0.txt 40076_0.txt 47576_0.txt\r\n10074_0.txt 17574_0.txt 25075_0.txt 32575_0.txt 40077_0.txt 47577_0.txt\r\n10075_0.txt 17575_0.txt 25076_0.txt 32576_0.txt 40078_0.txt 47578_0.txt\r\n10076_0.txt 17576_0.txt 25077_0.txt 32577_0.txt 40079_0.txt 47579_0.txt\r\n10077_0.txt 17577_0.txt 25078_0.txt 32578_0.txt 40080_0.txt 47580_0.txt\r\n10078_0.txt 17578_0.txt 
25079_0.txt 32579_0.txt 4008_0.txt\t 4758_0.txt\r\n10079_0.txt 17579_0.txt 25080_0.txt 32580_0.txt 40081_0.txt 47581_0.txt\r\n10080_0.txt 17580_0.txt 2508_0.txt 3258_0.txt 40082_0.txt 47582_0.txt\r\n1008_0.txt 1758_0.txt 25081_0.txt 32581_0.txt 40083_0.txt 47583_0.txt\r\n10081_0.txt 17581_0.txt 25082_0.txt 32582_0.txt 40084_0.txt 47584_0.txt\r\n10082_0.txt 17582_0.txt 25083_0.txt 32583_0.txt 40085_0.txt 47585_0.txt\r\n10083_0.txt 17583_0.txt 25084_0.txt 32584_0.txt 40086_0.txt 47586_0.txt\r\n10084_0.txt 17584_0.txt 25085_0.txt 32585_0.txt 40087_0.txt 47587_0.txt\r\n10085_0.txt 17585_0.txt 25086_0.txt 32586_0.txt 40088_0.txt 47588_0.txt\r\n10086_0.txt 17586_0.txt 25087_0.txt 32587_0.txt 40089_0.txt 47589_0.txt\r\n10087_0.txt 17587_0.txt 25088_0.txt 32588_0.txt 40090_0.txt 47590_0.txt\r\n10088_0.txt 17588_0.txt 25089_0.txt 32589_0.txt 4009_0.txt\t 4759_0.txt\r\n10089_0.txt 17589_0.txt 25090_0.txt 32590_0.txt 40091_0.txt 47591_0.txt\r\n10090_0.txt 17590_0.txt 2509_0.txt 3259_0.txt 40092_0.txt 47592_0.txt\r\n1009_0.txt 1759_0.txt 25091_0.txt 32591_0.txt 40093_0.txt 47593_0.txt\r\n10091_0.txt 17591_0.txt 25092_0.txt 32592_0.txt 40094_0.txt 47594_0.txt\r\n10092_0.txt 17592_0.txt 25093_0.txt 32593_0.txt 40095_0.txt 47595_0.txt\r\n10093_0.txt 17593_0.txt 25094_0.txt 32594_0.txt 40096_0.txt 47596_0.txt\r\n10094_0.txt 17594_0.txt 25095_0.txt 32595_0.txt 40097_0.txt 47597_0.txt\r\n10095_0.txt 17595_0.txt 25096_0.txt 32596_0.txt 40098_0.txt 47598_0.txt\r\n10096_0.txt 17596_0.txt 25097_0.txt 32597_0.txt 40099_0.txt 47599_0.txt\r\n10097_0.txt 17597_0.txt 25098_0.txt 32598_0.txt 40_0.txt\t 47600_0.txt\r\n10098_0.txt 17598_0.txt 25099_0.txt 32599_0.txt 40100_0.txt 4760_0.txt\r\n10099_0.txt 17599_0.txt 25_0.txt 32600_0.txt 4010_0.txt\t 47601_0.txt\r\n10_0.txt 17600_0.txt 25100_0.txt 3260_0.txt 40101_0.txt 47602_0.txt\r\n10100_0.txt 1760_0.txt 2510_0.txt 32601_0.txt 40102_0.txt 47603_0.txt\r\n1010_0.txt 17601_0.txt 25101_0.txt 32602_0.txt 40103_0.txt 47604_0.txt\r\n10101_0.txt 17602_0.txt 25102_0.txt 32603_0.txt 40104_0.txt 47605_0.txt\r\n10102_0.txt 17603_0.txt 25103_0.txt 32604_0.txt 40105_0.txt 47606_0.txt\r\n10103_0.txt 17604_0.txt 25104_0.txt 32605_0.txt 40106_0.txt 47607_0.txt\r\n10104_0.txt 17605_0.txt 25105_0.txt 32606_0.txt 40107_0.txt 47608_0.txt\r\n10105_0.txt 17606_0.txt 25106_0.txt 32607_0.txt 40108_0.txt 47609_0.txt\r\n10106_0.txt 17607_0.txt 25107_0.txt 32608_0.txt 40109_0.txt 476_0.txt\r\n10107_0.txt 17608_0.txt 25108_0.txt 32609_0.txt 401_0.txt\t 47610_0.txt\r\n10108_0.txt 17609_0.txt 25109_0.txt 326_0.txt 40110_0.txt 4761_0.txt\r\n10109_0.txt 176_0.txt\t 251_0.txt 32610_0.txt 4011_0.txt\t 47611_0.txt\r\n101_0.txt 17610_0.txt 25110_0.txt 3261_0.txt 40111_0.txt 47612_0.txt\r\n10110_0.txt 1761_0.txt 2511_0.txt 32611_0.txt 40112_0.txt 47613_0.txt\r\n1011_0.txt 17611_0.txt 25111_0.txt 32612_0.txt 40113_0.txt 47614_0.txt\r\n10111_0.txt 17612_0.txt 25112_0.txt 32613_0.txt 40114_0.txt 47615_0.txt\r\n10112_0.txt 17613_0.txt 25113_0.txt 32614_0.txt 40115_0.txt 47616_0.txt\r\n10113_0.txt 17614_0.txt 25114_0.txt 32615_0.txt 40116_0.txt 47617_0.txt\r\n10114_0.txt 17615_0.txt 25115_0.txt 32616_0.txt 40117_0.txt 47618_0.txt\r\n10115_0.txt 17616_0.txt 25116_0.txt 32617_0.txt 40118_0.txt 47619_0.txt\r\n10116_0.txt 17617_0.txt 25117_0.txt 32618_0.txt 40119_0.txt 47620_0.txt\r\n10117_0.txt 17618_0.txt 25118_0.txt 32619_0.txt 40120_0.txt 4762_0.txt\r\n10118_0.txt 17619_0.txt 25119_0.txt 32620_0.txt 4012_0.txt\t 47621_0.txt\r\n10119_0.txt 17620_0.txt 25120_0.txt 3262_0.txt 40121_0.txt 
47622_0.txt\r\n10120_0.txt 1762_0.txt 2512_0.txt 32621_0.txt 40122_0.txt 47623_0.txt\r\n1012_0.txt 17621_0.txt 25121_0.txt 32622_0.txt 40123_0.txt 47624_0.txt\r\n10121_0.txt 17622_0.txt 25122_0.txt 32623_0.txt 40124_0.txt 47625_0.txt\r\n10122_0.txt 17623_0.txt 25123_0.txt 32624_0.txt 40125_0.txt 47626_0.txt\r\n10123_0.txt 17624_0.txt 25124_0.txt 32625_0.txt 40126_0.txt 47627_0.txt\r\n10124_0.txt 17625_0.txt 25125_0.txt 32626_0.txt 40127_0.txt 47628_0.txt\r\n10125_0.txt 17626_0.txt 25126_0.txt 32627_0.txt 40128_0.txt 47629_0.txt\r\n10126_0.txt 17627_0.txt 25127_0.txt 32628_0.txt 40129_0.txt 47630_0.txt\r\n10127_0.txt 17628_0.txt 25128_0.txt 32629_0.txt 40130_0.txt 4763_0.txt\r\n10128_0.txt 17629_0.txt 25129_0.txt 32630_0.txt 4013_0.txt\t 47631_0.txt\r\n10129_0.txt 17630_0.txt 25130_0.txt 3263_0.txt 40131_0.txt 47632_0.txt\r\n10130_0.txt 1763_0.txt 2513_0.txt 32631_0.txt 40132_0.txt 47633_0.txt\r\n1013_0.txt 17631_0.txt 25131_0.txt 32632_0.txt 40133_0.txt 47634_0.txt\r\n10131_0.txt 17632_0.txt 25132_0.txt 32633_0.txt 40134_0.txt 47635_0.txt\r\n10132_0.txt 17633_0.txt 25133_0.txt 32634_0.txt 40135_0.txt 47636_0.txt\r\n10133_0.txt 17634_0.txt 25134_0.txt 32635_0.txt 40136_0.txt 47637_0.txt\r\n10134_0.txt 17635_0.txt 25135_0.txt 32636_0.txt 40137_0.txt 47638_0.txt\r\n10135_0.txt 17636_0.txt 25136_0.txt 32637_0.txt 40138_0.txt 47639_0.txt\r\n10136_0.txt 17637_0.txt 25137_0.txt 32638_0.txt 40139_0.txt 47640_0.txt\r\n10137_0.txt 17638_0.txt 25138_0.txt 32639_0.txt 40140_0.txt 4764_0.txt\r\n10138_0.txt 17639_0.txt 25139_0.txt 32640_0.txt 4014_0.txt\t 47641_0.txt\r\n10139_0.txt 17640_0.txt 25140_0.txt 3264_0.txt 40141_0.txt 47642_0.txt\r\n10140_0.txt 1764_0.txt 2514_0.txt 32641_0.txt 40142_0.txt 47643_0.txt\r\n1014_0.txt 17641_0.txt 25141_0.txt 32642_0.txt 40143_0.txt 47644_0.txt\r\n10141_0.txt 17642_0.txt 25142_0.txt 32643_0.txt 40144_0.txt 47645_0.txt\r\n10142_0.txt 17643_0.txt 25143_0.txt 32644_0.txt 40145_0.txt 47646_0.txt\r\n10143_0.txt 17644_0.txt 25144_0.txt 32645_0.txt 40146_0.txt 47647_0.txt\r\n10144_0.txt 17645_0.txt 25145_0.txt 32646_0.txt 40147_0.txt 47648_0.txt\r\n10145_0.txt 17646_0.txt 25146_0.txt 32647_0.txt 40148_0.txt 47649_0.txt\r\n10146_0.txt 17647_0.txt 25147_0.txt 32648_0.txt 40149_0.txt 47650_0.txt\r\n10147_0.txt 17648_0.txt 25148_0.txt 32649_0.txt 40150_0.txt 4765_0.txt\r\n10148_0.txt 17649_0.txt 25149_0.txt 32650_0.txt 4015_0.txt\t 47651_0.txt\r\n10149_0.txt 17650_0.txt 25150_0.txt 3265_0.txt 40151_0.txt 47652_0.txt\r\n10150_0.txt 1765_0.txt 2515_0.txt 32651_0.txt 40152_0.txt 47653_0.txt\r\n1015_0.txt 17651_0.txt 25151_0.txt 32652_0.txt 40153_0.txt 47654_0.txt\r\n10151_0.txt 17652_0.txt 25152_0.txt 32653_0.txt 40154_0.txt 47655_0.txt\r\n10152_0.txt 17653_0.txt 25153_0.txt 32654_0.txt 40155_0.txt 47656_0.txt\r\n10153_0.txt 17654_0.txt 25154_0.txt 32655_0.txt 40156_0.txt 47657_0.txt\r\n10154_0.txt 17655_0.txt 25155_0.txt 32656_0.txt 40157_0.txt 47658_0.txt\r\n10155_0.txt 17656_0.txt 25156_0.txt 32657_0.txt 40158_0.txt 47659_0.txt\r\n10156_0.txt 17657_0.txt 25157_0.txt 32658_0.txt 40159_0.txt 47660_0.txt\r\n10157_0.txt 17658_0.txt 25158_0.txt 32659_0.txt 40160_0.txt 4766_0.txt\r\n10158_0.txt 17659_0.txt 25159_0.txt 32660_0.txt 4016_0.txt\t 47661_0.txt\r\n10159_0.txt 17660_0.txt 25160_0.txt 3266_0.txt 40161_0.txt 47662_0.txt\r\n10160_0.txt 1766_0.txt 2516_0.txt 32661_0.txt 40162_0.txt 47663_0.txt\r\n1016_0.txt 17661_0.txt 25161_0.txt 32662_0.txt 40163_0.txt 47664_0.txt\r\n10161_0.txt 17662_0.txt 25162_0.txt 32663_0.txt 40164_0.txt 47665_0.txt\r\n10162_0.txt 17663_0.txt 
25163_0.txt 32664_0.txt 40165_0.txt 47666_0.txt\r\n10163_0.txt 17664_0.txt 25164_0.txt 32665_0.txt 40166_0.txt 47667_0.txt\r\n10164_0.txt 17665_0.txt 25165_0.txt 32666_0.txt 40167_0.txt 47668_0.txt\r\n10165_0.txt 17666_0.txt 25166_0.txt 32667_0.txt 40168_0.txt 47669_0.txt\r\n10166_0.txt 17667_0.txt 25167_0.txt 32668_0.txt 40169_0.txt 47670_0.txt\r\n10167_0.txt 17668_0.txt 25168_0.txt 32669_0.txt 40170_0.txt 4767_0.txt\r\n10168_0.txt 17669_0.txt 25169_0.txt 32670_0.txt 4017_0.txt\t 47671_0.txt\r\n10169_0.txt 17670_0.txt 25170_0.txt 3267_0.txt 40171_0.txt 47672_0.txt\r\n10170_0.txt 1767_0.txt 2517_0.txt 32671_0.txt 40172_0.txt 47673_0.txt\r\n1017_0.txt 17671_0.txt 25171_0.txt 32672_0.txt 40173_0.txt 47674_0.txt\r\n10171_0.txt 17672_0.txt 25172_0.txt 32673_0.txt 40174_0.txt 47675_0.txt\r\n10172_0.txt 17673_0.txt 25173_0.txt 32674_0.txt 40175_0.txt 47676_0.txt\r\n10173_0.txt 17674_0.txt 25174_0.txt 32675_0.txt 40176_0.txt 47677_0.txt\r\n10174_0.txt 17675_0.txt 25175_0.txt 32676_0.txt 40177_0.txt 47678_0.txt\r\n10175_0.txt 17676_0.txt 25176_0.txt 32677_0.txt 40178_0.txt 47679_0.txt\r\n10176_0.txt 17677_0.txt 25177_0.txt 32678_0.txt 40179_0.txt 47680_0.txt\r\n10177_0.txt 17678_0.txt 25178_0.txt 32679_0.txt 40180_0.txt 4768_0.txt\r\n10178_0.txt 17679_0.txt 25179_0.txt 32680_0.txt 4018_0.txt\t 47681_0.txt\r\n10179_0.txt 17680_0.txt 25180_0.txt 3268_0.txt 40181_0.txt 47682_0.txt\r\n10180_0.txt 1768_0.txt 2518_0.txt 32681_0.txt 40182_0.txt 47683_0.txt\r\n1018_0.txt 17681_0.txt 25181_0.txt 32682_0.txt 40183_0.txt 47684_0.txt\r\n10181_0.txt 17682_0.txt 25182_0.txt 32683_0.txt 40184_0.txt 47685_0.txt\r\n10182_0.txt 17683_0.txt 25183_0.txt 32684_0.txt 40185_0.txt 47686_0.txt\r\n10183_0.txt 17684_0.txt 25184_0.txt 32685_0.txt 40186_0.txt 47687_0.txt\r\n10184_0.txt 17685_0.txt 25185_0.txt 32686_0.txt 40187_0.txt 47688_0.txt\r\n10185_0.txt 17686_0.txt 25186_0.txt 32687_0.txt 40188_0.txt 47689_0.txt\r\n10186_0.txt 17687_0.txt 25187_0.txt 32688_0.txt 40189_0.txt 47690_0.txt\r\n10187_0.txt 17688_0.txt 25188_0.txt 32689_0.txt 40190_0.txt 4769_0.txt\r\n10188_0.txt 17689_0.txt 25189_0.txt 32690_0.txt 4019_0.txt\t 47691_0.txt\r\n10189_0.txt 17690_0.txt 25190_0.txt 3269_0.txt 40191_0.txt 47692_0.txt\r\n10190_0.txt 1769_0.txt 2519_0.txt 32691_0.txt 40192_0.txt 47693_0.txt\r\n1019_0.txt 17691_0.txt 25191_0.txt 32692_0.txt 40193_0.txt 47694_0.txt\r\n10191_0.txt 17692_0.txt 25192_0.txt 32693_0.txt 40194_0.txt 47695_0.txt\r\n10192_0.txt 17693_0.txt 25193_0.txt 32694_0.txt 40195_0.txt 47696_0.txt\r\n10193_0.txt 17694_0.txt 25194_0.txt 32695_0.txt 40196_0.txt 47697_0.txt\r\n10194_0.txt 17695_0.txt 25195_0.txt 32696_0.txt 40197_0.txt 47698_0.txt\r\n10195_0.txt 17696_0.txt 25196_0.txt 32697_0.txt 40198_0.txt 47699_0.txt\r\n10196_0.txt 17697_0.txt 25197_0.txt 32698_0.txt 40199_0.txt 47700_0.txt\r\n10197_0.txt 17698_0.txt 25198_0.txt 32699_0.txt 40200_0.txt 4770_0.txt\r\n10198_0.txt 17699_0.txt 25199_0.txt 32700_0.txt 4020_0.txt\t 47701_0.txt\r\n10199_0.txt 17700_0.txt 25200_0.txt 3270_0.txt 40201_0.txt 47702_0.txt\r\n10200_0.txt 1770_0.txt 2520_0.txt 32701_0.txt 40202_0.txt 47703_0.txt\r\n1020_0.txt 17701_0.txt 25201_0.txt 32702_0.txt 40203_0.txt 47704_0.txt\r\n10201_0.txt 17702_0.txt 25202_0.txt 32703_0.txt 40204_0.txt 47705_0.txt\r\n10202_0.txt 17703_0.txt 25203_0.txt 32704_0.txt 40205_0.txt 47706_0.txt\r\n10203_0.txt 17704_0.txt 25204_0.txt 32705_0.txt 40206_0.txt 47707_0.txt\r\n10204_0.txt 17705_0.txt 25205_0.txt 32706_0.txt 40207_0.txt 47708_0.txt\r\n10205_0.txt 17706_0.txt 25206_0.txt 32707_0.txt 40208_0.txt 
33605_0.txt 41105_0.txt 48607_0.txt\r\n11103_0.txt 18605_0.txt 26105_0.txt 33606_0.txt 41106_0.txt 48608_0.txt\r\n11104_0.txt 18606_0.txt 26106_0.txt 33607_0.txt 41107_0.txt 48609_0.txt\r\n11105_0.txt 18607_0.txt 26107_0.txt 33608_0.txt 41108_0.txt 486_0.txt\r\n11106_0.txt 18608_0.txt 26108_0.txt 33609_0.txt 41109_0.txt 48610_0.txt\r\n11107_0.txt 18609_0.txt 26109_0.txt 336_0.txt 411_0.txt\t 4861_0.txt\r\n11108_0.txt 186_0.txt\t 261_0.txt 33610_0.txt 41110_0.txt 48611_0.txt\r\n11109_0.txt 18610_0.txt 26110_0.txt 3361_0.txt 4111_0.txt\t 48612_0.txt\r\n111_0.txt 1861_0.txt 2611_0.txt 33611_0.txt 41111_0.txt 48613_0.txt\r\n11110_0.txt 18611_0.txt 26111_0.txt 33612_0.txt 41112_0.txt 48614_0.txt\r\n1111_0.txt 18612_0.txt 26112_0.txt 33613_0.txt 41113_0.txt 48615_0.txt\r\n11111_0.txt 18613_0.txt 26113_0.txt 33614_0.txt 41114_0.txt 48616_0.txt\r\n11112_0.txt 18614_0.txt 26114_0.txt 33615_0.txt 41115_0.txt 48617_0.txt\r\n11113_0.txt 18615_0.txt 26115_0.txt 33616_0.txt 41116_0.txt 48618_0.txt\r\n11114_0.txt 18616_0.txt 26116_0.txt 33617_0.txt 41117_0.txt 48619_0.txt\r\n11115_0.txt 18617_0.txt 26117_0.txt 33618_0.txt 41118_0.txt 48620_0.txt\r\n11116_0.txt 18618_0.txt 26118_0.txt 33619_0.txt 41119_0.txt 4862_0.txt\r\n11117_0.txt 18619_0.txt 26119_0.txt 33620_0.txt 41120_0.txt 48621_0.txt\r\n11118_0.txt 18620_0.txt 26120_0.txt 3362_0.txt 4112_0.txt\t 48622_0.txt\r\n11119_0.txt 1862_0.txt 2612_0.txt 33621_0.txt 41121_0.txt 48623_0.txt\r\n11120_0.txt 18621_0.txt 26121_0.txt 33622_0.txt 41122_0.txt 48624_0.txt\r\n1112_0.txt 18622_0.txt 26122_0.txt 33623_0.txt 41123_0.txt 48625_0.txt\r\n11121_0.txt 18623_0.txt 26123_0.txt 33624_0.txt 41124_0.txt 48626_0.txt\r\n11122_0.txt 18624_0.txt 26124_0.txt 33625_0.txt 41125_0.txt 48627_0.txt\r\n11123_0.txt 18625_0.txt 26125_0.txt 33626_0.txt 41126_0.txt 48628_0.txt\r\n11124_0.txt 18626_0.txt 26126_0.txt 33627_0.txt 41127_0.txt 48629_0.txt\r\n11125_0.txt 18627_0.txt 26127_0.txt 33628_0.txt 41128_0.txt 48630_0.txt\r\n11126_0.txt 18628_0.txt 26128_0.txt 33629_0.txt 41129_0.txt 4863_0.txt\r\n11127_0.txt 18629_0.txt 26129_0.txt 33630_0.txt 41130_0.txt 48631_0.txt\r\n11128_0.txt 18630_0.txt 26130_0.txt 3363_0.txt 4113_0.txt\t 48632_0.txt\r\n11129_0.txt 1863_0.txt 2613_0.txt 33631_0.txt 41131_0.txt 48633_0.txt\r\n11130_0.txt 18631_0.txt 26131_0.txt 33632_0.txt 41132_0.txt 48634_0.txt\r\n1113_0.txt 18632_0.txt 26132_0.txt 33633_0.txt 41133_0.txt 48635_0.txt\r\n11131_0.txt 18633_0.txt 26133_0.txt 33634_0.txt 41134_0.txt 48636_0.txt\r\n11132_0.txt 18634_0.txt 26134_0.txt 33635_0.txt 41135_0.txt 48637_0.txt\r\n11133_0.txt 18635_0.txt 26135_0.txt 33636_0.txt 41136_0.txt 48638_0.txt\r\n11134_0.txt 18636_0.txt 26136_0.txt 33637_0.txt 41137_0.txt 48639_0.txt\r\n11135_0.txt 18637_0.txt 26137_0.txt 33638_0.txt 41138_0.txt 48640_0.txt\r\n11136_0.txt 18638_0.txt 26138_0.txt 33639_0.txt 41139_0.txt 4864_0.txt\r\n11137_0.txt 18639_0.txt 26139_0.txt 33640_0.txt 41140_0.txt 48641_0.txt\r\n11138_0.txt 18640_0.txt 26140_0.txt 3364_0.txt 4114_0.txt\t 48642_0.txt\r\n11139_0.txt 1864_0.txt 2614_0.txt 33641_0.txt 41141_0.txt 48643_0.txt\r\n11140_0.txt 18641_0.txt 26141_0.txt 33642_0.txt 41142_0.txt 48644_0.txt\r\n1114_0.txt 18642_0.txt 26142_0.txt 33643_0.txt 41143_0.txt 48645_0.txt\r\n11141_0.txt 18643_0.txt 26143_0.txt 33644_0.txt 41144_0.txt 48646_0.txt\r\n11142_0.txt 18644_0.txt 26144_0.txt 33645_0.txt 41145_0.txt 48647_0.txt\r\n11143_0.txt 18645_0.txt 26145_0.txt 33646_0.txt 41146_0.txt 48648_0.txt\r\n11144_0.txt 18646_0.txt 26146_0.txt 33647_0.txt 41147_0.txt 48649_0.txt\r\n11145_0.txt 
18647_0.txt 26147_0.txt 33648_0.txt 41148_0.txt 48650_0.txt\r\n11146_0.txt 18648_0.txt 26148_0.txt 33649_0.txt 41149_0.txt 4865_0.txt\r\n11147_0.txt 18649_0.txt 26149_0.txt 33650_0.txt 41150_0.txt 48651_0.txt\r\n11148_0.txt 18650_0.txt 26150_0.txt 3365_0.txt 4115_0.txt\t 48652_0.txt\r\n11149_0.txt 1865_0.txt 2615_0.txt 33651_0.txt 41151_0.txt 48653_0.txt\r\n11150_0.txt 18651_0.txt 26151_0.txt 33652_0.txt 41152_0.txt 48654_0.txt\r\n1115_0.txt 18652_0.txt 26152_0.txt 33653_0.txt 41153_0.txt 48655_0.txt\r\n11151_0.txt 18653_0.txt 26153_0.txt 33654_0.txt 41154_0.txt 48656_0.txt\r\n11152_0.txt 18654_0.txt 26154_0.txt 33655_0.txt 41155_0.txt 48657_0.txt\r\n11153_0.txt 18655_0.txt 26155_0.txt 33656_0.txt 41156_0.txt 48658_0.txt\r\n11154_0.txt 18656_0.txt 26156_0.txt 33657_0.txt 41157_0.txt 48659_0.txt\r\n11155_0.txt 18657_0.txt 26157_0.txt 33658_0.txt 41158_0.txt 48660_0.txt\r\n11156_0.txt 18658_0.txt 26158_0.txt 33659_0.txt 41159_0.txt 4866_0.txt\r\n11157_0.txt 18659_0.txt 26159_0.txt 33660_0.txt 41160_0.txt 48661_0.txt\r\n11158_0.txt 18660_0.txt 26160_0.txt 3366_0.txt 4116_0.txt\t 48662_0.txt\r\n11159_0.txt 1866_0.txt 2616_0.txt 33661_0.txt 41161_0.txt 48663_0.txt\r\n11160_0.txt 18661_0.txt 26161_0.txt 33662_0.txt 41162_0.txt 48664_0.txt\r\n1116_0.txt 18662_0.txt 26162_0.txt 33663_0.txt 41163_0.txt 48665_0.txt\r\n11161_0.txt 18663_0.txt 26163_0.txt 33664_0.txt 41164_0.txt 48666_0.txt\r\n11162_0.txt 18664_0.txt 26164_0.txt 33665_0.txt 41165_0.txt 48667_0.txt\r\n11163_0.txt 18665_0.txt 26165_0.txt 33666_0.txt 41166_0.txt 48668_0.txt\r\n11164_0.txt 18666_0.txt 26166_0.txt 33667_0.txt 41167_0.txt 48669_0.txt\r\n11165_0.txt 18667_0.txt 26167_0.txt 33668_0.txt 41168_0.txt 48670_0.txt\r\n11166_0.txt 18668_0.txt 26168_0.txt 33669_0.txt 41169_0.txt 4867_0.txt\r\n11167_0.txt 18669_0.txt 26169_0.txt 33670_0.txt 41170_0.txt 48671_0.txt\r\n11168_0.txt 18670_0.txt 26170_0.txt 3367_0.txt 4117_0.txt\t 48672_0.txt\r\n11169_0.txt 1867_0.txt 2617_0.txt 33671_0.txt 41171_0.txt 48673_0.txt\r\n11170_0.txt 18671_0.txt 26171_0.txt 33672_0.txt 41172_0.txt 48674_0.txt\r\n1117_0.txt 18672_0.txt 26172_0.txt 33673_0.txt 41173_0.txt 48675_0.txt\r\n11171_0.txt 18673_0.txt 26173_0.txt 33674_0.txt 41174_0.txt 48676_0.txt\r\n11172_0.txt 18674_0.txt 26174_0.txt 33675_0.txt 41175_0.txt 48677_0.txt\r\n11173_0.txt 18675_0.txt 26175_0.txt 33676_0.txt 41176_0.txt 48678_0.txt\r\n11174_0.txt 18676_0.txt 26176_0.txt 33677_0.txt 41177_0.txt 48679_0.txt\r\n11175_0.txt 18677_0.txt 26177_0.txt 33678_0.txt 41178_0.txt 48680_0.txt\r\n11176_0.txt 18678_0.txt 26178_0.txt 33679_0.txt 41179_0.txt 4868_0.txt\r\n11177_0.txt 18679_0.txt 26179_0.txt 33680_0.txt 41180_0.txt 48681_0.txt\r\n11178_0.txt 18680_0.txt 26180_0.txt 3368_0.txt 4118_0.txt\t 48682_0.txt\r\n11179_0.txt 1868_0.txt 2618_0.txt 33681_0.txt 41181_0.txt 48683_0.txt\r\n11180_0.txt 18681_0.txt 26181_0.txt 33682_0.txt 41182_0.txt 48684_0.txt\r\n1118_0.txt 18682_0.txt 26182_0.txt 33683_0.txt 41183_0.txt 48685_0.txt\r\n11181_0.txt 18683_0.txt 26183_0.txt 33684_0.txt 41184_0.txt 48686_0.txt\r\n11182_0.txt 18684_0.txt 26184_0.txt 33685_0.txt 41185_0.txt 48687_0.txt\r\n11183_0.txt 18685_0.txt 26185_0.txt 33686_0.txt 41186_0.txt 48688_0.txt\r\n11184_0.txt 18686_0.txt 26186_0.txt 33687_0.txt 41187_0.txt 48689_0.txt\r\n11185_0.txt 18687_0.txt 26187_0.txt 33688_0.txt 41188_0.txt 48690_0.txt\r\n11186_0.txt 18688_0.txt 26188_0.txt 33689_0.txt 41189_0.txt 4869_0.txt\r\n11187_0.txt 18689_0.txt 26189_0.txt 33690_0.txt 41190_0.txt 48691_0.txt\r\n11188_0.txt 18690_0.txt 26190_0.txt 3369_0.txt 
4119_0.txt\t 48692_0.txt\r\n11189_0.txt 1869_0.txt 2619_0.txt 33691_0.txt 41191_0.txt 48693_0.txt\r\n11190_0.txt 18691_0.txt 26191_0.txt 33692_0.txt 41192_0.txt 48694_0.txt\r\n1119_0.txt 18692_0.txt 26192_0.txt 33693_0.txt 41193_0.txt 48695_0.txt\r\n11191_0.txt 18693_0.txt 26193_0.txt 33694_0.txt 41194_0.txt 48696_0.txt\r\n11192_0.txt 18694_0.txt 26194_0.txt 33695_0.txt 41195_0.txt 48697_0.txt\r\n11193_0.txt 18695_0.txt 26195_0.txt 33696_0.txt 41196_0.txt 48698_0.txt\r\n11194_0.txt 18696_0.txt 26196_0.txt 33697_0.txt 41197_0.txt 48699_0.txt\r\n11195_0.txt 18697_0.txt 26197_0.txt 33698_0.txt 41198_0.txt 48700_0.txt\r\n11196_0.txt 18698_0.txt 26198_0.txt 33699_0.txt 41199_0.txt 4870_0.txt\r\n11197_0.txt 18699_0.txt 26199_0.txt 33700_0.txt 41200_0.txt 48701_0.txt\r\n11198_0.txt 18700_0.txt 26200_0.txt 3370_0.txt 4120_0.txt\t 48702_0.txt\r\n11199_0.txt 1870_0.txt 2620_0.txt 33701_0.txt 41201_0.txt 48703_0.txt\r\n11200_0.txt 18701_0.txt 26201_0.txt 33702_0.txt 41202_0.txt 48704_0.txt\r\n1120_0.txt 18702_0.txt 26202_0.txt 33703_0.txt 41203_0.txt 48705_0.txt\r\n11201_0.txt 18703_0.txt 26203_0.txt 33704_0.txt 41204_0.txt 48706_0.txt\r\n11202_0.txt 18704_0.txt 26204_0.txt 33705_0.txt 41205_0.txt 48707_0.txt\r\n11203_0.txt 18705_0.txt 26205_0.txt 33706_0.txt 41206_0.txt 48708_0.txt\r\n11204_0.txt 18706_0.txt 26206_0.txt 33707_0.txt 41207_0.txt 48709_0.txt\r\n11205_0.txt 18707_0.txt 26207_0.txt 33708_0.txt 41208_0.txt 487_0.txt\r\n11206_0.txt 18708_0.txt 26208_0.txt 33709_0.txt 41209_0.txt 48710_0.txt\r\n11207_0.txt 18709_0.txt 26209_0.txt 337_0.txt 412_0.txt\t 4871_0.txt\r\n11208_0.txt 187_0.txt\t 262_0.txt 33710_0.txt 41210_0.txt 48711_0.txt\r\n11209_0.txt 18710_0.txt 26210_0.txt 3371_0.txt 4121_0.txt\t 48712_0.txt\r\n112_0.txt 1871_0.txt 2621_0.txt 33711_0.txt 41211_0.txt 48713_0.txt\r\n11210_0.txt 18711_0.txt 26211_0.txt 33712_0.txt 41212_0.txt 48714_0.txt\r\n1121_0.txt 18712_0.txt 26212_0.txt 33713_0.txt 41213_0.txt 48715_0.txt\r\n11211_0.txt 18713_0.txt 26213_0.txt 33714_0.txt 41214_0.txt 48716_0.txt\r\n11212_0.txt 18714_0.txt 26214_0.txt 33715_0.txt 41215_0.txt 48717_0.txt\r\n11213_0.txt 18715_0.txt 26215_0.txt 33716_0.txt 41216_0.txt 48718_0.txt\r\n11214_0.txt 18716_0.txt 26216_0.txt 33717_0.txt 41217_0.txt 48719_0.txt\r\n11215_0.txt 18717_0.txt 26217_0.txt 33718_0.txt 41218_0.txt 48720_0.txt\r\n11216_0.txt 18718_0.txt 26218_0.txt 33719_0.txt 41219_0.txt 4872_0.txt\r\n11217_0.txt 18719_0.txt 26219_0.txt 33720_0.txt 41220_0.txt 48721_0.txt\r\n11218_0.txt 18720_0.txt 26220_0.txt 3372_0.txt 4122_0.txt\t 48722_0.txt\r\n11219_0.txt 1872_0.txt 2622_0.txt 33721_0.txt 41221_0.txt 48723_0.txt\r\n11220_0.txt 18721_0.txt 26221_0.txt 33722_0.txt 41222_0.txt 48724_0.txt\r\n1122_0.txt 18722_0.txt 26222_0.txt 33723_0.txt 41223_0.txt 48725_0.txt\r\n11221_0.txt 18723_0.txt 26223_0.txt 33724_0.txt 41224_0.txt 48726_0.txt\r\n11222_0.txt 18724_0.txt 26224_0.txt 33725_0.txt 41225_0.txt 48727_0.txt\r\n11223_0.txt 18725_0.txt 26225_0.txt 33726_0.txt 41226_0.txt 48728_0.txt\r\n11224_0.txt 18726_0.txt 26226_0.txt 33727_0.txt 41227_0.txt 48729_0.txt\r\n11225_0.txt 18727_0.txt 26227_0.txt 33728_0.txt 41228_0.txt 48730_0.txt\r\n11226_0.txt 18728_0.txt 26228_0.txt 33729_0.txt 41229_0.txt 4873_0.txt\r\n11227_0.txt 18729_0.txt 26229_0.txt 33730_0.txt 41230_0.txt 48731_0.txt\r\n11228_0.txt 18730_0.txt 26230_0.txt 3373_0.txt 4123_0.txt\t 48732_0.txt\r\n11229_0.txt 1873_0.txt 2623_0.txt 33731_0.txt 41231_0.txt 48733_0.txt\r\n11230_0.txt 18731_0.txt 26231_0.txt 33732_0.txt 41232_0.txt 48734_0.txt\r\n1123_0.txt 18732_0.txt 
26232_0.txt 33733_0.txt 41233_0.txt 48735_0.txt\r\n11231_0.txt 18733_0.txt 26233_0.txt 33734_0.txt 41234_0.txt 48736_0.txt\r\n11232_0.txt 18734_0.txt 26234_0.txt 33735_0.txt 41235_0.txt 48737_0.txt\r\n11233_0.txt 18735_0.txt 26235_0.txt 33736_0.txt 41236_0.txt 48738_0.txt\r\n11234_0.txt 18736_0.txt 26236_0.txt 33737_0.txt 41237_0.txt 48739_0.txt\r\n11235_0.txt 18737_0.txt 26237_0.txt 33738_0.txt 41238_0.txt 48740_0.txt\r\n11236_0.txt 18738_0.txt 26238_0.txt 33739_0.txt 41239_0.txt 4874_0.txt\r\n11237_0.txt 18739_0.txt 26239_0.txt 33740_0.txt 41240_0.txt 48741_0.txt\r\n11238_0.txt 18740_0.txt 26240_0.txt 3374_0.txt 4124_0.txt\t 48742_0.txt\r\n11239_0.txt 1874_0.txt 2624_0.txt 33741_0.txt 41241_0.txt 48743_0.txt\r\n11240_0.txt 18741_0.txt 26241_0.txt 33742_0.txt 41242_0.txt 48744_0.txt\r\n1124_0.txt 18742_0.txt 26242_0.txt 33743_0.txt 41243_0.txt 48745_0.txt\r\n11241_0.txt 18743_0.txt 26243_0.txt 33744_0.txt 41244_0.txt 48746_0.txt\r\n11242_0.txt 18744_0.txt 26244_0.txt 33745_0.txt 41245_0.txt 48747_0.txt\r\n11243_0.txt 18745_0.txt 26245_0.txt 33746_0.txt 41246_0.txt 48748_0.txt\r\n11244_0.txt 18746_0.txt 26246_0.txt 33747_0.txt 41247_0.txt 48749_0.txt\r\n11245_0.txt 18747_0.txt 26247_0.txt 33748_0.txt 41248_0.txt 48750_0.txt\r\n11246_0.txt 18748_0.txt 26248_0.txt 33749_0.txt 41249_0.txt 4875_0.txt\r\n11247_0.txt 18749_0.txt 26249_0.txt 33750_0.txt 41250_0.txt 48751_0.txt\r\n11248_0.txt 18750_0.txt 26250_0.txt 3375_0.txt 4125_0.txt\t 48752_0.txt\r\n11249_0.txt 1875_0.txt 2625_0.txt 33751_0.txt 41251_0.txt 48753_0.txt\r\n11250_0.txt 18751_0.txt 26251_0.txt 33752_0.txt 41252_0.txt 48754_0.txt\r\n1125_0.txt 18752_0.txt 26252_0.txt 33753_0.txt 41253_0.txt 48755_0.txt\r\n11251_0.txt 18753_0.txt 26253_0.txt 33754_0.txt 41254_0.txt 48756_0.txt\r\n11252_0.txt 18754_0.txt 26254_0.txt 33755_0.txt 41255_0.txt 48757_0.txt\r\n11253_0.txt 18755_0.txt 26255_0.txt 33756_0.txt 41256_0.txt 48758_0.txt\r\n11254_0.txt 18756_0.txt 26256_0.txt 33757_0.txt 41257_0.txt 48759_0.txt\r\n11255_0.txt 18757_0.txt 26257_0.txt 33758_0.txt 41258_0.txt 48760_0.txt\r\n11256_0.txt 18758_0.txt 26258_0.txt 33759_0.txt 41259_0.txt 4876_0.txt\r\n11257_0.txt 18759_0.txt 26259_0.txt 33760_0.txt 41260_0.txt 48761_0.txt\r\n11258_0.txt 18760_0.txt 26260_0.txt 3376_0.txt 4126_0.txt\t 48762_0.txt\r\n11259_0.txt 1876_0.txt 2626_0.txt 33761_0.txt 41261_0.txt 48763_0.txt\r\n11260_0.txt 18761_0.txt 26261_0.txt 33762_0.txt 41262_0.txt 48764_0.txt\r\n1126_0.txt 18762_0.txt 26262_0.txt 33763_0.txt 41263_0.txt 48765_0.txt\r\n11261_0.txt 18763_0.txt 26263_0.txt 33764_0.txt 41264_0.txt 48766_0.txt\r\n11262_0.txt 18764_0.txt 26264_0.txt 33765_0.txt 41265_0.txt 48767_0.txt\r\n11263_0.txt 18765_0.txt 26265_0.txt 33766_0.txt 41266_0.txt 48768_0.txt\r\n11264_0.txt 18766_0.txt 26266_0.txt 33767_0.txt 41267_0.txt 48769_0.txt\r\n11265_0.txt 18767_0.txt 26267_0.txt 33768_0.txt 41268_0.txt 48770_0.txt\r\n11266_0.txt 18768_0.txt 26268_0.txt 33769_0.txt 41269_0.txt 4877_0.txt\r\n11267_0.txt 18769_0.txt 26269_0.txt 33770_0.txt 41270_0.txt 48771_0.txt\r\n11268_0.txt 18770_0.txt 26270_0.txt 3377_0.txt 4127_0.txt\t 48772_0.txt\r\n11269_0.txt 1877_0.txt 2627_0.txt 33771_0.txt 41271_0.txt 48773_0.txt\r\n11270_0.txt 18771_0.txt 26271_0.txt 33772_0.txt 41272_0.txt 48774_0.txt\r\n1127_0.txt 18772_0.txt 26272_0.txt 33773_0.txt 41273_0.txt 48775_0.txt\r\n11271_0.txt 18773_0.txt 26273_0.txt 33774_0.txt 41274_0.txt 48776_0.txt\r\n11272_0.txt 18774_0.txt 26274_0.txt 33775_0.txt 41275_0.txt 48777_0.txt\r\n11273_0.txt 18775_0.txt 26275_0.txt 33776_0.txt 41276_0.txt 
48778_0.txt\r\n11274_0.txt 18776_0.txt 26276_0.txt 33777_0.txt 41277_0.txt 48779_0.txt\r\n11275_0.txt 18777_0.txt 26277_0.txt 33778_0.txt 41278_0.txt 48780_0.txt\r\n11276_0.txt 18778_0.txt 26278_0.txt 33779_0.txt 41279_0.txt 4878_0.txt\r\n11277_0.txt 18779_0.txt 26279_0.txt 33780_0.txt 41280_0.txt 48781_0.txt\r\n11278_0.txt 18780_0.txt 26280_0.txt 3378_0.txt 4128_0.txt\t 48782_0.txt\r\n11279_0.txt 1878_0.txt 2628_0.txt 33781_0.txt 41281_0.txt 48783_0.txt\r\n11280_0.txt 18781_0.txt 26281_0.txt 33782_0.txt 41282_0.txt 48784_0.txt\r\n1128_0.txt 18782_0.txt 26282_0.txt 33783_0.txt 41283_0.txt 48785_0.txt\r\n11281_0.txt 18783_0.txt 26283_0.txt 33784_0.txt 41284_0.txt 48786_0.txt\r\n11282_0.txt 18784_0.txt 26284_0.txt 33785_0.txt 41285_0.txt 48787_0.txt\r\n11283_0.txt 18785_0.txt 26285_0.txt 33786_0.txt 41286_0.txt 48788_0.txt\r\n11284_0.txt 18786_0.txt 26286_0.txt 33787_0.txt 41287_0.txt 48789_0.txt\r\n11285_0.txt 18787_0.txt 26287_0.txt 33788_0.txt 41288_0.txt 48790_0.txt\r\n11286_0.txt 18788_0.txt 26288_0.txt 33789_0.txt 41289_0.txt 4879_0.txt\r\n11287_0.txt 18789_0.txt 26289_0.txt 33790_0.txt 41290_0.txt 48791_0.txt\r\n11288_0.txt 18790_0.txt 26290_0.txt 3379_0.txt 4129_0.txt\t 48792_0.txt\r\n11289_0.txt 1879_0.txt 2629_0.txt 33791_0.txt 41291_0.txt 48793_0.txt\r\n11290_0.txt 18791_0.txt 26291_0.txt 33792_0.txt 41292_0.txt 48794_0.txt\r\n1129_0.txt 18792_0.txt 26292_0.txt 33793_0.txt 41293_0.txt 48795_0.txt\r\n11291_0.txt 18793_0.txt 26293_0.txt 33794_0.txt 41294_0.txt 48796_0.txt\r\n11292_0.txt 18794_0.txt 26294_0.txt 33795_0.txt 41295_0.txt 48797_0.txt\r\n11293_0.txt 18795_0.txt 26295_0.txt 33796_0.txt 41296_0.txt 48798_0.txt\r\n11294_0.txt 18796_0.txt 26296_0.txt 33797_0.txt 41297_0.txt 48799_0.txt\r\n11295_0.txt 18797_0.txt 26297_0.txt 33798_0.txt 41298_0.txt 48800_0.txt\r\n11296_0.txt 18798_0.txt 26298_0.txt 33799_0.txt 41299_0.txt 4880_0.txt\r\n11297_0.txt 18799_0.txt 26299_0.txt 33800_0.txt 41300_0.txt 48801_0.txt\r\n11298_0.txt 18800_0.txt 26300_0.txt 3380_0.txt 4130_0.txt\t 48802_0.txt\r\n11299_0.txt 1880_0.txt 2630_0.txt 33801_0.txt 41301_0.txt 48803_0.txt\r\n11300_0.txt 18801_0.txt 26301_0.txt 33802_0.txt 41302_0.txt 48804_0.txt\r\n1130_0.txt 18802_0.txt 26302_0.txt 33803_0.txt 41303_0.txt 48805_0.txt\r\n11301_0.txt 18803_0.txt 26303_0.txt 33804_0.txt 41304_0.txt 48806_0.txt\r\n11302_0.txt 18804_0.txt 26304_0.txt 33805_0.txt 41305_0.txt 48807_0.txt\r\n11303_0.txt 18805_0.txt 26305_0.txt 33806_0.txt 41306_0.txt 48808_0.txt\r\n11304_0.txt 18806_0.txt 26306_0.txt 33807_0.txt 41307_0.txt 48809_0.txt\r\n11305_0.txt 18807_0.txt 26307_0.txt 33808_0.txt 41308_0.txt 488_0.txt\r\n11306_0.txt 18808_0.txt 26308_0.txt 33809_0.txt 41309_0.txt 48810_0.txt\r\n11307_0.txt 18809_0.txt 26309_0.txt 338_0.txt 413_0.txt\t 4881_0.txt\r\n11308_0.txt 188_0.txt\t 263_0.txt 33810_0.txt 41310_0.txt 48811_0.txt\r\n11309_0.txt 18810_0.txt 26310_0.txt 3381_0.txt 4131_0.txt\t 48812_0.txt\r\n113_0.txt 1881_0.txt 2631_0.txt 33811_0.txt 41311_0.txt 48813_0.txt\r\n11310_0.txt 18811_0.txt 26311_0.txt 33812_0.txt 41312_0.txt 48814_0.txt\r\n1131_0.txt 18812_0.txt 26312_0.txt 33813_0.txt 41313_0.txt 48815_0.txt\r\n11311_0.txt 18813_0.txt 26313_0.txt 33814_0.txt 41314_0.txt 48816_0.txt\r\n11312_0.txt 18814_0.txt 26314_0.txt 33815_0.txt 41315_0.txt 48817_0.txt\r\n11313_0.txt 18815_0.txt 26315_0.txt 33816_0.txt 41316_0.txt 48818_0.txt\r\n11314_0.txt 18816_0.txt 26316_0.txt 33817_0.txt 41317_0.txt 48819_0.txt\r\n11315_0.txt 18817_0.txt 26317_0.txt 33818_0.txt 41318_0.txt 48820_0.txt\r\n11316_0.txt 18818_0.txt 26318_0.txt 
33819_0.txt 41319_0.txt 4882_0.txt\r\n11317_0.txt 18819_0.txt 26319_0.txt 33820_0.txt 41320_0.txt 48821_0.txt\r\n11318_0.txt 18820_0.txt 26320_0.txt 3382_0.txt 4132_0.txt\t 48822_0.txt\r\n11319_0.txt 1882_0.txt 2632_0.txt 33821_0.txt 41321_0.txt 48823_0.txt\r\n11320_0.txt 18821_0.txt 26321_0.txt 33822_0.txt 41322_0.txt 48824_0.txt\r\n1132_0.txt 18822_0.txt 26322_0.txt 33823_0.txt 41323_0.txt 48825_0.txt\r\n11321_0.txt 18823_0.txt 26323_0.txt 33824_0.txt 41324_0.txt 48826_0.txt\r\n11322_0.txt 18824_0.txt 26324_0.txt 33825_0.txt 41325_0.txt 48827_0.txt\r\n11323_0.txt 18825_0.txt 26325_0.txt 33826_0.txt 41326_0.txt 48828_0.txt\r\n11324_0.txt 18826_0.txt 26326_0.txt 33827_0.txt 41327_0.txt 48829_0.txt\r\n11325_0.txt 18827_0.txt 26327_0.txt 33828_0.txt 41328_0.txt 48830_0.txt\r\n11326_0.txt 18828_0.txt 26328_0.txt 33829_0.txt 41329_0.txt 4883_0.txt\r\n11327_0.txt 18829_0.txt 26329_0.txt 33830_0.txt 41330_0.txt 48831_0.txt\r\n11328_0.txt 18830_0.txt 26330_0.txt 3383_0.txt 4133_0.txt\t 48832_0.txt\r\n11329_0.txt 1883_0.txt 2633_0.txt 33831_0.txt 41331_0.txt 48833_0.txt\r\n11330_0.txt 18831_0.txt 26331_0.txt 33832_0.txt 41332_0.txt 48834_0.txt\r\n1133_0.txt 18832_0.txt 26332_0.txt 33833_0.txt 41333_0.txt 48835_0.txt\r\n11331_0.txt 18833_0.txt 26333_0.txt 33834_0.txt 41334_0.txt 48836_0.txt\r\n11332_0.txt 18834_0.txt 26334_0.txt 33835_0.txt 41335_0.txt 48837_0.txt\r\n11333_0.txt 18835_0.txt 26335_0.txt 33836_0.txt 41336_0.txt 48838_0.txt\r\n11334_0.txt 18836_0.txt 26336_0.txt 33837_0.txt 41337_0.txt 48839_0.txt\r\n11335_0.txt 18837_0.txt 26337_0.txt 33838_0.txt 41338_0.txt 48840_0.txt\r\n11336_0.txt 18838_0.txt 26338_0.txt 33839_0.txt 41339_0.txt 4884_0.txt\r\n11337_0.txt 18839_0.txt 26339_0.txt 33840_0.txt 41340_0.txt 48841_0.txt\r\n11338_0.txt 18840_0.txt 26340_0.txt 3384_0.txt 4134_0.txt\t 48842_0.txt\r\n11339_0.txt 1884_0.txt 2634_0.txt 33841_0.txt 41341_0.txt 48843_0.txt\r\n11340_0.txt 18841_0.txt 26341_0.txt 33842_0.txt 41342_0.txt 48844_0.txt\r\n1134_0.txt 18842_0.txt 26342_0.txt 33843_0.txt 41343_0.txt 48845_0.txt\r\n11341_0.txt 18843_0.txt 26343_0.txt 33844_0.txt 41344_0.txt 48846_0.txt\r\n11342_0.txt 18844_0.txt 26344_0.txt 33845_0.txt 41345_0.txt 48847_0.txt\r\n11343_0.txt 18845_0.txt 26345_0.txt 33846_0.txt 41346_0.txt 48848_0.txt\r\n11344_0.txt 18846_0.txt 26346_0.txt 33847_0.txt 41347_0.txt 48849_0.txt\r\n11345_0.txt 18847_0.txt 26347_0.txt 33848_0.txt 41348_0.txt 48850_0.txt\r\n11346_0.txt 18848_0.txt 26348_0.txt 33849_0.txt 41349_0.txt 4885_0.txt\r\n11347_0.txt 18849_0.txt 26349_0.txt 33850_0.txt 41350_0.txt 48851_0.txt\r\n11348_0.txt 18850_0.txt 26350_0.txt 3385_0.txt 4135_0.txt\t 48852_0.txt\r\n11349_0.txt 1885_0.txt 2635_0.txt 33851_0.txt 41351_0.txt 48853_0.txt\r\n11350_0.txt 18851_0.txt 26351_0.txt 33852_0.txt 41352_0.txt 48854_0.txt\r\n1135_0.txt 18852_0.txt 26352_0.txt 33853_0.txt 41353_0.txt 48855_0.txt\r\n11351_0.txt 18853_0.txt 26353_0.txt 33854_0.txt 41354_0.txt 48856_0.txt\r\n11352_0.txt 18854_0.txt 26354_0.txt 33855_0.txt 41355_0.txt 48857_0.txt\r\n11353_0.txt 18855_0.txt 26355_0.txt 33856_0.txt 41356_0.txt 48858_0.txt\r\n11354_0.txt 18856_0.txt 26356_0.txt 33857_0.txt 41357_0.txt 48859_0.txt\r\n11355_0.txt 18857_0.txt 26357_0.txt 33858_0.txt 41358_0.txt 48860_0.txt\r\n11356_0.txt 18858_0.txt 26358_0.txt 33859_0.txt 41359_0.txt 4886_0.txt\r\n11357_0.txt 18859_0.txt 26359_0.txt 33860_0.txt 41360_0.txt 48861_0.txt\r\n11358_0.txt 18860_0.txt 26360_0.txt 3386_0.txt 4136_0.txt\t 48862_0.txt\r\n11359_0.txt 1886_0.txt 2636_0.txt 33861_0.txt 41361_0.txt 
48863_0.txt\r\n11360_0.txt 18861_0.txt 26361_0.txt 33862_0.txt 41362_0.txt 48864_0.txt\r\n1136_0.txt 18862_0.txt 26362_0.txt 33863_0.txt 41363_0.txt 48865_0.txt\r\n11361_0.txt 18863_0.txt 26363_0.txt 33864_0.txt 41364_0.txt 48866_0.txt\r\n11362_0.txt 18864_0.txt 26364_0.txt 33865_0.txt 41365_0.txt 48867_0.txt\r\n11363_0.txt 18865_0.txt 26365_0.txt 33866_0.txt 41366_0.txt 48868_0.txt\r\n11364_0.txt 18866_0.txt 26366_0.txt 33867_0.txt 41367_0.txt 48869_0.txt\r\n11365_0.txt 18867_0.txt 26367_0.txt 33868_0.txt 41368_0.txt 48870_0.txt\r\n11366_0.txt 18868_0.txt 26368_0.txt 33869_0.txt 41369_0.txt 4887_0.txt\r\n11367_0.txt 18869_0.txt 26369_0.txt 33870_0.txt 41370_0.txt 48871_0.txt\r\n11368_0.txt 18870_0.txt 26370_0.txt 3387_0.txt 4137_0.txt\t 48872_0.txt\r\n11369_0.txt 1887_0.txt 2637_0.txt 33871_0.txt 41371_0.txt 48873_0.txt\r\n11370_0.txt 18871_0.txt 26371_0.txt 33872_0.txt 41372_0.txt 48874_0.txt\r\n1137_0.txt 18872_0.txt 26372_0.txt 33873_0.txt 41373_0.txt 48875_0.txt\r\n11371_0.txt 18873_0.txt 26373_0.txt 33874_0.txt 41374_0.txt 48876_0.txt\r\n11372_0.txt 18874_0.txt 26374_0.txt 33875_0.txt 41375_0.txt 48877_0.txt\r\n11373_0.txt 18875_0.txt 26375_0.txt 33876_0.txt 41376_0.txt 48878_0.txt\r\n11374_0.txt 18876_0.txt 26376_0.txt 33877_0.txt 41377_0.txt 48879_0.txt\r\n11375_0.txt 18877_0.txt 26377_0.txt 33878_0.txt 41378_0.txt 48880_0.txt\r\n11376_0.txt 18878_0.txt 26378_0.txt 33879_0.txt 41379_0.txt 4888_0.txt\r\n11377_0.txt 18879_0.txt 26379_0.txt 33880_0.txt 41380_0.txt 48881_0.txt\r\n11378_0.txt 18880_0.txt 26380_0.txt 3388_0.txt 4138_0.txt\t 48882_0.txt\r\n11379_0.txt 1888_0.txt 2638_0.txt 33881_0.txt 41381_0.txt 48883_0.txt\r\n11380_0.txt 18881_0.txt 26381_0.txt 33882_0.txt 41382_0.txt 48884_0.txt\r\n1138_0.txt 18882_0.txt 26382_0.txt 33883_0.txt 41383_0.txt 48885_0.txt\r\n11381_0.txt 18883_0.txt 26383_0.txt 33884_0.txt 41384_0.txt 48886_0.txt\r\n11382_0.txt 18884_0.txt 26384_0.txt 33885_0.txt 41385_0.txt 48887_0.txt\r\n11383_0.txt 18885_0.txt 26385_0.txt 33886_0.txt 41386_0.txt 48888_0.txt\r\n11384_0.txt 18886_0.txt 26386_0.txt 33887_0.txt 41387_0.txt 48889_0.txt\r\n11385_0.txt 18887_0.txt 26387_0.txt 33888_0.txt 41388_0.txt 48890_0.txt\r\n11386_0.txt 18888_0.txt 26388_0.txt 33889_0.txt 41389_0.txt 4889_0.txt\r\n11387_0.txt 18889_0.txt 26389_0.txt 33890_0.txt 41390_0.txt 48891_0.txt\r\n11388_0.txt 18890_0.txt 26390_0.txt 3389_0.txt 4139_0.txt\t 48892_0.txt\r\n11389_0.txt 1889_0.txt 2639_0.txt 33891_0.txt 41391_0.txt 48893_0.txt\r\n11390_0.txt 18891_0.txt 26391_0.txt 33892_0.txt 41392_0.txt 48894_0.txt\r\n1139_0.txt 18892_0.txt 26392_0.txt 33893_0.txt 41393_0.txt 48895_0.txt\r\n11391_0.txt 18893_0.txt 26393_0.txt 33894_0.txt 41394_0.txt 48896_0.txt\r\n11392_0.txt 18894_0.txt 26394_0.txt 33895_0.txt 41395_0.txt 48897_0.txt\r\n11393_0.txt 18895_0.txt 26395_0.txt 33896_0.txt 41396_0.txt 48898_0.txt\r\n11394_0.txt 18896_0.txt 26396_0.txt 33897_0.txt 41397_0.txt 48899_0.txt\r\n11395_0.txt 18897_0.txt 26397_0.txt 33898_0.txt 41398_0.txt 48900_0.txt\r\n11396_0.txt 18898_0.txt 26398_0.txt 33899_0.txt 41399_0.txt 4890_0.txt\r\n11397_0.txt 18899_0.txt 26399_0.txt 33900_0.txt 41400_0.txt 48901_0.txt\r\n11398_0.txt 18900_0.txt 26400_0.txt 3390_0.txt 4140_0.txt\t 48902_0.txt\r\n11399_0.txt 1890_0.txt 2640_0.txt 33901_0.txt 41401_0.txt 48903_0.txt\r\n11400_0.txt 18901_0.txt 26401_0.txt 33902_0.txt 41402_0.txt 48904_0.txt\r\n1140_0.txt 18902_0.txt 26402_0.txt 33903_0.txt 41403_0.txt 48905_0.txt\r\n11401_0.txt 18903_0.txt 26403_0.txt 33904_0.txt 41404_0.txt 48906_0.txt\r\n11402_0.txt 18904_0.txt 
26404_0.txt 33905_0.txt 41405_0.txt 48907_0.txt\r\n11403_0.txt 18905_0.txt 26405_0.txt 33906_0.txt 41406_0.txt 48908_0.txt\r\n11404_0.txt 18906_0.txt 26406_0.txt 33907_0.txt 41407_0.txt 48909_0.txt\r\n11405_0.txt 18907_0.txt 26407_0.txt 33908_0.txt 41408_0.txt 489_0.txt\r\n11406_0.txt 18908_0.txt 26408_0.txt 33909_0.txt 41409_0.txt 48910_0.txt\r\n11407_0.txt 18909_0.txt 26409_0.txt 339_0.txt 414_0.txt\t 4891_0.txt\r\n11408_0.txt 189_0.txt\t 264_0.txt 33910_0.txt 41410_0.txt 48911_0.txt\r\n11409_0.txt 18910_0.txt 26410_0.txt 3391_0.txt 4141_0.txt\t 48912_0.txt\r\n114_0.txt 1891_0.txt 2641_0.txt 33911_0.txt 41411_0.txt 48913_0.txt\r\n11410_0.txt 18911_0.txt 26411_0.txt 33912_0.txt 41412_0.txt 48914_0.txt\r\n1141_0.txt 18912_0.txt 26412_0.txt 33913_0.txt 41413_0.txt 48915_0.txt\r\n11411_0.txt 18913_0.txt 26413_0.txt 33914_0.txt 41414_0.txt 48916_0.txt\r\n11412_0.txt 18914_0.txt 26414_0.txt 33915_0.txt 41415_0.txt 48917_0.txt\r\n11413_0.txt 18915_0.txt 26415_0.txt 33916_0.txt 41416_0.txt 48918_0.txt\r\n11414_0.txt 18916_0.txt 26416_0.txt 33917_0.txt 41417_0.txt 48919_0.txt\r\n11415_0.txt 18917_0.txt 26417_0.txt 33918_0.txt 41418_0.txt 48920_0.txt\r\n11416_0.txt 18918_0.txt 26418_0.txt 33919_0.txt 41419_0.txt 4892_0.txt\r\n11417_0.txt 18919_0.txt 26419_0.txt 33920_0.txt 41420_0.txt 48921_0.txt\r\n11418_0.txt 18920_0.txt 26420_0.txt 3392_0.txt 4142_0.txt\t 48922_0.txt\r\n11419_0.txt 1892_0.txt 2642_0.txt 33921_0.txt 41421_0.txt 48923_0.txt\r\n11420_0.txt 18921_0.txt 26421_0.txt 33922_0.txt 41422_0.txt 48924_0.txt\r\n1142_0.txt 18922_0.txt 26422_0.txt 33923_0.txt 41423_0.txt 48925_0.txt\r\n11421_0.txt 18923_0.txt 26423_0.txt 33924_0.txt 41424_0.txt 48926_0.txt\r\n11422_0.txt 18924_0.txt 26424_0.txt 33925_0.txt 41425_0.txt 48927_0.txt\r\n11423_0.txt 18925_0.txt 26425_0.txt 33926_0.txt 41426_0.txt 48928_0.txt\r\n11424_0.txt 18926_0.txt 26426_0.txt 33927_0.txt 41427_0.txt 48929_0.txt\r\n11425_0.txt 18927_0.txt 26427_0.txt 33928_0.txt 41428_0.txt 48930_0.txt\r\n11426_0.txt 18928_0.txt 26428_0.txt 33929_0.txt 41429_0.txt 4893_0.txt\r\n11427_0.txt 18929_0.txt 26429_0.txt 33930_0.txt 41430_0.txt 48931_0.txt\r\n11428_0.txt 18930_0.txt 26430_0.txt 3393_0.txt 4143_0.txt\t 48932_0.txt\r\n11429_0.txt 1893_0.txt 2643_0.txt 33931_0.txt 41431_0.txt 48933_0.txt\r\n11430_0.txt 18931_0.txt 26431_0.txt 33932_0.txt 41432_0.txt 48934_0.txt\r\n1143_0.txt 18932_0.txt 26432_0.txt 33933_0.txt 41433_0.txt 48935_0.txt\r\n11431_0.txt 18933_0.txt 26433_0.txt 33934_0.txt 41434_0.txt 48936_0.txt\r\n11432_0.txt 18934_0.txt 26434_0.txt 33935_0.txt 41435_0.txt 48937_0.txt\r\n11433_0.txt 18935_0.txt 26435_0.txt 33936_0.txt 41436_0.txt 48938_0.txt\r\n11434_0.txt 18936_0.txt 26436_0.txt 33937_0.txt 41437_0.txt 48939_0.txt\r\n11435_0.txt 18937_0.txt 26437_0.txt 33938_0.txt 41438_0.txt 48940_0.txt\r\n11436_0.txt 18938_0.txt 26438_0.txt 33939_0.txt 41439_0.txt 4894_0.txt\r\n11437_0.txt 18939_0.txt 26439_0.txt 33940_0.txt 41440_0.txt 48941_0.txt\r\n11438_0.txt 18940_0.txt 26440_0.txt 3394_0.txt 4144_0.txt\t 48942_0.txt\r\n11439_0.txt 1894_0.txt 2644_0.txt 33941_0.txt 41441_0.txt 48943_0.txt\r\n11440_0.txt 18941_0.txt 26441_0.txt 33942_0.txt 41442_0.txt 48944_0.txt\r\n1144_0.txt 18942_0.txt 26442_0.txt 33943_0.txt 41443_0.txt 48945_0.txt\r\n11441_0.txt 18943_0.txt 26443_0.txt 33944_0.txt 41444_0.txt 48946_0.txt\r\n11442_0.txt 18944_0.txt 26444_0.txt 33945_0.txt 41445_0.txt 48947_0.txt\r\n11443_0.txt 18945_0.txt 26445_0.txt 33946_0.txt 41446_0.txt 48948_0.txt\r\n11444_0.txt 18946_0.txt 26446_0.txt 33947_0.txt 41447_0.txt 
48949_0.txt\r\n11445_0.txt 18947_0.txt 26447_0.txt 33948_0.txt 41448_0.txt 48950_0.txt\r\n11446_0.txt 18948_0.txt 26448_0.txt 33949_0.txt 41449_0.txt 4895_0.txt\r\n11447_0.txt 18949_0.txt 26449_0.txt 33950_0.txt 41450_0.txt 48951_0.txt\r\n11448_0.txt 18950_0.txt 26450_0.txt 3395_0.txt 4145_0.txt\t 48952_0.txt\r\n11449_0.txt 1895_0.txt 2645_0.txt 33951_0.txt 41451_0.txt 48953_0.txt\r\n11450_0.txt 18951_0.txt 26451_0.txt 33952_0.txt 41452_0.txt 48954_0.txt\r\n1145_0.txt 18952_0.txt 26452_0.txt 33953_0.txt 41453_0.txt 48955_0.txt\r\n11451_0.txt 18953_0.txt 26453_0.txt 33954_0.txt 41454_0.txt 48956_0.txt\r\n11452_0.txt 18954_0.txt 26454_0.txt 33955_0.txt 41455_0.txt 48957_0.txt\r\n11453_0.txt 18955_0.txt 26455_0.txt 33956_0.txt 41456_0.txt 48958_0.txt\r\n11454_0.txt 18956_0.txt 26456_0.txt 33957_0.txt 41457_0.txt 48959_0.txt\r\n11455_0.txt 18957_0.txt 26457_0.txt 33958_0.txt 41458_0.txt 48960_0.txt\r\n11456_0.txt 18958_0.txt 26458_0.txt 33959_0.txt 41459_0.txt 4896_0.txt\r\n11457_0.txt 18959_0.txt 26459_0.txt 33960_0.txt 41460_0.txt 48961_0.txt\r\n11458_0.txt 18960_0.txt 26460_0.txt 3396_0.txt 4146_0.txt\t 48962_0.txt\r\n11459_0.txt 1896_0.txt 2646_0.txt 33961_0.txt 41461_0.txt 48963_0.txt\r\n11460_0.txt 18961_0.txt 26461_0.txt 33962_0.txt 41462_0.txt 48964_0.txt\r\n1146_0.txt 18962_0.txt 26462_0.txt 33963_0.txt 41463_0.txt 48965_0.txt\r\n11461_0.txt 18963_0.txt 26463_0.txt 33964_0.txt 41464_0.txt 48966_0.txt\r\n11462_0.txt 18964_0.txt 26464_0.txt 33965_0.txt 41465_0.txt 48967_0.txt\r\n11463_0.txt 18965_0.txt 26465_0.txt 33966_0.txt 41466_0.txt 48968_0.txt\r\n11464_0.txt 18966_0.txt 26466_0.txt 33967_0.txt 41467_0.txt 48969_0.txt\r\n11465_0.txt 18967_0.txt 26467_0.txt 33968_0.txt 41468_0.txt 48970_0.txt\r\n11466_0.txt 18968_0.txt 26468_0.txt 33969_0.txt 41469_0.txt 4897_0.txt\r\n11467_0.txt 18969_0.txt 26469_0.txt 33970_0.txt 41470_0.txt 48971_0.txt\r\n11468_0.txt 18970_0.txt 26470_0.txt 3397_0.txt 4147_0.txt\t 48972_0.txt\r\n11469_0.txt 1897_0.txt 2647_0.txt 33971_0.txt 41471_0.txt 48973_0.txt\r\n11470_0.txt 18971_0.txt 26471_0.txt 33972_0.txt 41472_0.txt 48974_0.txt\r\n1147_0.txt 18972_0.txt 26472_0.txt 33973_0.txt 41473_0.txt 48975_0.txt\r\n11471_0.txt 18973_0.txt 26473_0.txt 33974_0.txt 41474_0.txt 48976_0.txt\r\n11472_0.txt 18974_0.txt 26474_0.txt 33975_0.txt 41475_0.txt 48977_0.txt\r\n11473_0.txt 18975_0.txt 26475_0.txt 33976_0.txt 41476_0.txt 48978_0.txt\r\n11474_0.txt 18976_0.txt 26476_0.txt 33977_0.txt 41477_0.txt 48979_0.txt\r\n11475_0.txt 18977_0.txt 26477_0.txt 33978_0.txt 41478_0.txt 48980_0.txt\r\n11476_0.txt 18978_0.txt 26478_0.txt 33979_0.txt 41479_0.txt 4898_0.txt\r\n11477_0.txt 18979_0.txt 26479_0.txt 33980_0.txt 41480_0.txt 48981_0.txt\r\n11478_0.txt 18980_0.txt 26480_0.txt 3398_0.txt 4148_0.txt\t 48982_0.txt\r\n11479_0.txt 1898_0.txt 2648_0.txt 33981_0.txt 41481_0.txt 48983_0.txt\r\n11480_0.txt 18981_0.txt 26481_0.txt 33982_0.txt 41482_0.txt 48984_0.txt\r\n1148_0.txt 18982_0.txt 26482_0.txt 33983_0.txt 41483_0.txt 48985_0.txt\r\n11481_0.txt 18983_0.txt 26483_0.txt 33984_0.txt 41484_0.txt 48986_0.txt\r\n11482_0.txt 18984_0.txt 26484_0.txt 33985_0.txt 41485_0.txt 48987_0.txt\r\n11483_0.txt 18985_0.txt 26485_0.txt 33986_0.txt 41486_0.txt 48988_0.txt\r\n11484_0.txt 18986_0.txt 26486_0.txt 33987_0.txt 41487_0.txt 48989_0.txt\r\n11485_0.txt 18987_0.txt 26487_0.txt 33988_0.txt 41488_0.txt 48990_0.txt\r\n11486_0.txt 18988_0.txt 26488_0.txt 33989_0.txt 41489_0.txt 4899_0.txt\r\n11487_0.txt 18989_0.txt 26489_0.txt 33990_0.txt 41490_0.txt 48991_0.txt\r\n11488_0.txt 18990_0.txt 
26490_0.txt 3399_0.txt 4149_0.txt\t 48992_0.txt\r\n11489_0.txt 1899_0.txt 2649_0.txt 33991_0.txt 41491_0.txt 48993_0.txt\r\n11490_0.txt 18991_0.txt 26491_0.txt 33992_0.txt 41492_0.txt 48994_0.txt\r\n1149_0.txt 18992_0.txt 26492_0.txt 33993_0.txt 41493_0.txt 48995_0.txt\r\n11491_0.txt 18993_0.txt 26493_0.txt 33994_0.txt 41494_0.txt 48996_0.txt\r\n11492_0.txt 18994_0.txt 26494_0.txt 33995_0.txt 41495_0.txt 48997_0.txt\r\n11493_0.txt 18995_0.txt 26495_0.txt 33996_0.txt 41496_0.txt 48998_0.txt\r\n11494_0.txt 18996_0.txt 26496_0.txt 33997_0.txt 41497_0.txt 48999_0.txt\r\n11495_0.txt 18997_0.txt 26497_0.txt 33998_0.txt 41498_0.txt 49000_0.txt\r\n11496_0.txt 18998_0.txt 26498_0.txt 33999_0.txt 41499_0.txt 4900_0.txt\r\n11497_0.txt 18999_0.txt 26499_0.txt 34000_0.txt 41500_0.txt 49001_0.txt\r\n11498_0.txt 19000_0.txt 26500_0.txt 3400_0.txt 4150_0.txt\t 49002_0.txt\r\n11499_0.txt 1900_0.txt 2650_0.txt 34001_0.txt 41501_0.txt 49003_0.txt\r\n11500_0.txt 19001_0.txt 26501_0.txt 34002_0.txt 41502_0.txt 49004_0.txt\r\n1150_0.txt 19002_0.txt 26502_0.txt 34003_0.txt 41503_0.txt 49005_0.txt\r\n11501_0.txt 19003_0.txt 26503_0.txt 34004_0.txt 41504_0.txt 49006_0.txt\r\n11502_0.txt 19004_0.txt 26504_0.txt 34005_0.txt 41505_0.txt 49007_0.txt\r\n11503_0.txt 19005_0.txt 26505_0.txt 34006_0.txt 41506_0.txt 49008_0.txt\r\n11504_0.txt 19006_0.txt 26506_0.txt 34007_0.txt 41507_0.txt 49009_0.txt\r\n11505_0.txt 19007_0.txt 26507_0.txt 34008_0.txt 41508_0.txt 490_0.txt\r\n11506_0.txt 19008_0.txt 26508_0.txt 34009_0.txt 41509_0.txt 49010_0.txt\r\n11507_0.txt 19009_0.txt 26509_0.txt 340_0.txt 415_0.txt\t 4901_0.txt\r\n11508_0.txt 190_0.txt\t 265_0.txt 34010_0.txt 41510_0.txt 49011_0.txt\r\n11509_0.txt 19010_0.txt 26510_0.txt 3401_0.txt 4151_0.txt\t 49012_0.txt\r\n115_0.txt 1901_0.txt 2651_0.txt 34011_0.txt 41511_0.txt 49013_0.txt\r\n11510_0.txt 19011_0.txt 26511_0.txt 34012_0.txt 41512_0.txt 49014_0.txt\r\n1151_0.txt 19012_0.txt 26512_0.txt 34013_0.txt 41513_0.txt 49015_0.txt\r\n11511_0.txt 19013_0.txt 26513_0.txt 34014_0.txt 41514_0.txt 49016_0.txt\r\n11512_0.txt 19014_0.txt 26514_0.txt 34015_0.txt 41515_0.txt 49017_0.txt\r\n11513_0.txt 19015_0.txt 26515_0.txt 34016_0.txt 41516_0.txt 49018_0.txt\r\n11514_0.txt 19016_0.txt 26516_0.txt 34017_0.txt 41517_0.txt 49019_0.txt\r\n11515_0.txt 19017_0.txt 26517_0.txt 34018_0.txt 41518_0.txt 49020_0.txt\r\n11516_0.txt 19018_0.txt 26518_0.txt 34019_0.txt 41519_0.txt 4902_0.txt\r\n11517_0.txt 19019_0.txt 26519_0.txt 34020_0.txt 41520_0.txt 49021_0.txt\r\n11518_0.txt 19020_0.txt 26520_0.txt 3402_0.txt 4152_0.txt\t 49022_0.txt\r\n11519_0.txt 1902_0.txt 2652_0.txt 34021_0.txt 41521_0.txt 49023_0.txt\r\n11520_0.txt 19021_0.txt 26521_0.txt 34022_0.txt 41522_0.txt 49024_0.txt\r\n1152_0.txt 19022_0.txt 26522_0.txt 34023_0.txt 41523_0.txt 49025_0.txt\r\n11521_0.txt 19023_0.txt 26523_0.txt 34024_0.txt 41524_0.txt 49026_0.txt\r\n11522_0.txt 19024_0.txt 26524_0.txt 34025_0.txt 41525_0.txt 49027_0.txt\r\n11523_0.txt 19025_0.txt 26525_0.txt 34026_0.txt 41526_0.txt 49028_0.txt\r\n11524_0.txt 19026_0.txt 26526_0.txt 34027_0.txt 41527_0.txt 49029_0.txt\r\n11525_0.txt 19027_0.txt 26527_0.txt 34028_0.txt 41528_0.txt 49030_0.txt\r\n11526_0.txt 19028_0.txt 26528_0.txt 34029_0.txt 41529_0.txt 4903_0.txt\r\n11527_0.txt 19029_0.txt 26529_0.txt 34030_0.txt 41530_0.txt 49031_0.txt\r\n11528_0.txt 19030_0.txt 26530_0.txt 3403_0.txt 4153_0.txt\t 49032_0.txt\r\n11529_0.txt 1903_0.txt 2653_0.txt 34031_0.txt 41531_0.txt 49033_0.txt\r\n11530_0.txt 19031_0.txt 26531_0.txt 34032_0.txt 41532_0.txt 
49034_0.txt\r\n1153_0.txt 19032_0.txt 26532_0.txt 34033_0.txt 41533_0.txt 49035_0.txt\r\n11531_0.txt 19033_0.txt 26533_0.txt 34034_0.txt 41534_0.txt 49036_0.txt\r\n11532_0.txt 19034_0.txt 26534_0.txt 34035_0.txt 41535_0.txt 49037_0.txt\r\n11533_0.txt 19035_0.txt 26535_0.txt 34036_0.txt 41536_0.txt 49038_0.txt\r\n11534_0.txt 19036_0.txt 26536_0.txt 34037_0.txt 41537_0.txt 49039_0.txt\r\n11535_0.txt 19037_0.txt 26537_0.txt 34038_0.txt 41538_0.txt 49040_0.txt\r\n11536_0.txt 19038_0.txt 26538_0.txt 34039_0.txt 41539_0.txt 4904_0.txt\r\n11537_0.txt 19039_0.txt 26539_0.txt 34040_0.txt 41540_0.txt 49041_0.txt\r\n11538_0.txt 19040_0.txt 26540_0.txt 3404_0.txt 4154_0.txt\t 49042_0.txt\r\n11539_0.txt 1904_0.txt 2654_0.txt 34041_0.txt 41541_0.txt 49043_0.txt\r\n11540_0.txt 19041_0.txt 26541_0.txt 34042_0.txt 41542_0.txt 49044_0.txt\r\n1154_0.txt 19042_0.txt 26542_0.txt 34043_0.txt 41543_0.txt 49045_0.txt\r\n11541_0.txt 19043_0.txt 26543_0.txt 34044_0.txt 41544_0.txt 49046_0.txt\r\n11542_0.txt 19044_0.txt 26544_0.txt 34045_0.txt 41545_0.txt 49047_0.txt\r\n11543_0.txt 19045_0.txt 26545_0.txt 34046_0.txt 41546_0.txt 49048_0.txt\r\n11544_0.txt 19046_0.txt 26546_0.txt 34047_0.txt 41547_0.txt 49049_0.txt\r\n11545_0.txt 19047_0.txt 26547_0.txt 34048_0.txt 41548_0.txt 49050_0.txt\r\n11546_0.txt 19048_0.txt 26548_0.txt 34049_0.txt 41549_0.txt 4905_0.txt\r\n11547_0.txt 19049_0.txt 26549_0.txt 34050_0.txt 41550_0.txt 49051_0.txt\r\n11548_0.txt 19050_0.txt 26550_0.txt 3405_0.txt 4155_0.txt\t 49052_0.txt\r\n11549_0.txt 1905_0.txt 2655_0.txt 34051_0.txt 41551_0.txt 49053_0.txt\r\n11550_0.txt 19051_0.txt 26551_0.txt 34052_0.txt 41552_0.txt 49054_0.txt\r\n1155_0.txt 19052_0.txt 26552_0.txt 34053_0.txt 41553_0.txt 49055_0.txt\r\n11551_0.txt 19053_0.txt 26553_0.txt 34054_0.txt 41554_0.txt 49056_0.txt\r\n11552_0.txt 19054_0.txt 26554_0.txt 34055_0.txt 41555_0.txt 49057_0.txt\r\n11553_0.txt 19055_0.txt 26555_0.txt 34056_0.txt 41556_0.txt 49058_0.txt\r\n11554_0.txt 19056_0.txt 26556_0.txt 34057_0.txt 41557_0.txt 49059_0.txt\r\n11555_0.txt 19057_0.txt 26557_0.txt 34058_0.txt 41558_0.txt 49060_0.txt\r\n11556_0.txt 19058_0.txt 26558_0.txt 34059_0.txt 41559_0.txt 4906_0.txt\r\n11557_0.txt 19059_0.txt 26559_0.txt 34060_0.txt 41560_0.txt 49061_0.txt\r\n11558_0.txt 19060_0.txt 26560_0.txt 3406_0.txt 4156_0.txt\t 49062_0.txt\r\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a4b80b49b288dd7c6ba164c73bc30ae4252a131
247,245
ipynb
Jupyter Notebook
demo.ipynb
peckjon/detectorch
69d31250d79a72b12b7419638ef59163f833bbba
[ "Apache-2.0" ]
627
2018-03-07T17:24:09.000Z
2021-12-21T12:54:28.000Z
demo.ipynb
peckjon/detectorch
69d31250d79a72b12b7419638ef59163f833bbba
[ "Apache-2.0" ]
20
2018-03-08T22:07:54.000Z
2021-12-24T14:27:58.000Z
demo.ipynb
peckjon/detectorch
69d31250d79a72b12b7419638ef59163f833bbba
[ "Apache-2.0" ]
83
2018-03-08T04:27:15.000Z
2021-11-21T04:18:45.000Z
843.83959
238,296
0.951963
[ [ [ "# Imports", "_____no_output_____" ] ], [ [ "import torch\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport sys\nsys.path.insert(0, \"lib/\")\nfrom utils.preprocess_sample import preprocess_sample\nfrom utils.collate_custom import collate_custom\nfrom utils.utils import to_cuda_variable\nfrom utils.json_dataset_evaluator import evaluate_boxes,evaluate_masks\nfrom model.detector import detector\nimport utils.result_utils as result_utils\nimport utils.vis as vis_utils\nimport skimage.io as io\nfrom utils.blob import prep_im_for_blob\nimport utils.dummy_datasets as dummy_datasets\n\nfrom utils.selective_search import selective_search # needed for proposal extraction in Fast RCNN\nfrom PIL import Image\n\ntorch_ver = torch.__version__[:3]", "_____no_output_____" ] ], [ [ "# Parameters", "_____no_output_____" ] ], [ [ "# Pretrained model\narch='resnet50'\n\n# COCO minival2014 dataset path\ncoco_ann_file='datasets/data/coco/annotations/instances_minival2014.json'\nimg_dir='datasets/data/coco/val2014'\n\n# model type\nmodel_type='mask' # change here\n\nif model_type=='mask':\n # https://s3-us-west-2.amazonaws.com/detectron/35858828/12_2017_baselines/e2e_mask_rcnn_R-50-C4_2x.yaml.01_46_47.HBThTerB/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl\n pretrained_model_file = 'files/trained_models/mask/model_final.pkl'\n use_rpn_head = True\n use_mask_head = True\nelif model_type=='faster':\n # https://s3-us-west-2.amazonaws.com/detectron/35857281/12_2017_baselines/e2e_faster_rcnn_R-50-C4_2x.yaml.01_34_56.ScPH0Z4r/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl\n pretrained_model_file = 'files/trained_models/faster/model_final.pkl'\n use_rpn_head = True\n use_mask_head = False\nelif model_type=='fast':\n # https://s3-us-west-2.amazonaws.com/detectron/36224046/12_2017_baselines/fast_rcnn_R-50-C4_2x.yaml.08_22_57.XFxNqEnL/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl\n pretrained_model_file = 'files/trained_models/fast/model_final.pkl'\n use_rpn_head = False\n use_mask_head = False", "_____no_output_____" ] ], [ [ "# Load image", "_____no_output_____" ] ], [ [ "image_fn = 'demo/33823288584_1d21cf0a26_k.jpg'\n\n# Load image\nimage = io.imread(image_fn)\nif len(image.shape) == 2: # convert grayscale to RGB\n image = np.repeat(np.expand_dims(image,2), 3, axis=2)\norig_im_size = image.shape\n# Preprocess image\nim_list, im_scales = prep_im_for_blob(image)\n# Build sample\nsample = {}\nsample['image'] = torch.FloatTensor(im_list[0]).permute(2,0,1).unsqueeze(0)\nsample['scaling_factors'] = torch.FloatTensor([im_scales[0]])\nsample['original_im_size'] = torch.FloatTensor(orig_im_size)\n# Extract proposals\nif model_type=='fast':\n # extract proposals using selective search (xmin,ymin,xmax,ymax format)\n rects = selective_search(pil_image=Image.fromarray(image),quality='f')\n sample['proposal_coords']=torch.FloatTensor(preprocess_sample().remove_dup_prop(rects)[0])*im_scales[0]\nelse:\n sample['proposal_coords']=torch.FloatTensor([-1]) # dummy value\n# Convert to cuda variable\nsample = to_cuda_variable(sample)", "_____no_output_____" ] ], [ [ "# Create detector model", "_____no_output_____" ] ], [ [ "model = detector(arch=arch,\n detector_pkl_file=pretrained_model_file,\n use_rpn_head = use_rpn_head,\n use_mask_head = use_mask_head)\nmodel = model.cuda()", "Loading pretrained weights:\n-> 
loading conv. body weights\n-> loading output head weights\n-> loading rpn head weights\n-> loading mask head weights\n" ] ], [ [ "# Evaluate", "_____no_output_____" ] ], [ [ "def eval_model(sample):\n class_scores,bbox_deltas,rois,img_features=model(sample['image'],\n sample['proposal_coords'],\n scaling_factor=sample['scaling_factors'].cpu().data.numpy().item()) \n return class_scores,bbox_deltas,rois,img_features", "_____no_output_____" ], [ "if torch_ver==\"0.4\":\n with torch.no_grad():\n class_scores,bbox_deltas,rois,img_features=eval_model(sample)\nelse:\n class_scores,bbox_deltas,rois,img_features=eval_model(sample)\n\n# postprocess output:\n# - convert coordinates back to original image size, \n# - treshold proposals based on score,\n# - do NMS.\nscores_final, boxes_final, boxes_per_class = result_utils.postprocess_output(rois,\n sample['scaling_factors'],\n sample['original_im_size'],\n class_scores,\n bbox_deltas)\n\nif model_type=='mask':\n # compute masks\n boxes_final_th = Variable(torch.cuda.FloatTensor(boxes_final))*sample['scaling_factors']\n masks=model.mask_head(img_features,boxes_final_th)\n # postprocess mask output:\n h_orig = int(sample['original_im_size'].squeeze()[0].data.cpu().numpy().item())\n w_orig = int(sample['original_im_size'].squeeze()[1].data.cpu().numpy().item())\n cls_segms = result_utils.segm_results(boxes_per_class, masks.cpu().data.numpy(), boxes_final, h_orig, w_orig)\nelse:\n cls_segms = None\n\nprint('Done!')", "Done!\n" ] ], [ [ "# Visualize", "_____no_output_____" ] ], [ [ "output_dir = 'demo/output/'\nvis_utils.vis_one_image(\n image, # BGR -> RGB for visualization\n image_fn,\n output_dir,\n boxes_per_class,\n cls_segms,\n None,\n dataset=dummy_datasets.get_coco_dataset(),\n box_alpha=0.3,\n show_class=True,\n thresh=0.7,\n kp_thresh=2,\n show=True\n)", "result saved to demo/output/33823288584_1d21cf0a26_k.jpg.pdf\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
4a4b82f1e4dad82fea4786d62b8ab8d793cb7745
64,188
ipynb
Jupyter Notebook
app/notebooks/queries_v2.ipynb
DanFu09/esper
ccc5547de3637728b8aaab059b6781baebc269ec
[ "Apache-2.0" ]
4
2018-12-27T07:21:38.000Z
2019-01-04T10:35:02.000Z
app/notebooks/queries_v2.ipynb
DanFu09/esper
ccc5547de3637728b8aaab059b6781baebc269ec
[ "Apache-2.0" ]
null
null
null
app/notebooks/queries_v2.ipynb
DanFu09/esper
ccc5547de3637728b8aaab059b6781baebc269ec
[ "Apache-2.0" ]
null
null
null
32.615854
6,369
0.541955
[ [ [ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\"><li><span><a href=\"#Queries\" data-toc-modified-id=\"Queries-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Queries</a></span><ul class=\"toc-item\"><li><span><a href=\"#All-Videos\" data-toc-modified-id=\"All-Videos-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>All Videos</a></span></li><li><span><a href=\"#Videos-by-Channel\" data-toc-modified-id=\"Videos-by-Channel-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Videos by Channel</a></span></li><li><span><a href=\"#Videos-by-Show\" data-toc-modified-id=\"Videos-by-Show-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Videos by Show</a></span></li><li><span><a href=\"#Videos-by-Canonical-Show\" data-toc-modified-id=\"Videos-by-Canonical-Show-1.4\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Videos by Canonical Show</a></span></li><li><span><a href=\"#Videos-by-time-of-day\" data-toc-modified-id=\"Videos-by-time-of-day-1.5\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Videos by time of day</a></span></li></ul></li><li><span><a href=\"#Shots\" data-toc-modified-id=\"Shots-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Shots</a></span><ul class=\"toc-item\"><li><span><a href=\"#Shot-Validation\" data-toc-modified-id=\"Shot-Validation-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Shot Validation</a></span></li><li><span><a href=\"#All-Shots\" data-toc-modified-id=\"All-Shots-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>All Shots</a></span></li><li><span><a href=\"#Shots-by-Channel\" data-toc-modified-id=\"Shots-by-Channel-2.3\"><span class=\"toc-item-num\">2.3&nbsp;&nbsp;</span>Shots by Channel</a></span></li><li><span><a href=\"#Shots-by-Show\" data-toc-modified-id=\"Shots-by-Show-2.4\"><span class=\"toc-item-num\">2.4&nbsp;&nbsp;</span>Shots by Show</a></span></li><li><span><a href=\"#Shots-by-Canonical-Show\" data-toc-modified-id=\"Shots-by-Canonical-Show-2.5\"><span class=\"toc-item-num\">2.5&nbsp;&nbsp;</span>Shots by Canonical Show</a></span></li><li><span><a href=\"#Shots-by-Time-of-Day\" data-toc-modified-id=\"Shots-by-Time-of-Day-2.6\"><span class=\"toc-item-num\">2.6&nbsp;&nbsp;</span>Shots by Time of Day</a></span></li></ul></li><li><span><a href=\"#Commercials\" data-toc-modified-id=\"Commercials-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Commercials</a></span><ul class=\"toc-item\"><li><span><a href=\"#All-Commercials\" data-toc-modified-id=\"All-Commercials-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>All Commercials</a></span></li><li><span><a href=\"#Commercials-by-Channel\" data-toc-modified-id=\"Commercials-by-Channel-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Commercials by Channel</a></span></li><li><span><a href=\"#Commercials-by-Show\" data-toc-modified-id=\"Commercials-by-Show-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Commercials by Show</a></span></li><li><span><a href=\"#Commercials-by-Canonical-Show\" data-toc-modified-id=\"Commercials-by-Canonical-Show-3.4\"><span class=\"toc-item-num\">3.4&nbsp;&nbsp;</span>Commercials by Canonical Show</a></span></li><li><span><a href=\"#Commercials-by-Time-of-Day\" data-toc-modified-id=\"Commercials-by-Time-of-Day-3.5\"><span class=\"toc-item-num\">3.5&nbsp;&nbsp;</span>Commercials by Time of Day</a></span></li></ul></li><li><span><a href=\"#Faces\" data-toc-modified-id=\"Faces-4\"><span 
class=\"toc-item-num\">4&nbsp;&nbsp;</span>Faces</a></span><ul class=\"toc-item\"><li><span><a href=\"#Face-Validation\" data-toc-modified-id=\"Face-Validation-4.1\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Face Validation</a></span></li><li><span><a href=\"#All-Faces\" data-toc-modified-id=\"All-Faces-4.2\"><span class=\"toc-item-num\">4.2&nbsp;&nbsp;</span>All Faces</a></span></li></ul></li><li><span><a href=\"#Genders\" data-toc-modified-id=\"Genders-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Genders</a></span><ul class=\"toc-item\"><li><span><a href=\"#All-Gender\" data-toc-modified-id=\"All-Gender-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>All Gender</a></span><ul class=\"toc-item\"><li><span><a href=\"#Persist-for-Report\" data-toc-modified-id=\"Persist-for-Report-5.1.1\"><span class=\"toc-item-num\">5.1.1&nbsp;&nbsp;</span>Persist for Report</a></span></li></ul></li><li><span><a href=\"#Gender-by-Channel\" data-toc-modified-id=\"Gender-by-Channel-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Gender by Channel</a></span></li><li><span><a href=\"#Gender-by-Show\" data-toc-modified-id=\"Gender-by-Show-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Gender by Show</a></span><ul class=\"toc-item\"><li><span><a href=\"#Persist-for-Report\" data-toc-modified-id=\"Persist-for-Report-5.3.1\"><span class=\"toc-item-num\">5.3.1&nbsp;&nbsp;</span>Persist for Report</a></span></li></ul></li><li><span><a href=\"#Gender-by-Canonical-Show\" data-toc-modified-id=\"Gender-by-Canonical-Show-5.4\"><span class=\"toc-item-num\">5.4&nbsp;&nbsp;</span>Gender by Canonical Show</a></span><ul class=\"toc-item\"><li><span><a href=\"#Persist-for-Report\" data-toc-modified-id=\"Persist-for-Report-5.4.1\"><span class=\"toc-item-num\">5.4.1&nbsp;&nbsp;</span>Persist for Report</a></span></li></ul></li><li><span><a href=\"#Gender-by-time-of-day\" data-toc-modified-id=\"Gender-by-time-of-day-5.5\"><span class=\"toc-item-num\">5.5&nbsp;&nbsp;</span>Gender by time of day</a></span></li><li><span><a href=\"#Gender-by-Day-of-the-Week\" data-toc-modified-id=\"Gender-by-Day-of-the-Week-5.6\"><span class=\"toc-item-num\">5.6&nbsp;&nbsp;</span>Gender by Day of the Week</a></span></li><li><span><a href=\"#Gender-by-topic\" data-toc-modified-id=\"Gender-by-topic-5.7\"><span class=\"toc-item-num\">5.7&nbsp;&nbsp;</span>Gender by topic</a></span></li><li><span><a href=\"#Male-vs.-female-faces-in-panels\" data-toc-modified-id=\"Male-vs.-female-faces-in-panels-5.8\"><span class=\"toc-item-num\">5.8&nbsp;&nbsp;</span>Male vs. 
female faces in panels</a></span></li></ul></li><li><span><a href=\"#Pose\" data-toc-modified-id=\"Pose-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Pose</a></span></li><li><span><a href=\"#Topics\" data-toc-modified-id=\"Topics-7\"><span class=\"toc-item-num\">7&nbsp;&nbsp;</span>Topics</a></span></li></ul></div>", "_____no_output_____" ] ], [ [ "%matplotlib inline\nfrom esper.stdlib import *\nfrom esper.prelude import *\nfrom esper.spark_util import *\nfrom esper.validation import *\n\nimport IPython\nimport shutil", "_____no_output_____" ], [ "shows = get_shows()\nprint('Schema:', shows)\nprint('Count:', shows.count())", "_____no_output_____" ], [ "videos = get_videos()\nprint('Schema:', videos)\nprint('Count:', videos.count())", "_____no_output_____" ], [ "shots = get_shots()\nprint('Schema:', shots)\nprint('Count:', shots.count())", "_____no_output_____" ], [ "speakers = get_speakers()\nprint('Schema:', speakers)\nprint('Count:', speakers.count())\n# speakers.where(speakers.in_commercial == True).show()\n# speakers.where(speakers.in_commercial == False).show()", "_____no_output_____" ], [ "segments = get_segments()\nprint('Schema:', segments)\nprint('Count:', segments.count())\n# segments.where(segments.in_commercial == True).show()\n# segments.where(segments.in_commercial == False).show()", "_____no_output_____" ], [ "faces = get_faces()\nprint('Schema:', faces)\nprint('Count:', faces.count())", "_____no_output_____" ], [ "face_genders = get_face_genders() \nprint('Schema:', face_genders)\nprint('Count:', face_genders.count())", "_____no_output_____" ], [ "face_identities = get_face_identities()\nprint('Schema:', face_identities)\nprint('Count:', face_identities.count())", "_____no_output_____" ], [ "commercials = get_commercials()\nprint('Schema:', commercials)\nprint('Count:', commercials.count())", "_____no_output_____" ] ], [ [ "# Queries", "_____no_output_____" ] ], [ [ "def format_time(seconds, padding=4):\n return '{{:0{}d}}:{{:02d}}:{{:02d}}'.format(padding).format(\n int(seconds/3600), int(seconds/60 % 60), int(seconds % 60))\n\ndef format_number(n):\n def fmt(n):\n suffixes = {\n 6: 'thousand',\n 9: 'million',\n 12: 'billion',\n 15: 'trillion'\n }\n\n log = math.log10(n)\n suffix = None\n key = None\n for k in sorted(suffixes.keys()):\n if log < k:\n suffix = suffixes[k]\n key = k\n break\n\n return '{:.2f} {}'.format(n / float(10**(key-3)), suffix)\n if isinstance(n, list):\n return map(fmt, n)\n else:\n return fmt(n)\n\ndef show_df(table, ordering, clear=True):\n if clear:\n IPython.display.clear_output()\n return pd.DataFrame(table)[ordering]\n \ndef format_hour(h):\n if h <= 12:\n return '{} AM'.format(h)\n else:\n return '{} PM'.format(h-12)\n\ndef video_stats(key, labels):\n if key is not None:\n rows = videos.groupBy(key).agg(\n videos[key], \n func.count('duration'), \n func.avg('duration'), \n func.sum('duration'), \n func.stddev_pop('duration')\n ).collect()\n else:\n rows = videos.agg(\n func.count('duration'), \n func.avg('duration'), \n func.sum('duration'), \n func.stddev_pop('duration')\n ).collect()\n rmap = {(0 if key is None else r[key]): r for r in rows}\n \n return [{\n 'label': label['name'],\n 'count': rmap[label['id']]['count(duration)'],\n 'duration': format_time(int(rmap[label['id']]['sum(duration)'])),\n 'avg_duration': '{} (σ = {})'.format(\n format_time(int(rmap[label['id']]['avg(duration)'])),\n format_time(int(rmap[label['id']]['stddev_pop(duration)']), padding=0))\n } for label in labels if not key or label['id'] in 
rmap]\n\nvideo_ordering = ['label', 'count', 'duration', 'avg_duration']\n\nhours = [\n r['hour'] for r in \n Video.objects.annotate(\n hour=Extract('time', 'hour')\n ).distinct('hour').order_by('hour').values('hour')\n]", "_____no_output_____" ] ], [ [ "## All Videos", "_____no_output_____" ] ], [ [ "show_df(\n video_stats(None, [{'id': 0, 'name': 'whole dataset'}]),\n video_ordering)", "_____no_output_____" ] ], [ [ "## Videos by Channel", "_____no_output_____" ] ], [ [ "show_df(\n video_stats('channel_id', list(Channel.objects.all().values('id', 'name'))),\n video_ordering)", "_____no_output_____" ] ], [ [ "## Videos by Show\n\"Situation Room with Wolf Blitzer\" and \"Special Report with Bret Baier\" were ingested as 60 10-minute segments each, whereas the other shows have 10 ≥1 hour segments.", "_____no_output_____" ] ], [ [ "show_df(\n video_stats('show_id', list(Show.objects.all().values('id', 'name'))),\n video_ordering)", "_____no_output_____" ] ], [ [ "## Videos by Canonical Show", "_____no_output_____" ] ], [ [ "show_df(\n video_stats('canonical_show_id', list(CanonicalShow.objects.all().values('id', 'name'))),\n video_ordering)", "_____no_output_____" ] ], [ [ "## Videos by time of day\nInitial selection of videos was only prime-time, so between 4pm-11pm.", "_____no_output_____" ] ], [ [ "show_df(\n video_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),\n video_ordering)", "_____no_output_____" ] ], [ [ "# Shots", "_____no_output_____" ] ], [ [ "med_withcom = shots.approxQuantile('duration', [0.5], 0.01)[0]\nprint('Median shot length with commercials: {:0.2f}s'.format(med_withcom))\n\nmed_nocom = shots.where(\n shots.in_commercial == False\n).approxQuantile('duration', [0.5], 0.01)[0]\nprint('Median shot length w/o commercials: {:0.2f}s'.format(med_nocom))\n\nmed_channels = {\n c.name: shots.where(\n shots.channel_id == c.id\n ).approxQuantile('duration', [0.5], 0.01)[0]\n for c in Channel.objects.all()\n}\nprint('Median shot length by_channel:')\nfor c, v in med_channels.items():\n print(' {}: {:0.2f}s'.format(c, v))\n \npickle.dump({\n 'withcom': med_withcom,\n 'nocom': med_nocom,\n 'channels': med_channels\n}, open('/app/data/shot_medians.pkl', 'wb'))", "_____no_output_____" ], [ "all_shot_durations = np.array(\n [r['duration'] for r in shots.select('duration').collect()]\n)\nhist, edges = np.histogram(all_shot_durations, bins=list(range(0, 3600)) + [10000000])\npickle.dump(hist, open('/app/data/shot_histogram.pkl', 'wb'))", "_____no_output_____" ] ], [ [ "## Shot Validation", "_____no_output_____" ] ], [ [ "# TODO: what is this hack?\nshot_precision = 0.97\nshot_recall = 0.97\ndef shot_error_interval(n):\n return [n * shot_precision, n * (2 - shot_recall)]\n\n\ndef shot_stats(key, labels, shots=shots):\n if key is not None:\n df = shots.groupBy(key)\n rows = df.agg(shots[key], func.count('duration'), func.avg('duration'), func.sum('duration'), func.stddev_pop('duration')).collect()\n else:\n df = shots\n rows = df.agg(func.count('duration'), func.avg('duration'), func.sum('duration'), func.stddev_pop('duration')).collect()\n rmap = {(0 if key is None else r[key]): r for r in rows}\n out_rows = []\n for label in labels:\n try:\n out_rows.append({\n 'label': label['name'],\n 'count': rmap[label['id']]['count(duration)'], #format_number(shot_error_interval(rmap[label['id']]['count(duration)'])),\n 'duration': format_time(int(rmap[label['id']]['sum(duration)'])),\n 'avg_duration': '{:06.2f}s (σ = {:06.2f})'.format(\n 
rmap[label['id']]['avg(duration)'],\n rmap[label['id']]['stddev_pop(duration)'])\n })\n except KeyError:\n pass\n return out_rows\n\nshot_ordering = ['label', 'count', 'duration', 'avg_duration']", "_____no_output_____" ] ], [ [ "## All Shots", "_____no_output_____" ] ], [ [ "show_df(\n shot_stats(None, [{'id': 0, 'name': 'whole dataset'}]),\n shot_ordering)", "_____no_output_____" ] ], [ [ "## Shots by Channel", "_____no_output_____" ] ], [ [ "show_df(\n shot_stats('channel_id', list(Channel.objects.all().values('id', 'name'))),\n shot_ordering)", "_____no_output_____" ] ], [ [ "## Shots by Show", "_____no_output_____" ] ], [ [ "show_df(\n shot_stats('show_id', list(Show.objects.all().values('id', 'name'))),\n shot_ordering)", "_____no_output_____" ] ], [ [ "## Shots by Canonical Show", "_____no_output_____" ] ], [ [ "show_df(\n shot_stats('canonical_show_id', list(CanonicalShow.objects.all().values('id', 'name'))),\n shot_ordering)", "_____no_output_____" ] ], [ [ "## Shots by Time of Day", "_____no_output_____" ] ], [ [ "show_df(\n shot_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),\n shot_ordering)", "_____no_output_____" ] ], [ [ "# Commercials", "_____no_output_____" ] ], [ [ "def commercial_stats(key, labels):\n if key is not None:\n rows = commercials.groupBy(key).agg(\n commercials[key], \n func.count('duration'), \n func.avg('duration'), \n func.sum('duration')\n ).collect()\n else:\n rows = commercials.agg(\n func.count('duration'), \n func.avg('duration'),\n func.sum('duration')\n ).collect()\n rmap = {(0 if key is None else r[key]): r for r in rows}\n out_rows = []\n for label in labels:\n try:\n out_rows.append({\n 'label': label['name'],\n 'count': format_number(rmap[label['id']]['count(duration)']),\n 'duration': format_time(int(rmap[label['id']]['sum(duration)'])),\n 'avg_duration': '{:06.2f}s'.format(rmap[label['id']]['avg(duration)'])\n })\n except KeyError:\n pass\n return out_rows\n\ncommercial_ordering = ['label', 'count', 'duration', 'avg_duration']", "_____no_output_____" ] ], [ [ "## All Commercials", "_____no_output_____" ] ], [ [ "show_df(\n commercial_stats(None, [{'id': 0, 'name': 'whole dataset'}]),\n commercial_ordering)", "_____no_output_____" ], [ "print('Average # of commercials per video: {:0.2f}'.format(\n commercials.groupBy('video_id').count().agg(\n func.avg(func.col('count'))\n ).collect()[0]['avg(count)']\n))", "_____no_output_____" ] ], [ [ "## Commercials by Channel", "_____no_output_____" ] ], [ [ "show_df(\n commercial_stats('channel_id', list(Channel.objects.all().values('id', 'name'))),\n commercial_ordering)", "_____no_output_____" ] ], [ [ "## Commercials by Show", "_____no_output_____" ] ], [ [ "show_df(\n commercial_stats('show_id', list(Show.objects.all().values('id', 'name'))),\n commercial_ordering)", "_____no_output_____" ] ], [ [ "## Commercials by Canonical Show", "_____no_output_____" ] ], [ [ "show_df(\n commercial_stats('canonical_show_id', list(CanonicalShow.objects.all().values('id', 'name'))),\n commercial_ordering)", "_____no_output_____" ] ], [ [ "## Commercials by Time of Day", "_____no_output_____" ] ], [ [ "show_df(\n commercial_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),\n commercial_ordering)", "_____no_output_____" ] ], [ [ "# Faces", "_____no_output_____" ], [ "## Face Validation", "_____no_output_____" ] ], [ [ "base_face_stats = face_validation('All faces', lambda x: x)\nbig_face_stats = face_validation(\n 'Faces height > 0.2', lambda qs: 
qs.annotate(height=F('bbox_y2') - F('bbox_y1')).filter(height__gte=0.2))", "_____no_output_____" ], [ "shot_precision = 0.97\nshot_recall = 0.97\n\ndef face_error_interval(n, face_stats):\n (face_precision, face_recall, _) = face_stats\n return [n * shot_precision * face_precision, n * (2 - shot_recall) * (2 - face_recall)]", "_____no_output_____" ] ], [ [ "## All Faces", "_____no_output_____" ] ], [ [ "print('Total faces: {}'.format(\n format_number(face_error_interval(faces.count(), base_face_stats[2]))))\n\ntotal_duration = videos.agg(func.sum('duration')).collect()[0]['sum(duration)'] - \\\n commercials.agg(func.sum('duration')).collect()[0]['sum(duration)']\nface_duration = faces.groupBy('shot_id') \\\n .agg(\n func.first('duration').alias('duration')\n ).agg(func.sum('duration')).collect()[0]['sum(duration)']\nprint('% of time a face is on screen: {:0.2f}'.format(100.0 * face_duration / total_duration))", "_____no_output_____" ] ], [ [ "# Genders\n\nThese queries analyze the distribution of men vs. women across a number of axes. We use faces detected by [MTCNN](https://github.com/kpzhang93/MTCNN_face_detection_alignment/) and gender detected by [rude-carnie](https://github.com/dpressel/rude-carnie). We only consider faces with a height > 20% of the frame to eliminate people in the background.\n\nTime for a given gender is the amount of time during which at least one person of that gender was on screen. Percentages are (gender screen time) / (total time any person was on screen).", "_____no_output_____" ] ], [ [ "_, Cm = gender_validation('Gender w/ face height > 0.2', big_face_stats)\n\ndef P(y, yhat):\n d = {'M': 0, 'F': 1, 'U': 2}\n return float(Cm[d[y]][d[yhat]]) / sum([Cm[i][d[yhat]] for i in d.values()])", "_____no_output_____" ], [ "# TODO: remove a host -- use face features to identify and remove rachel maddow from computation\n# TODO: more discrete time zones (\"sunday mornings\", \"prime time\", \"daytime\", \"late evening\")\n# TODO: by year\n# TODO: specific dates, e.g. 
during the RNC\n\nMALE = Gender.objects.get(name='M')\nFEMALE = Gender.objects.get(name='F')\nUNKNOWN = Gender.objects.get(name='U')\ngender_names = {g.id: g.name for g in Gender.objects.all()}\n\ndef gender_singlecount_stats(key, labels, min_dur=None):\n if key == 'topic': \n # TODO: Fix this\n df1 = face_genders.join(segment_links, face_genders.segment_id == segment_links.segment_id)\n df2 = df1.join(things, segment_links.thing_id == things.id)\n topic_type = ThingType.objects.get(name='topic').id\n df3 = df2.where(things.type_id == topic_type).select(\n *(['duration', 'channel_id', 'show_id', 'hour', 'week_day', 'gender_id'] + \\\n [things.id.alias('topic'), 'shot_id']))\n full_df = df3\n else:\n full_df = face_genders\n \n keys = ['duration', 'channel_id', 'show_id', 'hour', 'week_day']\n aggs = [func.count('gender_id')] + [func.first(full_df[k]).alias(k) for k in keys] + \\\n ([full_df.topic] if key == 'topic' else [])\n groups = ([key] if key is not None else []) + ['gender_id']\n counts = full_df.groupBy(\n # this is very brittle, need to add joined fields like 'canonical_show_id' here\n *(['shot_id', 'gender_id', 'canonical_show_id'] + (['topic'] if key == 'topic' else []))\n ).agg(*aggs)\n rows = counts.where(\n counts['count(gender_id)'] > 0\n ).groupBy(\n *groups\n ).agg(\n func.sum('duration')\n ).collect()\n\n if key is not None:\n base_counts = full_df.groupBy(\n ['shot_id', key]\n ).agg(full_df[key], func.first('duration').alias('duration')) \\\n .groupBy(key).agg(full_df[key], func.sum('duration')).collect()\n else:\n base_counts = full_df.groupBy(\n 'shot_id'\n ).agg(\n func.first('duration').alias('duration')\n ).agg(func.sum('duration')).collect()\n base_map = {\n (row[key] if key is not None else 0): row['sum(duration)']\n for row in base_counts\n }\n \n out_rows = []\n for label in labels:\n label_rows = {\n row.gender_id: row for row in rows if key is None or row[key] == label['id']\n }\n if len(label_rows) < 3: \n continue\n\n base_dur = int(base_map[label['id']])\n if min_dur != None and base_dur < min_dur:\n continue\n \n durs = {\n g.id: int(label_rows[g.id]['sum(duration)'])\n for g in [MALE, FEMALE, UNKNOWN]\n } \n \n def adjust(g):\n return int(\n reduce(lambda a, b: \n a + b, [durs[g2] * P(gender_names[g], gender_names[g2]) \n for g2 in durs]))\n \n adj_durs = {\n g: adjust(g)\n for g in durs\n }\n \n out_rows.append({\n key: label['name'],\n 'M': format_time(durs[MALE.id]),\n 'F': format_time(durs[FEMALE.id]),\n 'U': format_time(durs[UNKNOWN.id]),\n 'base': format_time(base_dur),\n 'M%': int(100.0 * durs[MALE.id] / base_dur),\n 'F%': int(100.0 * durs[FEMALE.id] / base_dur),\n 'U%': int(100.0 * durs[UNKNOWN.id] / base_dur),\n# 'M-Adj': format_time(adj_durs[MALE.id]),\n# 'F-Adj': format_time(adj_durs[FEMALE.id]),\n# 'U-Adj': format_time(adj_durs[UNKNOWN.id]),\n# 'M-Adj%': int(100.0 * adj_durs[MALE.id] / base_dur),\n# 'F-Adj%': int(100.0 * adj_durs[FEMALE.id] / base_dur),\n# 'U-Adj%': int(100.0 * adj_durs[UNKNOWN.id] / base_dur),\n #'Overlap': int(100.0 * float(male_dur + female_dur) / base_dur) - 100\n })\n return out_rows\ngender_ordering = ['M', 'M%', 'F', 'F%', 'U', 'U%']\n#gender_ordering = ['M', 'M%', 'M-Adj', 'M-Adj%', 'F', 'F%', 'F-Adj', 'F-Adj%', 'U', 'U%', 'U-Adj', 'U-Adj%']", "_____no_output_____" ], [ "def gender_multicount_stats(key, labels, min_dur=None, no_host=False, just_host=False):\n df0 = face_genders\n if no_host:\n df0 = df0.where(df0.is_host == False) \n if just_host:\n df0 = df0.where(df0.is_host == True)\n \n if key == 'topic': \n df1 = 
df0.join(segment_links, df0.segment_id == segment_links.segment_id)\n df2 = df1.join(things, segment_links.thing_id == things.id)\n topic_type = ThingType.objects.get(name='topic').id\n df3 = df2.where(things.type_id == topic_type).select(\n *(['duration', 'channel_id', 'show_id', 'hour', 'week_day', 'gender_id'] + \\\n [things.id.alias('topic'), 'shot_id']))\n full_df = df3\n else:\n full_df = df0\n \n groups = ([key] if key is not None else []) + ['gender_id']\n rows = full_df.groupBy(*groups).agg(func.sum('duration')).collect()\n \n out_rows = []\n for label in labels:\n label_rows = {row.gender_id: row for row in rows if key is None or row[key] == label['id']}\n if len(label_rows) < 3: continue\n male_dur = int(label_rows[MALE.id]['sum(duration)'])\n female_dur = int(label_rows[FEMALE.id]['sum(duration)'])\n unknown_dur = int(label_rows[UNKNOWN.id]['sum(duration)'])\n base_dur = male_dur + female_dur\n if min_dur != None and base_dur < min_dur:\n continue\n out_rows.append({\n key: label['name'],\n 'M': format_time(male_dur),\n 'F': format_time(female_dur),\n 'U': format_time(unknown_dur),\n 'base': format_time(base_dur),\n 'M%': int(100.0 * male_dur / base_dur),\n 'F%': int(100.0 * female_dur / base_dur),\n 'U%': int(100.0 * unknown_dur / (base_dur + unknown_dur)),\n 'Overlap': 0,\n })\n return out_rows", "_____no_output_____" ], [ "def gender_speaker_stats(key, labels, min_dur=None, no_host=False):\n keys = ['duration', 'channel_id', 'show_id', 'hour', 'week_day']\n \n df0 = speakers\n if no_host:\n df0 = df0.where(df0.has_host == False)\n\n if key == 'topic': \n df1 = df0.join(segment_links, speakers.segment_id == segment_links.segment_id)\n df2 = df1.join(things, segment_links.thing_id == things.id)\n topic_type = ThingType.objects.get(name='topic').id\n df3 = df2.where(things.type_id == topic_type).select(\n *(keys + ['gender_id', things.id.alias('topic')]))\n full_df = df3\n else:\n full_df = df0\n \n aggs = [func.count('gender_id')] + [func.first(full_df[k]).alias(k) for k in keys] + \\\n ([full_df.topic] if key == 'topic' else [])\n groups = ([key] if key is not None else []) + ['gender_id'] + (['topic'] if key == 'topic' else [])\n rows = full_df.groupBy(*groups).agg(func.sum('duration')).collect()\n\n if key is not None:\n base_counts = full_df.groupBy(key).agg(full_df[key], func.sum('duration')).collect()\n else:\n base_counts = full_df.agg(func.sum('duration')).collect()\n \n base_map = {\n (row[key] if key is not None else 0): row['sum(duration)']\n for row in base_counts\n }\n \n out_rows = []\n for label in labels:\n label_rows = {row.gender_id: row for row in rows if key is None or row[key] == label['id']}\n if len(label_rows) < 2: continue\n male_dur = int(label_rows[MALE.id]['sum(duration)'])\n female_dur = int(label_rows[FEMALE.id]['sum(duration)'])\n base_dur = int(base_map[label['id']])\n if min_dur != None and base_dur < min_dur:\n continue\n out_rows.append({\n key: label['name'],\n 'M': format_time(male_dur),\n 'F': format_time(female_dur),\n 'base': format_time(base_dur),\n 'M%': int(100.0 * male_dur / base_dur),\n 'F%': int(100.0 * female_dur / base_dur),\n })\n return out_rows\n\ngender_speaker_ordering = ['M', 'M%', 'F', 'F%']", "_____no_output_____" ] ], [ [ "## All Gender", "_____no_output_____" ] ], [ [ "print('Singlecount')\nshow_df(gender_singlecount_stats(None, [{'id': 0, 'name': 'whole dataset'}]), \n gender_ordering)", "_____no_output_____" ], [ "print('Multicount')\ngender_screen_all = gender_multicount_stats(None, [{'id': 0, 'name': 'whole 
dataset'}])\ngender_screen_all_nh = gender_multicount_stats(None, [{'id': 0, 'name': 'whole dataset'}], \n no_host=True)\nshow_df(gender_screen_all, gender_ordering)", "_____no_output_____" ], [ "show_df(gender_screen_all_nh, gender_ordering)", "_____no_output_____" ], [ "print('Speaking time')\ngender_speaking_all = gender_speaker_stats(None, [{'id': 0, 'name': 'whole dataset'}])\ngender_speaking_all_nh = gender_speaker_stats(\n None, [{'id': 0, 'name': 'whole dataset'}],\n no_host=True)\nshow_df(gender_speaking_all, gender_speaker_ordering)", "_____no_output_____" ], [ "show_df(gender_speaking_all_nh, gender_speaker_ordering)", "_____no_output_____" ] ], [ [ "### Persist for Report", "_____no_output_____" ] ], [ [ "pd.DataFrame(gender_screen_all).to_csv('/app/data/screen_all.csv')\npd.DataFrame(gender_screen_all_nh).to_csv('/app/data/screen_all_nh.csv')\npd.DataFrame(gender_speaking_all).to_csv('/app/data/speaking_all.csv')", "_____no_output_____" ] ], [ [ "## Gender by Channel", "_____no_output_____" ] ], [ [ "print('Singlecount')\nshow_df(\n gender_singlecount_stats('channel_id', list(Channel.objects.values('id', 'name'))),\n ['channel_id'] + gender_ordering)", "_____no_output_____" ], [ "print('Multicount')\nshow_df(\n gender_multicount_stats('channel_id', list(Channel.objects.values('id', 'name'))),\n ['channel_id'] + gender_ordering)", "_____no_output_____" ], [ "print('Speaking time')\nshow_df(\n gender_speaker_stats('channel_id', list(Channel.objects.values('id', 'name'))),\n ['channel_id'] + gender_speaker_ordering)", "_____no_output_____" ] ], [ [ "## Gender by Show", "_____no_output_____" ] ], [ [ "print('Singlecount')\nshow_df(\n gender_singlecount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*500),\n ['show_id'] + gender_ordering)", "_____no_output_____" ], [ "print('Multicount')\ngender_screen_show = gender_multicount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*250)\ngender_screen_show_nh = gender_multicount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*250, no_host=True)\ngender_screen_show_jh = gender_multicount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*50, just_host=True)\nshow_df(gender_screen_show, ['show_id'] + gender_ordering)", "_____no_output_____" ], [ "gshow = face_genders.groupBy('video_id', 'gender_id').agg(func.sum('duration').alias('screen_sum'), func.first('show_id').alias('show_id'))\ngspeak = speakers.groupBy('video_id', 'gender_id').agg(func.sum('duration').alias('speak_sum'))\nrows = gshow.join(gspeak, ['video_id', 'gender_id']).toPandas()", "_____no_output_____" ], [ "# TODO: this is really sketchy and clobbers some variables such as videos\n\n# show = Show.objects.get(name='Fox and Friends First')\n# rows2 = rows[rows.show_id == show.id]\n# videos = collect([r for _, r in rows2.iterrows()], lambda r: int(r.video_id))\n# bs = []\n# vkeys = []\n# for vid, vrows in videos.iteritems():\n# vgender = {int(r.gender_id): r for r in vrows}\n# def balance(key):\n# return vgender[1][key] / float(vgender[1][key] + vgender[2][key])\n# try:\n# bs.append(balance('screen_sum') / balance('speak_sum'))\n# except KeyError:\n# bs.append(0)\n# vkeys.append(vid)\n# idx = np.argsort(bs)[-20:]\n# print(np.array(vkeys)[idx].tolist(), np.array(bs)[idx].tolist())", "_____no_output_____" ], [ "show_df(gender_screen_show_nh, ['show_id'] + gender_ordering)", "_____no_output_____" ], [ "print('Speaking time')\ngender_speaking_show = gender_speaker_stats('show_id', 
list(Show.objects.values('id', 'name')), min_dur=3600*3)\ngender_speaking_show_nh = gender_speaker_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*3, no_host=True)\nshow_df( \n gender_speaking_show,\n ['show_id'] + gender_speaker_ordering)", "_____no_output_____" ], [ "show_df( \n gender_speaking_show_nh,\n ['show_id'] + gender_speaker_ordering)", "_____no_output_____" ] ], [ [ "### Persist for Report", "_____no_output_____" ] ], [ [ "pd.DataFrame(gender_screen_show).to_csv('/app/data/screen_show.csv')\npd.DataFrame(gender_screen_show_nh).to_csv('/app/data/screen_show_nh.csv')\npd.DataFrame(gender_screen_show_jh).to_csv('/app/data/screen_show_jh.csv')\npd.DataFrame(gender_speaking_show).to_csv('/app/data/speaking_show.csv')\npd.DataFrame(gender_speaking_show_nh).to_csv('/app/data/speaking_show_nh.csv')", "_____no_output_____" ] ], [ [ "## Gender by Canonical Show", "_____no_output_____" ] ], [ [ "print('Singlecount')\nshow_df(\n gender_singlecount_stats(\n 'canonical_show_id', \n list(CanonicalShow.objects.values('id', 'name')),\n min_dur=3600*500\n ),\n ['canonical_show_id'] + gender_ordering\n)", "_____no_output_____" ], [ "print('Multicount')\ngender_screen_canonical_show = gender_multicount_stats(\n 'canonical_show_id', \n list(CanonicalShow.objects.values('id', 'name')), \n min_dur=3600*250\n)\ngender_screen_canonical_show_nh = gender_multicount_stats(\n 'canonical_show_id', \n list(CanonicalShow.objects.values('id', 'name')), \n min_dur=3600*250, \n no_host=True\n)\ngender_screen_canonical_show_jh = gender_multicount_stats(\n 'canonical_show_id', \n list(CanonicalShow.objects.values('id', 'name')), \n min_dur=3600*50, \n just_host=True\n)\nshow_df(gender_screen_canonical_show, ['canonical_show_id'] + gender_ordering)", "_____no_output_____" ], [ "print('Speaking time')\ngender_speaking_canonical_show = gender_speaker_stats(\n 'canonical_show_id', \n list(CanonicalShow.objects.values('id', 'name')),\n min_dur=3600*3\n)\ngender_speaking_canonical_show_nh = gender_speaker_stats(\n 'canonical_show_id', \n list(CanonicalShow.objects.values('id', 'name')), \n min_dur=3600*3,\n no_host=True\n)\nshow_df( \n gender_speaking_canonical_show,\n ['canonical_show_id'] + gender_speaker_ordering)", "_____no_output_____" ], [ "show_df( \n gender_speaking_canonical_show_nh,\n ['canonical_show_id'] + gender_speaker_ordering)", "_____no_output_____" ] ], [ [ "### Persist for Report", "_____no_output_____" ] ], [ [ "pd.DataFrame(gender_screen_canonical_show).to_csv('/app/data/screen_canonical_show.csv')\npd.DataFrame(gender_screen_canonical_show_nh).to_csv('/app/data/screen_canonical_show_nh.csv')\npd.DataFrame(gender_screen_canonical_show_jh).to_csv('/app/data/screen_canonical_show_jh.csv')\npd.DataFrame(gender_speaking_canonical_show).to_csv('/app/data/speaking_canonical_show.csv')\npd.DataFrame(gender_speaking_canonical_show_nh).to_csv('/app/data/speaking_canonical_show_nh.csv')", "_____no_output_____" ] ], [ [ "## Gender by time of day", "_____no_output_____" ] ], [ [ "print('Singlecount')\nshow_df(\n gender_singlecount_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),\n ['hour'] + gender_ordering) ", "_____no_output_____" ], [ "print('Multicount')\ngender_screen_tod = gender_multicount_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours])\nshow_df(gender_screen_tod, ['hour'] + gender_ordering) ", "_____no_output_____" ], [ "print('Speaking time')\ngender_speaking_tod = gender_speaker_stats('hour', [{'id': hour, 'name': 
format_hour(hour)} for hour in hours])\nshow_df(gender_speaking_tod, ['hour'] + gender_speaker_ordering) ", "_____no_output_____" ] ], [ [ "## Gender by Day of the Week", "_____no_output_____" ] ], [ [ "dotw = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\nprint('Singlecount')\nshow_df(\n gender_singlecount_stats('week_day', [{'id': i + 1, 'name': d} for i, d in enumerate(dotw)]),\n ['week_day'] + gender_ordering)", "_____no_output_____" ], [ "print('Multicount')\nshow_df(\n gender_multicount_stats('week_day', [{'id': i + 1, 'name': d} for i, d in enumerate(dotw)]),\n ['week_day'] + gender_ordering)", "_____no_output_____" ], [ "print('Speaking time')\nshow_df(\n gender_speaker_stats('week_day', [{'id': i + 1, 'name': d} for i, d in enumerate(dotw)]),\n ['week_day'] + gender_speaker_ordering)", "_____no_output_____" ] ], [ [ "## Gender by topic", "_____no_output_____" ] ], [ [ "# TODO: FIX ME\n\n# THOUGHTS:\n# - Try topic analysis just on a \"serious\" news show. \n# - Generate a panel from multiple clips, e.g. endless panel of people on a topic\n# - Produce an endless stream of men talking about, e.g. birth control\n\n# print('Singlecount')\n# show_df(\n# gender_singlecount_stats(\n# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],\n# min_dur=3600*5),\n# ['topic'] + gender_ordering)\n\n# check this \n# M% is the pecent of time that men are on screen when this topic is being discussed", "_____no_output_____" ], [ "# print('Multicount')\n# gender_screen_topic = gender_multicount_stats(\n# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],\n# min_dur=3600*300)\n# gender_screen_topic_nh = gender_multicount_stats(\n# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],\n# min_dur=3600*300, no_host=True)\n# show_df(gender_screen_topic, ['topic'] + gender_ordering)", "_____no_output_____" ], [ "# print('Speaking time')\n# gender_speaking_topic = gender_speaker_stats(\n# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],\n# min_dur=3600*100)\n# gender_speaking_topic_nh = gender_speaker_stats(\n# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],\n# min_dur=3600*100, no_host=True)\n# show_df(gender_speaking_topic, ['topic'] + gender_speaker_ordering)", "_____no_output_____" ] ], [ [ "## Male vs. female faces in panels\n* Smaller percentage of women in panels relative to overall dataset.", "_____no_output_____" ] ], [ [ "# # TODO: female-domainated situations?\n# # TODO: slice this on # of people in the panel\n# # TODO: small visualization that shows sample of segments\n# # TODO: panels w/ majority male vs. majority female\n\n# print('Computing panels')\n# panels = queries.panels()\n# print('Computing gender stats')\n# frame_ids = [frame.id for (frame, _) in panels]\n# counts = filter_gender(lambda qs: qs.filter(face__person__frame__id__in=frame_ids), lambda qs: qs)\n# show_df([counts], ordering)", "_____no_output_____" ] ], [ [ "# Pose\n* Animatedness of people (specifically hosts)\n * e.g. Rachel Maddow vs. others\n * Pick 3-4 hours of a few specific hosts, compute dense poses and tracks\n * Devise acceleration metric\n* More gesturing on heated exchanges?\n* Sitting vs. standing\n* Repeated gestures (debates vs. 
state of the union)\n* Head/eye orientation (are people looking at each other?)\n* Camera orientation (looking at someone from above/below)\n* How much are the hosts facing each other\n* Quantify aggressive body language", "_____no_output_____" ], [ "# Topics", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(gender_screen_tod)\nax = df.plot('hour', 'M%')\npd.DataFrame(gender_speaking_tod).plot('hour', 'M%', ax=ax)\nax.set_ylim(0, 100)\nax.set_xticks(range(len(df)))\nax.set_xticklabels(df.hour)\nax.axhline(50, color='r', linestyle='--')\nax.legend(['Screen time', 'Speaking time', '50%'])", "_____no_output_____" ], [ "# pd.DataFrame(gender_screen_topic).to_csv('/app/data/screen_topic.csv')\n# pd.DataFrame(gender_screen_topic_nh).to_csv('/app/data/screen_topic_nh.csv')\n# pd.DataFrame(gender_speaking_topic).to_csv('/app/data/speaking_topic.csv')\n# pd.DataFrame(gender_speaking_topic_nh).to_csv('/app/data/speaking_topic_nh.csv')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
4a4b84899f1b165c42ec91373b3ff7153f70f19e
78,981
ipynb
Jupyter Notebook
docs/tutorials/custom_federated_algorithms_1.ipynb
SamuelMarks/federated
fd0c3c7422ffe90063c595eb79af85843ddc82e1
[ "Apache-2.0" ]
null
null
null
docs/tutorials/custom_federated_algorithms_1.ipynb
SamuelMarks/federated
fd0c3c7422ffe90063c595eb79af85843ddc82e1
[ "Apache-2.0" ]
null
null
null
docs/tutorials/custom_federated_algorithms_1.ipynb
SamuelMarks/federated
fd0c3c7422ffe90063c595eb79af85843ddc82e1
[ "Apache-2.0" ]
null
null
null
37.185028
294
0.552703
[ [ [ "##### Copyright 2019 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Custom Federated Algorithms, Part 1: Introduction to the Federated Core", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/custom_federated_algorithms_1.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/federated/blob/master/docs/tutorials/custom_federated_algorithms_1.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>", "_____no_output_____" ], [ "This tutorial is the first part of a two-part series that demonstrates how to\nimplement custom types of federated algorithms in TensorFlow Federated (TFF)\nusing the [Federated Core (FC)](../federated_core.md) - a set of lower-level\ninterfaces that serve as a foundation upon which we have implemented the\n[Federated Learning (FL)](../federated_learning.md) layer.\n\nThis first part is more conceptual; we introduce some of the key concepts and\nprogramming abstractions used in TFF, and we demonstrate their use on a very\nsimple example with a distributed array of temperature sensors. In\n[the second part of this series](custom_federated_algorithms_2.ipynb), we use\nthe mechanisms we introduce here to implement a simple version of federated\ntraining and evaluation algorithms. As a follow-up, we encourage you to study\n[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)\nof federated averaging in `tff.learning`.\n\nBy the end of this series, you should be able to recognize that the applications\nof Federated Core are not necessarily limited to learning. 
The programming\nabstractions we offer are quite generic, and could be used, e.g., to implement\nanalytics and other custom types of computations over distributed data.\n\nAlthough this tutorial is designed to be self-contained, we encourage you to\nfirst read tutorials on\n[image classification](federated_learning_for_image_classification.ipynb) and\n[text generation](federated_learning_for_text_generation.ipynb) for a\nhigher-level and more gentle introduction to the TensorFlow Federated framework\nand the [Federated Learning](../federated_learning.md) APIs (`tff.learning`), as\nit will help you put the concepts we describe here in context.", "_____no_output_____" ], [ "## Intended Uses\n\nIn a nutshell, Federated Core (FC) is a development environment that makes it\npossible to compactly express program logic that combines TensorFlow code with\ndistributed communication operators, such as those that are used in\n[Federated Averaging](https://arxiv.org/abs/1602.05629) - computing\ndistributed sums, averages, and other types of distributed aggregations over a\nset of client devices in the system, broadcasting models and parameters to those\ndevices, etc.\n\nYou may be aware of\n[`tf.contrib.distribute`](https://www.tensorflow.org/api_docs/python/tf/contrib/distribute),\nand a natural question to ask at this point may be: in what ways does this\nframework differ? Both frameworks attempt to make TensorFlow computations\ndistributed, after all.\n\nOne way to think about it is that, whereas the stated goal of\n`tf.contrib.distribute` is *to allow users to use existing models and training\ncode with minimal changes to enable distributed training*, and much focus is on\nhow to take advantage of distributed infrastructure to make existing training\ncode more efficient, the goal of TFF's Federated Core is to give researchers and\npractitioners explicit control over the specific patterns of distributed\ncommunication they will use in their systems. The focus in FC is on providing a\nflexible and extensible language for expressing distributed data flow\nalgorithms, rather than a concrete set of implemented distributed training\ncapabilities.\n\nOne of the primary target audiences for TFF's FC API is researchers and\npractitioners who might want to experiment with new federated learning\nalgorithms and evaluate the consequences of subtle design choices that affect\nthe manner in which the flow of data in the distributed system is orchestrated,\nyet without getting bogged down by system implementation details. The level of\nabstraction that the FC API is aiming for roughly corresponds to pseudocode one\ncould use to describe the mechanics of a federated learning algorithm in a\nresearch publication - what data exists in the system and how it is transformed,\nbut without dropping to the level of individual point-to-point network message\nexchanges.\n\nTFF as a whole is targeting scenarios in which data is distributed, and must\nremain such, e.g., for privacy reasons, and where collecting all data at a\ncentralized location may not be a viable option. This has implications for the\nimplementation of machine learning algorithms that require an increased degree\nof explicit control, as compared to scenarios in which all data can be\naccumulated in a centralized location at a data center.", "_____no_output_____" ], [ "## Before we start\n\nBefore we dive into the code, please try to run the following \"Hello World\"\nexample to make sure your environment is correctly set up.
If it doesn't work,\nplease refer to the [Installation](../install.md) guide for instructions.", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\n!pip install --quiet --upgrade tensorflow_federated_nightly\n!pip install --quiet --upgrade nest_asyncio\n\nimport nest_asyncio\nnest_asyncio.apply()", "_____no_output_____" ], [ "import collections\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_federated as tff", "_____no_output_____" ], [ "@tff.federated_computation\ndef hello_world():\n  return 'Hello, World!'\n\nhello_world()", "_____no_output_____" ] ], [ [ "## Federated data\n\nOne of the distinguishing features of TFF is that it allows you to compactly\nexpress TensorFlow-based computations on *federated data*. We will be using the\nterm *federated data* in this tutorial to refer to a collection of data items\nhosted across a group of devices in a distributed system. For example,\napplications running on mobile devices may collect data and store it locally,\nwithout uploading to a centralized location. Or, an array of distributed sensors\nmay collect and store temperature readings at their locations.\n\nFederated data like those in the above examples are treated in TFF as\n[first-class citizens](https://en.wikipedia.org/wiki/First-class_citizen), i.e.,\nthey may appear as parameters and results of functions, and they have types. To\nreinforce this notion, we will refer to federated data sets as *federated\nvalues*, or as *values of federated types*.\n\nThe important point to understand is that we are modeling the entire collection\nof data items across all devices (e.g., the entire collection of temperature\nreadings from all sensors in a distributed array) as a single federated value.\n\nFor example, here's how one would define in TFF the type of a *federated float*\nhosted by a group of client devices. A collection of temperature readings that\nmaterialize across an array of distributed sensors could be modeled as a value\nof this federated type.", "_____no_output_____" ] ], [ [ "federated_float_on_clients = tff.type_at_clients(tf.float32)", "_____no_output_____" ] ], [ [ "More generally, a federated type in TFF is defined by specifying the type `T` of\nits *member constituents* - the items of data that reside on individual devices,\nand the group `G` of devices on which federated values of this type are hosted\n(plus a third, optional bit of information we'll mention shortly).
We refer to\nthe group `G` of devices hosting a federated value as the value's *placement*.\nThus, `tff.CLIENTS` is an example of a placement.", "_____no_output_____" ] ], [ [ "str(federated_float_on_clients.member)", "_____no_output_____" ], [ "str(federated_float_on_clients.placement)", "_____no_output_____" ] ], [ [ "A federated type with member constituents `T` and placement `G` can be\nrepresented compactly as `{T}@G`, as shown below.", "_____no_output_____" ] ], [ [ "str(federated_float_on_clients)", "_____no_output_____" ] ], [ [ "The curly braces `{}` in this concise notation serve as a reminder that the\nmember constituents (items of data on different devices) may differ, as you\nwould expect e.g., of temperature sensor readings, so the clients as a group are\njointly hosting a [multi-set](https://en.wikipedia.org/wiki/Multiset) of\n`T`-typed items that together constitute the federated value.\n\nIt is important to note that the member constituents of a federated value are\ngenerally opaque to the programmer, i.e., a federated value should not be\nthought of as a simple `dict` keyed by an identifier of a device in the system -\nthese values are intended to be collectively transformed only by *federated\noperators* that abstractly represent various kinds of distributed communication\nprotocols (such as aggregation). If this sounds too abstract, don't worry - we\nwill return to this shortly, and we will illustrate it with concrete examples.\n\nFederated types in TFF come in two flavors: those where the member constituents\nof a federated value may differ (as just seen above), and those where they are\nknown to be all equal. This is controlled by the third, optional `all_equal`\nparameter in the `tff.FederatedType` constructor (defaulting to `False`).", "_____no_output_____" ] ], [ [ "federated_float_on_clients.all_equal", "_____no_output_____" ] ], [ [ "A federated type with a placement `G` in which all of the `T`-typed member\nconstituents are known to be equal can be compactly represented as `T@G` (as\nopposed to `{T}@G`, that is, with the curly braces dropped to reflect the fact\nthat the multi-set of member constituents consists of a single item).", "_____no_output_____" ] ], [ [ "str(tff.type_at_clients(tf.float32, all_equal=True))", "_____no_output_____" ] ], [ [ "One example of a federated value of such type that might arise in practical\nscenarios is a hyperparameter (such as a learning rate, a clipping norm, etc.)\nthat has been broadcasted by a server to a group of devices that participate in\nfederated training.\n\nAnother example is a set of parameters for a machine learning model pre-trained\nat the server, that were then broadcasted to a group of client devices, where\nthey can be personalized for each user.\n\nFor example, suppose we have a pair of `float32` parameters `a` and `b` for a\nsimple one-dimensional linear regression model. We can construct the\n(non-federated) type of such models for use in TFF as follows. The angle braces\n`<>` in the printed type string are a compact TFF notation for named or unnamed\ntuples.", "_____no_output_____" ] ], [ [ "simple_regression_model_type = (\n tff.StructType([('a', tf.float32), ('b', tf.float32)]))\n\nstr(simple_regression_model_type)", "_____no_output_____" ] ], [ [ "Note that we are only specifying `dtype`s above. Non-scalar types are also\nsupported. 
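For instance, here is a small sketch (the rank-1 `shape=[2]` is purely an\nillustrative assumption, not something taken from the examples above) of a\nfederated type whose member constituents are vectors of readings:\n\n```python\n# Each client holds a rank-1 tensor of two float32 readings.\nreadings_type = tff.type_at_clients(\n    tff.TensorType(dtype=tf.float32, shape=[2]))\nstr(readings_type)  # '{float32[2]}@CLIENTS'\n```\n\n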
In the regression model above, `tf.float32` is a shortcut notation for the more\ngeneral `tff.TensorType(dtype=tf.float32, shape=[])`.\n\nWhen this model is broadcasted to clients, the type of the resulting federated\nvalue can be represented as shown below.", "_____no_output_____" ] ], [ [ "str(tff.type_at_clients(\n    simple_regression_model_type, all_equal=True))", "_____no_output_____" ] ], [ [ "By analogy with the *federated float* above, we will refer to such a type as a\n*federated tuple*. More generally, we'll often use the term *federated XYZ* to\nrefer to a federated value in which member constituents are *XYZ*-like. Thus, we\nwill talk about things like *federated tuples*, *federated sequences*,\n*federated models*, and so on.\n\nNow, coming back to `float32@CLIENTS` - while it appears replicated across\nmultiple devices, it is actually a single `float32`, since all members are the\nsame. In general, you may think of any *all-equal* federated type, i.e., one of\nthe form `T@G`, as isomorphic to a non-federated type `T`, since in both cases,\nthere's actually only a single (albeit potentially replicated) item of type `T`.\n\nGiven the isomorphism between `T` and `T@G`, you may wonder what purpose, if\nany, the latter types might serve. Read on.", "_____no_output_____" ], [ "## Placements\n\n### Design Overview\n\nIn the preceding section, we've introduced the concept of *placements* - groups\nof system participants that might be jointly hosting a federated value, and\nwe've demonstrated the use of `tff.CLIENTS` as an example specification of a\nplacement.\n\nTo explain why the notion of a *placement* is so fundamental that we needed to\nincorporate it into the TFF type system, recall what we mentioned at the\nbeginning of this tutorial about some of the intended uses of TFF.\n\nAlthough in this tutorial, you will only see TFF code being executed locally in\na simulated environment, our goal is for TFF to enable writing code that you\ncould deploy for execution on groups of physical devices in a distributed\nsystem, potentially including mobile or embedded devices running Android. Each\nof those devices would receive a separate set of instructions to execute\nlocally, depending on the role it plays in the system (an end-user device, a\ncentralized coordinator, an intermediate layer in a multi-tier architecture,\netc.). It is important to be able to reason about which subsets of devices\nexecute what code, and where different portions of the data might physically\nmaterialize.\n\nThis is especially important when dealing with, e.g., application data on mobile\ndevices. Since the data is private and can be sensitive, we need the ability to\nstatically verify that this data will never leave the device (and prove facts\nabout how the data is being processed). The placement specifications are one of\nthe mechanisms designed to support this.\n\nTFF has been designed as a data-centric programming environment, and as such,\nunlike some of the existing frameworks that focus on *operations* and where\nthose operations might *run*, TFF focuses on *data*, where that data\n*materializes*, and how it's being *transformed*. Consequently, placement is\nmodeled as a property of data in TFF, rather than as a property of operations on\ndata.
Indeed, as you're about to see in the next section, some of the TFF\noperations span across locations, and run \"in the network\", so to speak, rather\nthan being executed by a single machine or a group of machines.\n\nRepresenting the type of a certain value as `T@G` or `{T}@G` (as opposed to just\n`T`) makes data placement decisions explicit, and together with a static\nanalysis of programs written in TFF, it can serve as a foundation for providing\nformal privacy guarantees for sensitive on-device data.\n\nAn important thing to note at this point, however, is that while we encourage\nTFF users to be explicit about *groups* of participating devices that host the\ndata (the placements), the programmer will never deal with the raw data or\nidentities of the *individual* participants.\n\n(Note: While it goes far outside the scope of this tutorial, we should mention\nthat there is one notable exception to the above, a `tff.federated_collect`\noperator that is intended as a low-level primitive, only for specialized\nsituations. Its explicit use in situations where it can be avoided is not\nrecommended, as it may limit the possible future applications. For example, if\nduring the course of static analysis, we determine that a computation uses such\nlow-level mechanisms, we may disallow its access to certain types of data.)\n\nWithin the body of TFF code, by design, there's no way to enumerate the devices\nthat constitute the group represented by `tff.CLIENTS`, or to probe for the\nexistence of a specific device in the group. There's no concept of a device or\nclient identity anywhere in the Federated Core API, the underlying set of\narchitectural abstractions, or the core runtime infrastructure we provide to\nsupport simulations. All the computation logic you write will be expressed as\noperations on the entire client group.\n\nRecall here what we mentioned earlier about values of federated types being\nunlike Python `dict`, in that one cannot simply enumerate their member\nconstituents. Think of values that your TFF program logic manipulates as being\nassociated with placements (groups), rather than with individual participants.\n\nPlacements *are* designed to be first-class citizens in TFF as well, and can\nappear as parameters and results of a `placement` type (to be represented by\n`tff.PlacementType` in the API). In the future, we plan to provide a variety of\noperators to transform or combine placements, but this is outside the scope of\nthis tutorial. For now, it suffices to think of `placement` as an opaque\nprimitive built-in type in TFF, similar to how `int` and `bool` are opaque\nbuilt-in types in Python, with `tff.CLIENTS` being a constant literal of this\ntype, not unlike `1` being a constant literal of type `int`.\n\n### Specifying Placements\n\nTFF provides two basic placement literals, `tff.CLIENTS` and `tff.SERVER`, to\nmake it easy to express the rich variety of practical scenarios that are\nnaturally modeled as client-server architectures, with multiple *client* devices\n(mobile phones, embedded devices, distributed databases, sensors, etc.)\norchestrated by a single centralized *server* coordinator.
TFF is designed to\nalso support custom placements, multiple client groups, multi-tiered and other,\nmore general distributed architectures, but discussing them is outside the scope\nof this tutorial.\n\nTFF doesn't prescribe what either the `tff.CLIENTS` or the `tff.SERVER` actually\nrepresents.\n\nIn particular, `tff.SERVER` may be a single physical device (a member of a\nsingleton group), but it might just as well be a group of replicas in a\nfault-tolerant cluster running state machine replication - we do not make any\nspecial architectural assumptions. Rather, we use the `all_equal` bit mentioned\nin the preceding section to express the fact that we're generally dealing with\nonly a single item of data at the server.\n\nLikewise, `tff.CLIENTS` in some applications might represent all clients in the\nsystem - what in the context of federated learning we sometimes refer to as the\n*population*, but e.g., in\n[production implementations of Federated Averaging](https://arxiv.org/abs/1602.05629),\nit may represent a *cohort* - a subset of the clients selected for participation\nin a particular round of training. The abstractly defined placements are given\nconcrete meaning when a computation in which they appear is deployed for\nexecution (or simply invoked like a Python function in a simulated environment,\nas is demonstrated in this tutorial). In our local simulations, the group of\nclients is determined by the federated data supplied as input.", "_____no_output_____" ], [ "## Federated computations\n\n### Declaring federated computations\n\nTFF is designed as a strongly-typed functional programming environment that\nsupports modular development.\n\nThe basic unit of composition in TFF is a *federated computation* - a section of\nlogic that may accept federated values as input and return federated values as\noutput. Here's how you can define a computation that calculates the average of\nthe temperatures reported by the sensor array from our previous example.", "_____no_output_____" ] ], [ [ "@tff.federated_computation(tff.type_at_clients(tf.float32))\ndef get_average_temperature(sensor_readings):\n  return tff.federated_mean(sensor_readings)", "_____no_output_____" ] ], [ [ "Looking at the above code, at this point you might be asking - aren't there\nalready decorator constructs to define composable units such as\n[`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function)\nin TensorFlow, and if so, why introduce yet another one, and how is it\ndifferent?\n\nThe short answer is that the code generated by the `tff.federated_computation`\nwrapper is *neither* TensorFlow, *nor is it* Python - it's a specification of a\ndistributed system in an internal platform-independent *glue* language. At this\npoint, this will undoubtedly sound cryptic, but please bear this intuitive\ninterpretation of a federated computation as an abstract specification of a\ndistributed system in mind. We'll explain it in a minute.\n\nFirst, let's play with the definition a bit. TFF computations are generally\nmodeled as functions - with or without parameters, but with well-defined type\nsignatures.
You can print the type signature of a computation by querying its\n`type_signature` property, as shown below.", "_____no_output_____" ] ], [ [ "str(get_average_temperature.type_signature)", "_____no_output_____" ] ], [ [ "The type signature tells us that the computation accepts a collection of\ndifferent sensor readings on client devices, and returns a single average on the\nserver.\n\nBefore we go any further, let's reflect on this for a minute - the input and\noutput of this computation are *in different places* (on `CLIENTS` vs. at the\n`SERVER`). Recall what we said in the preceding section on placements about how\n*TFF operations may span across locations, and run in the network*, and what we\njust said about federated computations as representing abstract specifications\nof distributed systems. We have just defined one such computation - a simple\ndistributed system in which data is consumed at client devices, and the\naggregate results emerge at the server.\n\nIn many practical scenarios, the computations that represent top-level tasks\nwill tend to accept their inputs and report their outputs at the server - this\nreflects the idea that computations might be triggered by *queries* that\noriginate and terminate on the server.\n\nHowever, the FC API does not impose this assumption, and many of the building blocks\nwe use internally (including numerous `tff.federated_...` operators you may find\nin the API) have inputs and outputs with distinct placements, so in general, you\nshould not think about a federated computation as something that *runs on the\nserver* or is *executed by a server*. The server is just one type of participant\nin a federated computation. In thinking about the mechanics of such\ncomputations, it's best to always default to the global network-wide\nperspective, rather than the perspective of a single centralized coordinator.\n\nIn general, functional type signatures are compactly represented as `(T -> U)`\nfor types `T` and `U` of inputs and outputs, respectively. The type of the\nformal parameter (such as `sensor_readings` in this case) is specified as the\nargument to the decorator. You don't need to specify the type of the result -\nit's determined automatically.\n\nAlthough TFF does offer limited forms of polymorphism, programmers are strongly\nencouraged to be explicit about the types of data they work with, as that makes\nunderstanding, debugging, and formally verifying properties of your code easier.\nIn some cases, explicitly specifying types is a requirement (e.g., polymorphic\ncomputations are currently not directly executable).\n\n### Executing federated computations\n\nIn order to support development and debugging, TFF allows you to directly invoke\ncomputations defined this way as Python functions, as shown below. Where the\ncomputation expects a value of a federated type with the `all_equal` bit set to\n`False`, you can feed it as a plain `list` in Python, and for federated types\nwith the `all_equal` bit set to `True`, you can just directly feed the (single)\nmember constituent.
This is also how the results are reported back to you.", "_____no_output_____" ] ], [ [ "get_average_temperature([68.5, 70.3, 69.8])", "_____no_output_____" ] ], [ [ "When running computations like this in simulation mode, you act as an external\nobserver with a system-wide view, who has the ability to supply inputs and\nconsume outputs at any location in the network, as indeed is the case here -\nyou supplied client values at input, and consumed the server result.\n\nNow, let's return to a note we made earlier about the\n`tff.federated_computation` decorator emitting code in a *glue* language.\nAlthough the logic of TFF computations can be expressed as ordinary functions in\nPython (you just need to decorate them with `tff.federated_computation` as we've\ndone above), and you can directly invoke them with Python arguments just\nlike any other Python functions in this notebook, behind the scenes, as we noted\nearlier, TFF computations are actually *not* Python.\n\nWhat we mean by this is that when the Python interpreter encounters a function\ndecorated with `tff.federated_computation`, it traces the statements in this\nfunction's body once (at definition time), and then constructs a\n[serialized representation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/proto/v0/computation.proto)\nof the computation's logic for future use - whether for execution, or to be\nincorporated as a sub-component into another computation.\n\nYou can verify this by adding a print statement, as follows:", "_____no_output_____" ] ], [ [ "@tff.federated_computation(tff.type_at_clients(tf.float32))\ndef get_average_temperature(sensor_readings):\n\n  print ('Getting traced, the argument is \"{}\".'.format(\n      type(sensor_readings).__name__))\n\n  return tff.federated_mean(sensor_readings)", "Getting traced, the argument is \"ValueImpl\".\n" ] ], [ [ "You can think of Python code that defines a federated computation similarly to\nhow you would think of Python code that builds a TensorFlow graph in a non-eager\ncontext (if you're not familiar with the non-eager uses of TensorFlow, think of\nyour Python code defining a graph of operations to be executed later, but not\nactually running them on the fly). The non-eager graph-building code in\nTensorFlow is Python, but the TensorFlow graph constructed by this code is\nplatform-independent and serializable.\n\nLikewise, TFF computations are defined in Python, but the Python statements in\ntheir bodies, such as `tff.federated_mean` in the example we've just shown,\nare compiled into a portable and platform-independent serializable\nrepresentation under the hood.\n\nAs a developer, you don't need to concern yourself with the details of this\nrepresentation, as you will never need to directly work with it, but you should\nbe aware of its existence, the fact that TFF computations are fundamentally\nnon-eager, and cannot capture arbitrary Python state. Python code contained in a\nTFF computation's body is executed at definition time, when the body of the\nPython function decorated with `tff.federated_computation` is traced before\ngetting serialized. It's not retraced at invocation time (except when the\nfunction is polymorphic; please refer to the documentation pages for details).\n\nYou may wonder why we've chosen to introduce a dedicated internal non-Python\nrepresentation. 
One reason is that ultimately, TFF computations are intended to\nbe deployable to real physical environments, and hosted on mobile or embedded\ndevices, where Python may not be available.\n\nAnother reason is that TFF computations express the global behavior of\ndistributed systems, as opposed to Python programs which express the local\nbehavior of individual participants. You can see that in the simple example\nabove, with the special operator `tff.federated_mean` that accepts data on\nclient devices, but deposits the results on the server.\n\nThe operator `tff.federated_mean` cannot be easily modeled as an ordinary\noperator in Python, since it doesn't execute locally - as noted earlier, it\nrepresents a distributed system that coordinates the behavior of multiple system\nparticipants. We will refer to such operators as *federated operators*, to\ndistinguish them from ordinary (local) operators in Python.\n\nThe TFF type system, and the fundamental set of operations supported in TFF's\nlanguage, thus deviate significantly from those in Python, necessitating the\nuse of a dedicated representation.\n\n### Composing federated computations\n\nAs noted above, federated computations and their constituents are best\nunderstood as models of distributed systems, and you can think of composing\nfederated computations as composing more complex distributed systems from\nsimpler ones. You can think of the `tff.federated_mean` operator as a kind of\nbuilt-in template federated computation with a type signature `({T}@CLIENTS ->\nT@SERVER)` (indeed, just like computations you write, this operator also has a\ncomplex structure - under the hood we break it down into simpler operators).\n\nThe same is true of composing federated computations. The computation\n`get_average_temperature` may be invoked in a body of another Python function\ndecorated with `tff.federated_computation` - doing so will cause it to be\nembedded in the body of the parent, much in the same way `tff.federated_mean`\nwas embedded in the body of `get_average_temperature` earlier.\n\nAn important restriction to be aware of is that bodies of Python functions\ndecorated with `tff.federated_computation` must consist *only* of federated\noperators, i.e., they cannot directly contain TensorFlow operations. For\nexample, you cannot directly use `tf.nest` interfaces to add a pair of\nfederated values. TensorFlow code must be confined to blocks of code decorated\nwith `tff.tf_computation`, discussed in the following section. Only when\nwrapped in this manner can the wrapped TensorFlow code be invoked in the body of\na `tff.federated_computation`.\n\nThe reasons for this separation are technical (it's hard to trick operators such\nas `tf.add` into working with non-tensors) as well as architectural. The language of\nfederated computations (i.e., the logic constructed from serialized bodies of\nPython functions decorated with `tff.federated_computation`) is designed to\nserve as a platform-independent *glue* language. This glue language is currently\nused to build distributed systems from embedded sections of TensorFlow code\n(confined to `tff.tf_computation` blocks). In the fullness of time, we\nanticipate the need to embed sections of other, non-TensorFlow logic, such as\nrelational database queries that might represent input pipelines, all connected\ntogether using the same glue language (the `tff.federated_computation` blocks).", "_____no_output_____" ], [ "## TensorFlow logic\n\n### Declaring TensorFlow computations\n\nTFF is designed for use with TensorFlow. 
As such, the bulk of the code you will\nwrite in TFF is likely to be ordinary (i.e., locally-executing) TensorFlow code.\nIn order to use such code with TFF, as noted above, it just needs to be\ndecorated with `tff.tf_computation`.\n\nFor example, here's how we could implement a function that takes a number and\nadds `0.5` to it.", "_____no_output_____" ] ], [ [ "@tff.tf_computation(tf.float32)\ndef add_half(x):\n  return tf.add(x, 0.5)", "_____no_output_____" ] ], [ [ "Once again, looking at this, you may be wondering why we should define another\ndecorator `tff.tf_computation` instead of simply using an existing mechanism\nsuch as `tf.function`. Unlike in the preceding section, here we are\ndealing with an ordinary block of TensorFlow code.\n\nThere are a few reasons for this, the full treatment of which goes beyond the\nscope of this tutorial, but it's worth naming the main one:\n\n* In order to embed reusable building blocks implemented using TensorFlow code\n  in the bodies of federated computations, they need to satisfy certain\n  properties - such as getting traced and serialized at definition time,\n  having type signatures, etc. This generally requires some form of a\n  decorator.\n\nIn general, we recommend using TensorFlow's native mechanisms for composition,\nsuch as `tf.function`, wherever possible, as the exact manner in\nwhich TFF's decorator interacts with eager functions can be expected to evolve.\n\nNow, coming back to the example code snippet above, the computation `add_half`\nwe just defined can be treated by TFF just like any other TFF computation. In\nparticular, it has a TFF type signature.", "_____no_output_____" ] ], [ [ "str(add_half.type_signature)", "_____no_output_____" ] ], [ [ "Note this type signature does not have placements. TensorFlow computations\ncannot consume or return federated types.\n\nYou can now also use `add_half` as a building block in other computations. For\nexample, here's how you can use the `tff.federated_map` operator to apply\n`add_half` pointwise to all member constituents of a federated float on client\ndevices.", "_____no_output_____" ] ], [ [ "@tff.federated_computation(tff.type_at_clients(tf.float32))\ndef add_half_on_clients(x):\n  return tff.federated_map(add_half, x)", "_____no_output_____" ], [ "str(add_half_on_clients.type_signature)", "_____no_output_____" ] ], [ [ "### Executing TensorFlow computations\n\nExecution of computations defined with `tff.tf_computation` follows the same\nrules as those we described for `tff.federated_computation`. They can be invoked\nas ordinary callables in Python, as follows.", "_____no_output_____" ] ], [ [ "add_half_on_clients([1.0, 3.0, 2.0])", "_____no_output_____" ] ], [ [ "Once again, it is worth noting that invoking the computation\n`add_half_on_clients` in this manner simulates a distributed process. Data is\nconsumed on clients, and returned on clients. Indeed, this computation has each\nclient perform a local action. There is no `tff.SERVER` explicitly mentioned in\nthis system (even if in practice, orchestrating such processing might involve\none). Think of a computation defined this way as conceptually analogous to the\n`Map` stage in `MapReduce`.\n\nAlso, keep in mind that what we said in the preceding section about TFF\ncomputations getting serialized at definition time remains true for\n`tff.tf_computation` code as well - the Python body of `add_half_on_clients`\ngets traced once at definition time. 
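(If you'd like to convince yourself of this, you can add a print statement to\nits body, just as we did for `get_average_temperature` above - you should see it\nfire once, at definition, and remain silent afterwards.) 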
On subsequent invocations, TFF uses its\nserialized representation.\n\nThe only difference between Python methods decorated with\n`tff.federated_computation` and those decorated with `tff.tf_computation` is\nthat the latter are serialized as TensorFlow graphs (whereas the former are not\nallowed to contain TensorFlow code directly embedded in them).\n\nUnder the hood, each method decorated with `tff.tf_computation` temporarily\ndisables eager execution in order to allow the computation's structure to be\ncaptured. While eager execution is locally disabled, you are welcome to use\neager TensorFlow, AutoGraph, TensorFlow 2.0 constructs, etc., so long as you\nwrite the logic of your computation in a manner such that it can get correctly\nserialized.\n\nFor example, the following code will fail:", "_____no_output_____" ] ], [ [ "try:\n\n  # Eager mode\n  constant_10 = tf.constant(10.)\n\n  @tff.tf_computation(tf.float32)\n  def add_ten(x):\n    return x + constant_10\n\nexcept Exception as err:\n  print (err)", "Attempting to capture an EagerTensor without building a function.\n" ] ], [ [ "The above fails because `constant_10` has already been constructed outside of\nthe graph that `tff.tf_computation` constructs internally in the body of\n`add_ten` during the serialization process.\n\nOn the other hand, invoking Python functions that modify the current graph when\ncalled inside a `tff.tf_computation` is fine:", "_____no_output_____" ] ], [ [ "def get_constant_10():\n  return tf.constant(10.)\n\[email protected]_computation(tf.float32)\ndef add_ten(x):\n  return x + get_constant_10()\n\nadd_ten(5.0)", "_____no_output_____" ] ], [ [ "Note that the serialization mechanisms in TensorFlow are evolving, and we expect\nthe details of how TFF serializes computations to evolve as well.\n\n### Working with `tf.data.Dataset`s\n\nAs noted earlier, a unique feature of `tff.tf_computation`s is that they allow\nyou to work with `tf.data.Dataset`s defined abstractly as formal parameters by\nyour code. Parameters to be represented in TensorFlow as data sets need to be\ndeclared using the `tff.SequenceType` constructor.\n\nFor example, the type specification `tff.SequenceType(tf.float32)` defines an\nabstract sequence of float elements in TFF. Sequences can contain either\ntensors, or complex nested structures (we'll see examples of those later). The\nconcise representation of a sequence of `T`-typed items is `T*`.", "_____no_output_____" ] ], [ [ "float32_sequence = tff.SequenceType(tf.float32)\n\nstr(float32_sequence)", "_____no_output_____" ] ], [ [ "Suppose that in our temperature sensor example, each sensor holds not just one\ntemperature reading, but multiple. 
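\n\nAs a quick refresher on the underlying, purely local TensorFlow idiom (a\nminimal sketch, independent of TFF, with illustrative variable names),\n`tf.data.Dataset.reduce` folds a two-argument function over the elements of a\ndataset:\n\n```python\n# Plain TensorFlow, outside of any TFF decorator: sums 1.0 + 2.0 + 3.0.\nds = tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0])\ntotal = ds.reduce(0.0, lambda state, value: state + value)  # -> 6.0\n```\n\n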
Here's how you can define a TFF computation\nin TensorFlow that calculates the average of temperatures in a single local data\nset using the `tf.data.Dataset.reduce` operator.", "_____no_output_____" ] ], [ [ "@tff.tf_computation(tff.SequenceType(tf.float32))\ndef get_local_temperature_average(local_temperatures):\n  sum_and_count = (\n      local_temperatures.reduce((0.0, 0), lambda x, y: (x[0] + y, x[1] + 1)))\n  return sum_and_count[0] / tf.cast(sum_and_count[1], tf.float32)", "_____no_output_____" ], [ "str(get_local_temperature_average.type_signature)", "_____no_output_____" ] ], [ [ "In the body of a method decorated with `tff.tf_computation`, formal parameters\nof a TFF sequence type are represented simply as objects that behave like\n`tf.data.Dataset`, i.e., support the same properties and methods (they are\ncurrently not implemented as subclasses of that type - this may change as the\nsupport for data sets in TensorFlow evolves).\n\nYou can easily verify this as follows.", "_____no_output_____" ] ], [ [ "@tff.tf_computation(tff.SequenceType(tf.int32))\ndef foo(x):\n  return x.reduce(np.int32(0), lambda x, y: x + y)\n\nfoo([1, 2, 3])", "_____no_output_____" ] ], [ [ "Keep in mind that unlike ordinary `tf.data.Dataset`s, these dataset-like objects\nare placeholders. They don't contain any elements, since they represent abstract\nsequence-typed parameters, to be bound to concrete data when used in a concrete\ncontext. Support for abstractly-defined placeholder data sets is still somewhat\nlimited at this point, and in the early days of TFF, you may encounter certain\nrestrictions, but we won't need to worry about them in this tutorial (please\nrefer to the documentation pages for details).\n\nWhen locally executing a computation that accepts a sequence in simulation\nmode, such as in this tutorial, you can feed the sequence as a Python list, as\nbelow (as well as in other ways, e.g., as a `tf.data.Dataset` in eager mode, but\nfor now, we'll keep it simple).", "_____no_output_____" ] ], [ [ "get_local_temperature_average([68.5, 70.3, 69.8])", "_____no_output_____" ] ], [ [ "Like all other TFF types, sequences like those defined above can use the\n`tff.StructType` constructor to define nested structures. For example,\nhere's how one could declare a computation that accepts a sequence of pairs `A`,\n`B`, and returns the sum of their products. We include the tracing statements in\nthe body of the computation so that you can see how the TFF type signature\ntranslates into the dataset's `element_spec`.", "_____no_output_____" ] ], [ [ "@tff.tf_computation(tff.SequenceType(collections.OrderedDict([('A', tf.int32), ('B', tf.int32)])))\ndef foo(ds):\n  print('element_structure = {}'.format(ds.element_spec))\n  return ds.reduce(np.int32(0), lambda total, x: total + x['A'] * x['B'])", "element_structure = OrderedDict([('A', TensorSpec(shape=(), dtype=tf.int32, name=None)), ('B', TensorSpec(shape=(), dtype=tf.int32, name=None))])\n" ], [ "str(foo.type_signature)", "_____no_output_____" ], [ "foo([{'A': 2, 'B': 3}, {'A': 4, 'B': 5}])", "_____no_output_____" ] ], [ [ "The support for using `tf.data.Datasets` as formal parameters is still somewhat\nlimited and evolving, although functional in simple scenarios such as those used\nin this tutorial.\n\n## Putting it all together\n\nNow, let's try again to use our TensorFlow computation in a federated setting.\nSuppose we have a group of sensors that each have a local sequence of\ntemperature readings. 
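\n\nIn TFF's concise notation, such on-device data has the federated type\n`{float32*}@CLIENTS` - a federated value whose member constituents are\nsequences. As a small sketch (the variable name is illustrative), this type can\nbe spelled out with the constructors introduced earlier:\n\n```python\n# One abstract sequence of float32 readings per client device.\nreadings_at_clients = tff.type_at_clients(tff.SequenceType(tf.float32))\nprint(str(readings_at_clients))  # expected to print '{float32*}@CLIENTS'\n```\n\n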
We can compute the global temperature average by averaging\nthe sensors' local averages as follows.", "_____no_output_____" ] ], [ [ "@tff.federated_computation(\n    tff.type_at_clients(tff.SequenceType(tf.float32)))\ndef get_global_temperature_average(sensor_readings):\n  return tff.federated_mean(\n      tff.federated_map(get_local_temperature_average, sensor_readings))", "_____no_output_____" ] ], [ [ "Note that this isn't a simple average across all local temperature readings from\nall clients, as that would require weighting contributions from different clients\nby the number of readings they locally maintain. We leave it as an exercise for\nthe reader to update the above code; the `tff.federated_mean` operator\naccepts the weight as an optional second argument (expected to be a federated\nfloat).\n\nAlso note that the input to `get_global_temperature_average` now becomes a\n*federated float sequence*. Federated sequences are how we will typically represent\non-device data in federated learning, with sequence elements typically\nrepresenting data batches (you will see examples of this shortly).", "_____no_output_____" ] ], [ [ "str(get_global_temperature_average.type_signature)", "_____no_output_____" ] ], [ [ "Here's how we can locally execute the computation on a sample of data in Python.\nNotice that the way we supply the input is now as a `list` of `list`s. The outer\nlist iterates over the devices in the group represented by `tff.CLIENTS`, and\nthe inner ones iterate over elements in each device's local sequence.", "_____no_output_____" ] ], [ [ "get_global_temperature_average([[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]])", "_____no_output_____" ] ], [ [ "This concludes the first part of the tutorial... we encourage you to continue on\nto the [second part](custom_federated_algorithms_2.ipynb).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a4bc792952f9fd23a6d2146b54503988ec63b0f
12,968
ipynb
Jupyter Notebook
code/Validate_Over_Concentrations/code/8_Check_Cell_Count.ipynb
menchelab/Perturbome
c93aeb2d42a1900f5060322732dd97f8eb8db7bd
[ "MIT" ]
5
2019-11-15T19:58:31.000Z
2021-12-08T19:30:10.000Z
code/Validate_Over_Concentrations/code/8_Check_Cell_Count.ipynb
mcaldera/Perturbome
82c752f90f7100865c09cfea0f1fe96deffe2ed9
[ "MIT" ]
1
2020-01-06T21:23:57.000Z
2020-01-07T14:06:21.000Z
code/Validate_Over_Concentrations/code/8_Check_Cell_Count.ipynb
mcaldera/Perturbome
82c752f90f7100865c09cfea0f1fe96deffe2ed9
[ "MIT" ]
4
2019-11-26T07:34:49.000Z
2022-02-22T06:41:43.000Z
37.588406
365
0.538402
[ [ [ "# Check Cell Count", "_____no_output_____" ], [ "## Libraries", "_____no_output_____" ] ], [ [ "import pandas\nimport MySQLdb\nimport numpy as np\nimport pickle\nimport os", "_____no_output_____" ] ], [ [ "## Functions and definitions", "_____no_output_____" ] ], [ [ "# - - - - - - - - - - - - - - - - - - - -\n# Define Experiment\ntable = 'IsabelCLOUPAC_Per_Image'\n\n# - - - - - - - - - - - - - - - - - - - -\n\n\n\n\ndef ensure_dir(file_path):\n '''\n Function to ensure a file path exists, else creates the path\n\n :param file_path:\n :return:\n '''\n directory = os.path.dirname(file_path)\n if not os.path.exists(directory):\n os.makedirs(directory)", "_____no_output_____" ] ], [ [ "## Main Functions", "_____no_output_____" ] ], [ [ "def create_Single_CellCounts(db_table):\n db = MySQLdb.connect(\"menchelabdb.int.cemm.at\", \"root\", \"cqsr4h\", \"ImageAnalysisDDI\")\n\n string = \"select Image_Metadata_ID_A from \"+db_table+\" group by Image_Metadata_ID_A;\"\n\n data = pandas.read_sql(string, con=db)['Image_Metadata_ID_A']\n\n #with open('../results/FeatureVectors/SingleVectors_' + str(min(plates)) + '_to_' + str(\n # max(plates)) + '_NoCutoff_' + str(cast_int) + '.pickle', 'rb') as handle:\n # single_Vectors = pickle.load(handle)\n\n singles = list(data)\n singles.sort()\n\n if 'PosCon' in singles:\n singles.remove('PosCon')\n\n if 'DMSO' in singles:\n singles.remove('DMSO')\n\n # Define Database to check for missing Images\n\n string = \"select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_ID_A,Image_Metadata_Conc_A,Image_Metadata_Plate from \" + db_table + \" where Image_Metadata_ID_A not like 'DMSO' and Image_Metadata_Transfer_A like 'YES' and Image_Metadata_Transfer_A like 'YES' group by Image_Metadata_ID_A,Image_Metadata_Plate,Image_Metadata_Well;\"\n\n data = pandas.read_sql(string,con=db)\n\n ensure_dir('../results/'+table+'/CellCount/SinglesCellCount.csv')\n fp_out = open('../results/'+table+'/CellCount/SinglesCellCount.csv','w')\n fp_out.write('Drug,Conc,AVG_CellCount\\n')\n for drug in singles:\n\n drug_values = data.loc[data['Image_Metadata_ID_A'] == drug][['SUM(Image_Count_Cytoplasm)','Image_Metadata_Conc_A']]\n concentrations = list(set(drug_values['Image_Metadata_Conc_A'].values))\n concentrations.sort()\n \n for conc in concentrations:\n\n if len(drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values) > 0:\n cellcount = np.mean(drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values)\n cellcount = int(cellcount)\n else:\n cellcount = 'nan'\n\n fp_out.write(drug+','+str(conc)+','+str(cellcount) +'\\n')\n fp_out.close()\n\n \ndef create_Single_CellCounts_individualReplicates(db_table):\n db = MySQLdb.connect(\"menchelabdb.int.cemm.at\", \"root\", \"cqsr4h\", \"ImageAnalysisDDI\")\n\n string = \"select Image_Metadata_ID_A from \"+db_table+\" group by Image_Metadata_ID_A;\"\n\n data = pandas.read_sql(string, con=db)['Image_Metadata_ID_A']\n\n #with open('../results/FeatureVectors/SingleVectors_' + str(min(plates)) + '_to_' + str(\n # max(plates)) + '_NoCutoff_' + str(cast_int) + '.pickle', 'rb') as handle:\n # single_Vectors = pickle.load(handle)\n\n singles = list(data)\n singles.sort()\n\n if 'PosCon' in singles:\n singles.remove('PosCon')\n\n if 'DMSO' in singles:\n singles.remove('DMSO')\n\n\n #plates = range(1315001, 1315124, 10)\n\n #string = \"select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_ID_A,Image_Metadata_ID_B,Image_Metadata_Plate 
from \" + db_table + \" where Image_Metadata_ID_B like 'DMSO' and Image_Metadata_Transfer_A like 'YES' and Image_Metadata_Transfer_B like 'YES' group by Image_Metadata_ID_A,Image_Metadata_Plate,Image_Metadata_Well;\"\n string = \"select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_ID_A,Image_Metadata_Conc_A,Image_Metadata_Plate from \" + db_table + \" where Image_Metadata_ID_A not like 'DMSO' and Image_Metadata_Transfer_A like 'YES' and Image_Metadata_Transfer_A like 'YES' group by Image_Metadata_ID_A,Image_Metadata_Plate,Image_Metadata_Well;\"\n\n \n \n data = pandas.read_sql(string,con=db)\n\n\n ensure_dir('../results/' + table + '/CellCount/SinglesCellCount_AllReplicates.csv')\n fp_out = open('../results/' + table + '/CellCount/SinglesCellCount_AllReplicates.csv','w')\n #fp_out.write('Drug,CellCounts\\n')\n fp_out.write('Drug,Conc,Replicate1,Replicate2\\n')\n for drug in singles:\n drug_values = data.loc[data['Image_Metadata_ID_A'] == drug][['SUM(Image_Count_Cytoplasm)','Image_Metadata_Conc_A']]\n concentrations = list(set(drug_values['Image_Metadata_Conc_A'].values))\n concentrations.sort()\n \n for conc in concentrations:\n \n if len(drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values) > 0:\n cellcounts = drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values\n\n fp_out.write(drug + ',' +str(conc)+','+ ','.join([str(x) for x in cellcounts]) + '\\n')\n\n\n fp_out.close()\n \n\ndef getDMSO_Untreated_CellCount(db_table):\n # Define Database to check for missing Images\n db = MySQLdb.connect(\"menchelabdb.int.cemm.at\", \"root\", \"cqsr4h\", \"ImageAnalysisDDI\")\n\n\n string = \"select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_Plate from \" + db_table + \" where Image_Metadata_ID_A like 'DMSO' and Image_Metadata_Transfer_A like 'YES' group by Image_Metadata_Well,Image_Metadata_Plate;\"\n data = pandas.read_sql(string,con=db)\n\n\n\n mean = np.mean(data['SUM(Image_Count_Cytoplasm)'])\n std = np.std(data['SUM(Image_Count_Cytoplasm)'])\n max_val = np.percentile(data['SUM(Image_Count_Cytoplasm)'],98)\n\n ensure_dir('../results/' + table + '/CellCount/DMSO_Overview.csv')\n fp_out = open('../results/' + table + '/CellCount/DMSO_Overview.csv', 'w')\n fp_out.write('Mean,Std,Max\\n%f,%f,%f' %(mean,std,max_val))\n fp_out.close()\n\n fp_out = open('../results/' + table + '/CellCount/DMSO_Replicates.csv', 'w')\n fp_out.write('Plate,Well,CellCount\\n')\n for row in data.iterrows():\n fp_out.write(str(row[1][2])+','+row[1][1]+','+str(row[1][0])+'\\n')\n fp_out.close()\n\ndef get_CellCount_perWell(db_table):\n # Define Database to check for missing Images\n db = MySQLdb.connect(\"menchelabdb.int.cemm.at\", \"root\", \"cqsr4h\", \"ImageAnalysisDDI\")\n\n\n string = \"select SUM(Image_Count_Cytoplasm),Image_Metadata_ID_A, Image_Metadata_Well, Image_Metadata_Plate,Image_Metadata_Transfer_A from \" + db_table + \" group by Image_Metadata_Well,Image_Metadata_Plate;\"\n data = pandas.read_sql(string,con=db)\n\n data.sort_values(by=['Image_Metadata_Plate','Image_Metadata_Well'])\n\n ensure_dir('../results/' + db_table + '/CellCount/Individual_Well_Results.csv')\n fp_out = open('../results/' + db_table + '/CellCount/Individual_Well_Results.csv', 'w')\n\n fp_out.write('ID_A,ID_B,Plate,Well,CellCount,TransferOK\\n')\n for row in data.iterrows():\n\n \n ID_A = row[1][1]\n Trans_A = row[1][4]\n\n\n if ID_A == 'DMSO' or ID_A == 'PosCon':\n if Trans_A == 'YES':\n worked = 'TRUE'\n 
else:\n                worked = 'FALSE'\n        else:\n            if Trans_A == 'YES':\n                worked = 'TRUE'\n            else:\n                worked = 'FALSE'\n\n\n        fp_out.write(ID_A+','+str(row[1][3])+','+row[1][2]+','+str(row[1][0])+','+worked+'\\n')\n    fp_out.close()\n\ndef PlotResult_file(table,all=False):\n\n\n    from matplotlib import pylab as plt\n\n    drug_values = {}\n    dmso_values = []\n    fp = open('../results/' + table + '/CellCount/Individual_Well_Results.csv')\n    next(fp)\n    for line in fp:\n        tmp = line.strip().split(',')\n\n        if tmp[4] == 'TRUE':\n\n            if tmp[0] != 'DMSO':\n\n                if tmp[0] in drug_values:\n                    drug_values[tmp[0]].append(float(tmp[3]))\n                else:\n                    drug_values[tmp[0]] = [float(tmp[3])]\n\n            if tmp[0] == 'DMSO':\n                dmso_values.append(float(tmp[3]))\n\n    max_val = np.mean(dmso_values) + 0.5 * np.std(dmso_values)\n    #max_val = np.mean([np.mean(x) for x in drug_values.values()]) + 1.2 * np.std([np.mean(x) for x in drug_values.values()])\n\n\n    effect = 0\n\n    normalized = []\n    for drug in drug_values:\n        scaled = (np.mean(drug_values[drug]) - 0) / max_val\n        if scaled <= 1:\n            normalized.append(scaled)\n        else:\n            normalized.append(1)\n\n        if scaled < 0.5:\n            effect += 1\n\n    print('Number of drugs with more than 50%% cytotoxicity: %d' % effect)\n    print('Number of drugs with less than 50%% cytotoxicity: %d' % (len(drug_values) - effect))\n\n    plt.hist(normalized,bins='auto', color = '#40B9D4')\n    #plt.show()\n    plt.xlabel('Viability')\n    plt.ylabel('Frequency')\n    plt.savefig('../results/' + table + '/CellCount/CellCountHistogram.pdf')\n    plt.close()", "_____no_output_____" ], [ "\n\n#create_Single_CellCounts(table)\n#create_Single_CellCounts_individualReplicates(table)\n#getDMSO_Untreated_CellCount(table)\n\n#get_CellCount_perWell(table)\n\n\nPlotResult_file(table)", "Number of drugs with more than 50% cytotoxicity: 313\nNumber of drugs with less than 50% cytotoxicity: 1177\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a4bca9446d3d3819869bce4ee95e34687450353
239,419
ipynb
Jupyter Notebook
19 - Credit Risk Modeling in Python/10_LGD and EAD models/2_LGD and EAD models: dependent variables (4:51)/Credit%20Risk%20Modeling%20-%20LGD%20and%20EAD%20Models%20-%20With%20Comments%20-%2010-2.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
19 - Credit Risk Modeling in Python/10_LGD and EAD models/2_LGD and EAD models: dependent variables (4:51)/Credit%20Risk%20Modeling%20-%20LGD%20and%20EAD%20Models%20-%20With%20Comments%20-%2010-2.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
19 - Credit Risk Modeling in Python/10_LGD and EAD models/2_LGD and EAD models: dependent variables (4:51)/Credit%20Risk%20Modeling%20-%20LGD%20and%20EAD%20Models%20-%20With%20Comments%20-%2010-2.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
938.898039
136,520
0.790096
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a4bcb6391211250429a249497c200df1e0a97d2
3,271
ipynb
Jupyter Notebook
docs/source/tutorial/05-IO-Create-Insert-External-Data.ipynb
patcao/ibis
661bbd20081285f3c29267793f3d070d0c8a0db8
[ "Apache-2.0" ]
986
2017-06-07T07:33:01.000Z
2022-03-31T13:00:46.000Z
docs/source/tutorial/05-IO-Create-Insert-External-Data.ipynb
patcao/ibis
661bbd20081285f3c29267793f3d070d0c8a0db8
[ "Apache-2.0" ]
2,623
2017-06-07T18:29:11.000Z
2022-03-31T20:27:31.000Z
docs/source/tutorial/05-IO-Create-Insert-External-Data.ipynb
patcao/ibis
661bbd20081285f3c29267793f3d070d0c8a0db8
[ "Apache-2.0" ]
238
2017-06-26T19:02:58.000Z
2022-03-31T15:18:29.000Z
22.874126
145
0.487924
[ [ [ "# Creating and inserting data", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ] ], [ [ "import os\nimport ibis\n\nibis.options.interactive = True\n\nconnection = ibis.sqlite.connect(os.path.join('data', 'geography.db'))", "_____no_output_____" ] ], [ [ "## Creating new tables from Ibis expressions\n\n\nSuppose you have an Ibis expression that produces a table:", "_____no_output_____" ] ], [ [ "countries = connection.table('countries')\n\ncontinent_name = (countries.continent\n .case()\n .when('AF', 'Africa')\n .when('AN', 'Antarctica')\n .when('AS', 'Asia')\n .when('EU', 'Europe')\n .when('NA', 'North America')\n .when('OC', 'Oceania')\n .when('SA', 'South America')\n .else_(countries.continent)\n .end()\n .name('continent_name'))\n\nexpr = countries[countries.continent, continent_name].distinct()\nexpr", "_____no_output_____" ] ], [ [ "To create a table in the database from the results of this expression, use the connection's `create_table` method:", "_____no_output_____" ] ], [ [ "connection.create_table('continents', expr)", "_____no_output_____" ], [ "continents = connection.table('continents')\ncontinents", "_____no_output_____" ] ], [ [ "Tables can be similarly dropped with `drop_table`", "_____no_output_____" ] ], [ [ "connection.drop_table('continents')", "_____no_output_____" ] ], [ [ "## Inserting data into existing tables\n\n\nSome backends support inserting data into existing tables from expressions. This can be done using `connection.insert('table_name', expr)`.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]