A custom function to pick a default device
def get_default_device():
    """Pick GPU if available, else CPU"""
    if torch.cuda.is_available():
        return torch.device('cuda')
    else:
        return torch.device('cpu')

device = get_default_device()
device

def to_device(data, device):
    """Move tensor(s) to the chosen device"""
    if isinstance(data, (list, tuple)):
        return [to_device(x, device) for x in data]
    return data.to(device, non_blocking=True)

for images, labels in train_loader:
    print(images.shape)
    images = to_device(images, device)
    print(images.device)
    break

class DeviceDataLoader():
    """Wrap a DataLoader to move data to a device"""
    def __init__(self, dl, device):
        self.dl = dl
        self.device = device

    def __iter__(self):
        """Yield batches after moving them to the device"""
        for b in self.dl:
            yield to_device(b, self.device)

    def __len__(self):
        """Number of batches"""
        return len(self.dl)

train_loader = DeviceDataLoader(train_loader, device)
val_loader = DeviceDataLoader(val_loader, device)

model = VGG_net(in_channels=3, num_classes=10)
to_device(model, device)
_____no_output_____
MIT
VGG/VGG.ipynb
gowriaddepalli/papers
Training the model
@torch.no_grad()
def evaluate(model, val_loader):
    model.eval()
    outputs = [model.validation_step(batch) for batch in val_loader]
    return model.validation_epoch_end(outputs)

def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
    history = []
    optimizer = opt_func(model.parameters(), lr)
    for epoch in range(epochs):
        # Training phase
        model.train()
        train_losses = []  # reset per epoch so the logged value is this epoch's mean
        for batch in train_loader:
            loss = model.training_step(batch)
            train_losses.append(loss.detach())  # detach so we don't keep the graph alive
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        # Validation phase
        result = evaluate(model, val_loader)
        result['train_loss'] = torch.stack(train_losses).mean().item()
        model.epoch_end(epoch, result)
        history.append(result)
    return history

history = [evaluate(model, val_loader)]
history
#history = fit(2, 0.1, model, train_loader, val_loader)
_____no_output_____
MIT
VGG/VGG.ipynb
gowriaddepalli/papers
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
Sebastian Raschka CPython 3.6.6 IPython 7.1.1 torch 0.4.1
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
Model Zoo -- CNN Gender Classifier (ResNet-50 Architecture, CelebA) with Data Parallelism

Network Architecture

The network in this notebook is an implementation of the ResNet-50 [1] architecture on the CelebA face dataset [2] to train a gender classifier.

References

- [1] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778). ([CVPR Link](https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html))
- [2] Zhang, K., Tan, L., Li, Z., & Qiao, Y. (2016). Gender and smile classification using deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 34-38).

**Note that the CelebA images are 218 x 178, not 256 x 256. We resize them to 128 x 128.**

The following code implements residual blocks with skip connections such that the input passed via the shortcut already matches the dimensions of the main path's output, which allows the network to learn identity functions. Such a residual block is illustrated below:

![](images/resnets/resnet-ex-1-1.png)

The code also implements residual blocks with skip connections in which the input passed via the shortcut is resized to match the dimensions of the main path's output. Such a residual block is illustrated below:

![](images/resnets/resnet-ex-1-2.png)

For a more detailed explanation see the other notebook, [resnet-ex-1.ipynb](resnet-ex-1.ipynb).

The image below illustrates the ResNet-34 architecture (from the He et al. paper):

![](images/resnets/resnet34/resnet34-arch.png)

While ResNet-34 has 34 layers as shown in the figure above, the 50-layer ResNet variant implemented in this notebook uses a "bottleneck" design instead of the basic residual blocks. Figure 5 from the He et al. paper illustrates the difference between a basic residual block (as used in ResNet-34) and the bottleneck block used in ResNet-50:

![](images/resnets/resnet50/resnet-50-bottleneck.png)

Imports
import os
import time

import numpy as np
import pandas as pd

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms

import matplotlib.pyplot as plt
from PIL import Image
_____no_output_____
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
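The architecture discussion above contrasts the basic residual block (as used in ResNet-34) with the bottleneck block implemented later in this notebook. For reference, a minimal sketch of a basic two-convolution block with an identity shortcut might look as follows; this is an illustration for contrast only, not part of the notebook's model:

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Sketch of a basic (non-bottleneck) residual block with an identity shortcut."""
    def __init__(self, channels):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # identity shortcut: input and output dimensions already match
        return self.relu(out + x)
```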
Dataset

Downloading the Dataset

Note that the ~200,000-image CelebA face dataset is relatively large (~1.3 GB). The download link below was provided by the authors on the official CelebA website at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.

1) Download and unzip the file `img_align_celeba.zip`, which contains the images in JPEG format.
2) Download the `list_attr_celeba.txt` file, which contains the class labels.
3) Download the `list_eval_partition.txt` file, which contains the training/validation/test partitioning info.

Preparing the Dataset
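Before the preparation steps below, it can help to confirm the downloaded files are in place; a minimal check (a sketch, assuming the filenames listed above sit in the working directory):

```python
import os

# Files/folders expected by the preparation steps
for f in ['img_align_celeba', 'list_attr_celeba.txt', 'list_eval_partition.txt']:
    print(f, 'found' if os.path.exists(f) else 'MISSING')
```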
df1 = pd.read_csv('list_attr_celeba.txt', sep="\s+", skiprows=1, usecols=['Male'])

# Make 0 (female) & 1 (male) labels instead of -1 & 1
df1.loc[df1['Male'] == -1, 'Male'] = 0
df1.head()

df2 = pd.read_csv('list_eval_partition.txt', sep="\s+", skiprows=0, header=None)
df2.columns = ['Filename', 'Partition']
df2 = df2.set_index('Filename')
df2.head()

df3 = df1.merge(df2, left_index=True, right_index=True)
df3.head()

df3.to_csv('celeba-gender-partitions.csv')
df4 = pd.read_csv('celeba-gender-partitions.csv', index_col=0)
df4.head()

df4.loc[df4['Partition'] == 0].to_csv('celeba-gender-train.csv')
df4.loc[df4['Partition'] == 1].to_csv('celeba-gender-valid.csv')
df4.loc[df4['Partition'] == 2].to_csv('celeba-gender-test.csv')

img = Image.open('img_align_celeba/000001.jpg')
print(np.asarray(img, dtype=np.uint8).shape)
plt.imshow(img);
(218, 178, 3)
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
Implementing a Custom DataLoader Class
class CelebaDataset(Dataset):
    """Custom Dataset for loading CelebA face images"""

    def __init__(self, csv_path, img_dir, transform=None):
        df = pd.read_csv(csv_path, index_col=0)
        self.img_dir = img_dir
        self.csv_path = csv_path
        self.img_names = df.index.values
        self.y = df['Male'].values
        self.transform = transform

    def __getitem__(self, index):
        img = Image.open(os.path.join(self.img_dir, self.img_names[index]))
        if self.transform is not None:
            img = self.transform(img)
        label = self.y[index]
        return img, label

    def __len__(self):
        return self.y.shape[0]

# Note that transforms.ToTensor()
# already divides pixels by 255. internally
custom_transform = transforms.Compose([transforms.CenterCrop((178, 178)),
                                       transforms.Resize((128, 128)),
                                       #transforms.Grayscale(),
                                       #transforms.Lambda(lambda x: x/255.),
                                       transforms.ToTensor()])

train_dataset = CelebaDataset(csv_path='celeba-gender-train.csv',
                              img_dir='img_align_celeba/',
                              transform=custom_transform)

valid_dataset = CelebaDataset(csv_path='celeba-gender-valid.csv',
                              img_dir='img_align_celeba/',
                              transform=custom_transform)

test_dataset = CelebaDataset(csv_path='celeba-gender-test.csv',
                             img_dir='img_align_celeba/',
                             transform=custom_transform)

BATCH_SIZE = 256 * torch.cuda.device_count()

train_loader = DataLoader(dataset=train_dataset,
                          batch_size=BATCH_SIZE,
                          shuffle=True,
                          num_workers=4)

valid_loader = DataLoader(dataset=valid_dataset,
                          batch_size=BATCH_SIZE,
                          shuffle=False,
                          num_workers=4)

test_loader = DataLoader(dataset=test_dataset,
                         batch_size=BATCH_SIZE,
                         shuffle=False,
                         num_workers=4)

device = torch.device("cuda:0")
torch.manual_seed(0)

for epoch in range(2):
    for batch_idx, (x, y) in enumerate(train_loader):
        print('Epoch:', epoch+1, end='')
        print(' | Batch index:', batch_idx, end='')
        print(' | Batch size:', y.size()[0])
        x = x.to(device)
        y = y.to(device)
        break
Epoch: 1 | Batch index: 0 | Batch size: 1024 Epoch: 2 | Batch index: 0 | Batch size: 1024
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
Model
##########################
### SETTINGS
##########################

# Hyperparameters
random_seed = 1
learning_rate = 0.001
num_epochs = 5

# Architecture
num_features = 128*128
num_classes = 2
_____no_output_____
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
The following code cell, which implements the ResNet-50 architecture, is a derivative of the code provided at https://pytorch.org/docs/0.4.0/_modules/torchvision/models/resnet.html.
##########################
### MODEL
##########################

def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)

class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out

class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1, padding=2)
        self.fc = nn.Linear(2048 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, (2. / n)**.5)
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        logits = self.fc(x)
        probas = F.softmax(logits, dim=1)
        return logits, probas

def resnet50(num_classes):
    """Constructs a ResNet-50 model."""
    model = ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes)
    return model

torch.manual_seed(random_seed)

##########################
### COST AND OPTIMIZER
##########################

#### DATA PARALLEL START ####
model = resnet50(num_classes)
if torch.cuda.device_count() > 1:
    print("Using", torch.cuda.device_count(), "GPUs")
    model = nn.DataParallel(model)
#### DATA PARALLEL END ####

model.to(device)

cost_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
Using 4 GPUs
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
Training
def compute_accuracy(model, data_loader):
    correct_pred, num_examples = 0, 0
    for i, (features, targets) in enumerate(data_loader):
        features = features.to(device)
        targets = targets.to(device)
        logits, probas = model(features)
        _, predicted_labels = torch.max(probas, 1)
        num_examples += targets.size(0)
        correct_pred += (predicted_labels == targets).sum()
    return correct_pred.float()/num_examples * 100

start_time = time.time()
for epoch in range(num_epochs):

    model.train()
    for batch_idx, (features, targets) in enumerate(train_loader):

        features = features.to(device)
        targets = targets.to(device)

        ### FORWARD AND BACK PROP
        logits, probas = model(features)
        cost = cost_fn(logits, targets)
        optimizer.zero_grad()

        cost.backward()

        ### UPDATE MODEL PARAMETERS
        optimizer.step()

        ### LOGGING
        if not batch_idx % 50:
            print('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f'
                  % (epoch+1, num_epochs, batch_idx,
                     len(train_dataset)//BATCH_SIZE, cost))

    model.eval()
    with torch.set_grad_enabled(False):  # save memory during inference
        print('Epoch: %03d/%03d | Train: %.3f%% | Valid: %.3f%%' % (
              epoch+1, num_epochs,
              compute_accuracy(model, train_loader),
              compute_accuracy(model, valid_loader)))

    print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))

print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
Epoch: 001/005 | Batch 0000/0158 | Cost: 0.7133 Epoch: 001/005 | Batch 0050/0158 | Cost: 0.1586 Epoch: 001/005 | Batch 0100/0158 | Cost: 0.1041 Epoch: 001/005 | Batch 0150/0158 | Cost: 0.1345 Epoch: 001/005 | Train: 93.080% | Valid: 94.050% Time elapsed: 2.74 min Epoch: 002/005 | Batch 0000/0158 | Cost: 0.1176 Epoch: 002/005 | Batch 0050/0158 | Cost: 0.0857 Epoch: 002/005 | Batch 0100/0158 | Cost: 0.0789 Epoch: 002/005 | Batch 0150/0158 | Cost: 0.0594 Epoch: 002/005 | Train: 97.245% | Valid: 97.086% Time elapsed: 5.43 min Epoch: 003/005 | Batch 0000/0158 | Cost: 0.0635 Epoch: 003/005 | Batch 0050/0158 | Cost: 0.0747 Epoch: 003/005 | Batch 0100/0158 | Cost: 0.0778 Epoch: 003/005 | Batch 0150/0158 | Cost: 0.0583 Epoch: 003/005 | Train: 96.920% | Valid: 96.824% Time elapsed: 8.12 min Epoch: 004/005 | Batch 0000/0158 | Cost: 0.0578 Epoch: 004/005 | Batch 0050/0158 | Cost: 0.0701 Epoch: 004/005 | Batch 0100/0158 | Cost: 0.0721 Epoch: 004/005 | Batch 0150/0158 | Cost: 0.0504 Epoch: 004/005 | Train: 96.846% | Valid: 96.477% Time elapsed: 10.81 min Epoch: 005/005 | Batch 0000/0158 | Cost: 0.0448 Epoch: 005/005 | Batch 0050/0158 | Cost: 0.0456 Epoch: 005/005 | Batch 0100/0158 | Cost: 0.0584 Epoch: 005/005 | Batch 0150/0158 | Cost: 0.0396 Epoch: 005/005 | Train: 97.287% | Valid: 96.804% Time elapsed: 13.50 min Total Training Time: 13.50 min
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
Evaluation
with torch.set_grad_enabled(False):  # save memory during inference
    print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))

for batch_idx, (features, targets) in enumerate(test_loader):
    features = features
    targets = targets
    break

plt.imshow(np.transpose(features[0], (1, 2, 0)))

model.eval()
logits, probas = model(features.to(device)[0, None])
print('Probability Female %.2f%%' % (probas[0][0]*100))
Probability Female 99.19%
MIT
code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
Filtering data
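The cell below assumes a SparkSession and a DataFrame `df` already exist; judging by the output, `df` holds daily OHLCV stock data. A minimal setup sketch (the CSV filename is an assumption, not from the notebook):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('ops').getOrCreate()
# Any daily stock CSV with Date/Open/High/Low/Close/Volume columns works here
df = spark.read.csv('appl_stock.csv', inferSchema=True, header=True)
```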
df.filter('Close>500').show()
df.filter('Close>500').select('Open').show()
df.filter((df["Close"] > 500) & (df["Open"] < 495)).show()
df.filter((df["Close"] > 200) | (df["Open"] < 200)).show()

result = df.filter(df["open"] == 208.330002).collect()
type(result[0])

row = result[0]
row.asDict()

for item in result[0]:
    print(item)
2010-01-19 00:00:00 208.330002 215.18999900000003 207.240004 215.039995 182501900 27.860484999999997
MIT
spark/SparkBasic/DataFrames_Basic_Operations.ipynb
AlphaSunny/RecSys
Examples of matrix products
import pyJvsip as pjv
_____no_output_____
MIT
doc/notebooks/MatrixProduct.ipynb
rrjudd/jvsip
Example of the matrix product `prod`
inA = pjv.create('mview_d', 2, 5).randn(5)
inB = pjv.create('mview_d', 5, 5).identity
outC = inA.prod(inB)
outC.mprint('%.3f')
print("Frobenius of difference %.2f" % (inA - outC).normFro)

inB = pjv.create('mview_d', 2, 2).identity
outC = inB.prod(inA)
outC.mprint('%.3f')
print("Frobenius of difference %.2f" % (inA - outC).normFro)
[ 0.508 0.535 0.699 -0.960 0.231; 0.040 -0.477 0.208 0.506 -0.383] Frobenius of difference 0.00
MIT
doc/notebooks/MatrixProduct.ipynb
rrjudd/jvsip
Example of `prodj`: conjugate matrix product
inA = pjv.create('cmview_f', 3, 4).randn(3)
inB = pjv.create('cmview_f', 4, 2).randn(4)
outC = inA.prodj(inB)
print('C=A.prodj(B)')
print('A'); inA.mprint('%.3f')
print('B'); inB.mprint('%.3f')
print('C'); outC.mprint('%.3f')
print('test using prod and inB.conj'); pjv.prod(inA, (inB.conj), outC).mprint('%.3f')
test using prod and inB.conj [ 1.384-2.247i -0.287-2.598i; 2.453+1.101i 0.404+0.091i; -1.963+3.397i -3.496+0.722i]
MIT
doc/notebooks/MatrixProduct.ipynb
rrjudd/jvsip
Example of `prodh`: Hermitian (conjugate-transpose) matrix product
inA = pjv.create('cmview_f', 3, 4).randn(3)
inB = pjv.create('cmview_f', 2, 4).randn(4)
outC = inA.prodh(inB)
print('C=A.prodh(B)')
print('A'); inA.mprint('%.3f')
print('B'); inB.mprint('%.3f')
print('C'); outC.mprint('%.3f')
print('test using prod and inB.herm'); pjv.prod(inA, (inB.herm), outC).mprint('%.3f')
test using prod and inB.herm [-2.193+0.832i -0.311+4.938i; 0.322-0.693i -3.759-1.877i; -0.758+1.496i -2.282+2.687i]
MIT
doc/notebooks/MatrixProduct.ipynb
rrjudd/jvsip
Example of `prodt`: transpose matrix product
inA = pjv.create('cmview_f', 3, 4).randn(3)
inB = pjv.create('cmview_f', 2, 4).randn(4)
outC = inA.prodt(inB)
print('C=A.prodt(B)')
print('A'); inA.mprint('%.3f')
print('B'); inB.mprint('%.3f')
print('C'); outC.mprint('%.3f')
print('test using prod and inB.transview'); pjv.prod(inA, (inB.transview), outC).mprint('%.3f')

inA = pjv.create('mview_f', 3, 3).fill(0.0)
inA.diagview(0).fill(1.0)
inA.diagview(-1).fill(-1.0)
inA.diagview(1).fill(-1.0)
inA.mprint('%.1f')
inB = pjv.create('mview_f', 3, 10).randn(14)
pjv.prod3(inA, inB, inB.empty).mprint('%.3f')
[ 0.555 -1.017 -0.534 0.872 -1.029 1.373 -1.534 -2.648 -0.214 2.112; -0.796 -0.389 0.591 -0.835 -0.654 0.024 3.188 2.993 0.176 -2.690; -0.202 1.608 -0.422 0.629 1.406 -1.663 -2.301 -2.322 -0.565 2.163]
MIT
doc/notebooks/MatrixProduct.ipynb
rrjudd/jvsip
Priority Queue Reference Implementation

Operations (for the sake of simplicity, all inputs are assumed to be valid):

**enqueue(data, priority)**
* Insert data into the priority queue

**dequeue()**
* Remove the node with the highest priority from the priority queue
* If the queue is empty, return None
class Node(object):
    def __init__(self, data, priority, next=None):
        self.data = data
        self.priority = priority
        self.next = next

class PriorityQueue(object):
    def __init__(self):
        self.head = None

    def enqueue(self, data, priority):
        if self.head is None:
            self.head = Node(data, priority)
            return
        if self.head.next is None:
            if self.head.priority < priority:
                self.head = Node(data, priority, self.head)
            else:
                self.head.next = Node(data, priority)
        else:
            p = self.head
            pprev = None
            while p is not None:
                if p.priority < priority:
                    if p is self.head:
                        temp = Node(data, priority, self.head)
                        self.head = temp
                    else:
                        temp = Node(data, priority, p)
                        pprev.next = temp
                    return
                pprev = p
                p = p.next
            pprev.next = Node(data, priority)

    def dequeue(self):
        if self.head is None:
            return None
        node = self.head
        self.head = self.head.next
        return node

p = PriorityQueue()
p.enqueue(1, 20)
p.enqueue(2, 30)
p.enqueue(3, 15)

x = p.head
while x is not None:
    print(x.data, " with priority of", x.priority)
    x = x.next

node = p.dequeue()
print("Dequeue ", node.data, " with priority of", node.priority)

x = p.head
while x is not None:
    print(x.data, " with priority of", x.priority)
    x = x.next
2 with priority of 30 1 with priority of 20 3 with priority of 15 Dequeue 2 with priority of 30 1 with priority of 20 3 with priority of 15
MIT
DataStructures/PriorityQueue.ipynb
varian97/ComputerScience-Notebook
May
features_hierarchical_may, transformed_tokens_may, linkage_matrix_may, clusters_may = hierarchical_clustering(best_model_may, tfidf_matrix_may, 2)
agglomerative_clustering(6, features_hierarchical_may, df_may, 2, best_model_may, transformed_tokens_may, clusters_may)
elbow_method(tfidf_matrix_may[clusters_may == 2], linkage_matrix_may)
_____no_output_____
MIT
Mediacloud_Hierarchical_clustering.ipynb
gesiscss/media_frames
September
features_hierarchical_sep, transformed_tokens_sep, linkage_matrix_sep, clusters_sep = hierarchical_clustering(best_model_sep, tfidf_matrix_sep, 10)
agglomerative_clustering(2, features_hierarchical_sep, df_sep, 10, best_model_sep, transformed_tokens_sep, clusters_sep)
_____no_output_____
MIT
Mediacloud_Hierarchical_clustering.ipynb
gesiscss/media_frames
REINFORCE in PyTorch

Just like we did before for Q-learning, this time we'll design a PyTorch network to learn `CartPole-v0` via policy gradient (REINFORCE). Most of the code in this notebook is taken from approximate Q-learning, so you'll find it more or less familiar and even simpler.
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
    !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
    !touch .setup_complete

# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
    !bash ../xvfb start
    os.environ['DISPLAY'] = ':1'

import gym
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
A caveat: with some versions of `pyglet`, the following cell may crash with `NameError: name 'base' is not defined`. The corresponding bug report is [here](https://github.com/pyglet/pyglet/issues/134). If you see this error, try restarting the kernel.
env = gym.make("CartPole-v0")

# gym compatibility: unwrap TimeLimit
if hasattr(env, '_max_episode_steps'):
    env = env.env

env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape

plt.imshow(env.render("rgb_array"))
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
Building the network for REINFORCE

For the REINFORCE algorithm, we'll need a model that predicts action probabilities given states.

For numerical stability, please __do not include the softmax layer into your network architecture__. We'll use softmax or log-softmax where appropriate.
import torch
import torch.nn as nn

# Build a simple neural network that predicts policy logits.
# Keep it simple: CartPole isn't worth deep architectures.
model = nn.Sequential(
    <YOUR CODE: define a neural network that predicts policy logits>
)
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
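The exercise cell above leaves the architecture open. One possible completion, as a sketch (the hidden size of 64 is an arbitrary choice): a small MLP mapping the 4-dimensional CartPole state to two action logits, with no softmax at the end, as recommended.

```python
model = nn.Sequential(
    nn.Linear(state_dim[0], 64),  # state_dim = env.observation_space.shape, so state_dim[0] = 4
    nn.ReLU(),
    nn.Linear(64, n_actions),     # raw logits; softmax is applied outside the network
)
```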
Predict function

Note: the output value of this function is not a torch tensor, it's a numpy array, so gradient calculation is not needed here. Use [no_grad](https://pytorch.org/docs/stable/autograd.html#torch.autograd.no_grad) to suppress gradient calculation.

Also, `.detach()` (or the legacy `.data` property) can be used instead, but there is a difference: with `.detach()` the computational graph is built but then disconnected from a particular tensor, so `.detach()` should be used if that graph is needed for backprop via some other (not detached) tensor. In contrast, no graph is built by any operation in a `no_grad()` context, so it's preferable here.
def predict_probs(states):
    """
    Predict action probabilities given states.
    :param states: numpy array of shape [batch, state_shape]
    :returns: numpy array of shape [batch, n_actions]
    """
    # convert states, compute logits, use softmax to get probability
    <YOUR CODE>
    return <YOUR CODE>

test_states = np.array([env.reset() for _ in range(5)])
test_probas = predict_probs(test_states)
assert isinstance(test_probas, np.ndarray), \
    "you must return np array and not %s" % type(test_probas)
assert tuple(test_probas.shape) == (test_states.shape[0], env.action_space.n), \
    "wrong output shape: %s" % np.shape(test_probas)
assert np.allclose(np.sum(test_probas, axis=1), 1), "probabilities do not sum to 1"
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
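One possible completion of `predict_probs`, as a sketch, using `no_grad` as recommended above; it passes the asserts in the cell:

```python
def predict_probs(states):
    """Sketch: forward pass without gradient tracking, softmax over logits."""
    with torch.no_grad():
        logits = model(torch.tensor(states, dtype=torch.float32))
        probs = nn.functional.softmax(logits, dim=-1)
    return probs.numpy()
```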
Play the game

We can now use our newly built agent to play the game.
def generate_session(env, t_max=1000):
    """
    Play a full session with REINFORCE agent.
    Returns sequences of states, actions, and rewards.
    """
    # arrays to record session
    states, actions, rewards = [], [], []
    s = env.reset()

    for t in range(t_max):
        # action probabilities array aka pi(a|s)
        action_probs = predict_probs(np.array([s]))[0]

        # Sample action with given probabilities.
        a = <YOUR CODE>
        new_s, r, done, info = env.step(a)

        # record session history to train later
        states.append(s)
        actions.append(a)
        rewards.append(r)

        s = new_s
        if done:
            break

    return states, actions, rewards

# test it
states, actions, rewards = generate_session(env)
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
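For the action-sampling placeholder, one option (a sketch) is `np.random.choice` with the predicted probabilities:

```python
# Sample an action index in proportion to pi(a|s)
a = np.random.choice(n_actions, p=action_probs)
```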
Computing cumulative rewards

$$
\begin{align*}
G_t &= r_t + \gamma r_{t + 1} + \gamma^2 r_{t + 2} + \ldots \\
&= \sum_{i = t}^T \gamma^{i - t} r_i \\
&= r_t + \gamma G_{t + 1}
\end{align*}
$$
def get_cumulative_rewards(rewards,  # rewards at each step
                           gamma=0.99  # discount for reward
                           ):
    """
    Take a list of immediate rewards r(s,a) for the whole session
    and compute cumulative returns (a.k.a. G(s,a) in Sutton '16).

    G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...

    A simple way to compute cumulative rewards is to iterate from the last
    to the first timestep and compute G_t = r_t + gamma*G_{t+1} recurrently.

    You must return an array/list of cumulative rewards with as many elements
    as in the initial rewards.
    """
    <YOUR CODE>
    return <YOUR CODE: array of cumulative rewards>

get_cumulative_rewards(rewards)
assert len(get_cumulative_rewards(list(range(100)))) == 100
assert np.allclose(
    get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9),
    [1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(
    get_cumulative_rewards([0, 0, 1, -2, 3, -4, 0], gamma=0.5),
    [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(
    get_cumulative_rewards([0, 0, 1, 2, 3, 4, 0], gamma=0),
    [0, 0, 1, 2, 3, 4, 0])
print("looks good!")
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
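A possible completion, as a sketch, using the recurrence G_t = r_t + gamma*G_{t+1} from the formula above; it passes the asserts in the cell:

```python
def get_cumulative_rewards(rewards, gamma=0.99):
    """Sketch: iterate backwards, accumulating discounted returns."""
    G = [rewards[-1]]
    for r in reversed(rewards[:-1]):
        G.append(r + gamma * G[-1])
    return G[::-1]  # reverse back to chronological order
```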
Loss function and updates

We now need to define the objective and the update for the policy gradient. Our objective function is

$$ J \approx { 1 \over N } \sum_{s_i,a_i} G(s_i,a_i) $$

REINFORCE defines a way to compute the gradient of the expected reward with respect to policy parameters. The formula is as follows:

$$ \nabla_\theta \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \nabla_\theta \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$

We can abuse PyTorch's capabilities for automatic differentiation by defining our objective function as follows:

$$ \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$

When you compute the gradient of that function with respect to network weights $\theta$, it will become exactly the policy gradient.
def to_one_hot(y_tensor, ndims):
    """ helper: take an integer vector and convert it to 1-hot matrix. """
    y_tensor = y_tensor.type(torch.LongTensor).view(-1, 1)
    y_one_hot = torch.zeros(
        y_tensor.size()[0], ndims).scatter_(1, y_tensor, 1)
    return y_one_hot

# Your code: define optimizers
optimizer = torch.optim.Adam(model.parameters(), 1e-3)

def train_on_session(states, actions, rewards, gamma=0.99, entropy_coef=1e-2):
    """
    Takes a sequence of states, actions and rewards produced by generate_session.
    Updates agent's weights by following the policy gradient above.
    Please use Adam optimizer with default parameters.
    """
    # cast everything into torch tensors
    states = torch.tensor(states, dtype=torch.float32)
    actions = torch.tensor(actions, dtype=torch.int32)
    cumulative_returns = np.array(get_cumulative_rewards(rewards, gamma))
    cumulative_returns = torch.tensor(cumulative_returns, dtype=torch.float32)

    # predict logits, probas and log-probas using an agent.
    logits = model(states)
    probs = nn.functional.softmax(logits, -1)
    log_probs = nn.functional.log_softmax(logits, -1)

    assert all(isinstance(v, torch.Tensor) for v in [logits, probs, log_probs]), \
        "please compute using torch tensors and don't use predict_probs function"

    # select log-probabilities for chosen actions, log pi(a_i|s_i)
    log_probs_for_actions = torch.sum(
        log_probs * to_one_hot(actions, env.action_space.n), dim=1)

    # Compute loss here. Don't forget entropy regularization with `entropy_coef`
    entropy = <YOUR CODE>
    loss = <YOUR CODE>

    # Gradient descent step
    <YOUR CODE>

    # technical: return session rewards to print them later
    return np.sum(rewards)
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
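For the remaining placeholders, one possible completion (a sketch): the mean entropy of the policy, the negated surrogate objective with entropy regularization, and a standard optimizer step. These lines belong inside `train_on_session`, replacing the `<YOUR CODE>` markers:

```python
    # average policy entropy over the batch (encourages exploration)
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    # negate the surrogate objective so that minimizing loss maximizes J
    loss = -(log_probs_for_actions * cumulative_returns).mean() - entropy_coef * entropy

    # gradient descent step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```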
The actual training
for i in range(100):
    rewards = [train_on_session(*generate_session(env)) for _ in range(100)]  # generate new sessions

    print("mean reward:%.3f" % (np.mean(rewards)))

    if np.mean(rewards) > 500:
        print("You Win!")  # but you can train even further
        break
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
Results & video
# Record sessions
import gym.wrappers

with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor:
    sessions = [generate_session(env_monitor) for _ in range(100)]

# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from base64 import b64encode
from IPython.display import HTML

video_paths = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
video_path = video_paths[-1]  # You can also try other indices

if 'google.colab' in sys.modules:
    # https://stackoverflow.com/a/57378660/1214547
    with video_path.open('rb') as fp:
        mp4 = fp.read()
    data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()
else:
    data_url = str(video_path)

HTML("""
<video width="640" height="480" controls>
  <source src="{}" type="video/mp4">
</video>
""".format(data_url))
_____no_output_____
Unlicense
week06_policy_based/reinforce_pytorch.ipynb
RomaKoks/Practical_RL
Equivalent layer technique for estimating total magnetization direction: Analysis of the result

Importing libraries
%matplotlib inline
import sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import cPickle as pickle
import datetime
import timeit
import string as st

from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from fatiando.gridder import regular

notebook_name = 'airborne_EQL_magdirection_RM_analysis.ipynb'
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Plot style
plt.style.use('ggplot')
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Importing my package
dir_modules = '../../../mypackage'
sys.path.append(dir_modules)

import auxiliary_functions as fc
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Loading model
with open('data/model_multi.pickle') as f: model_multi = pickle.load(f)
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Loading observation points
with open('data/airborne_survey.pickle') as f: airborne = pickle.load(f)
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Loading data set
with open('data/data_set.pickle') as f: data = pickle.load(f)
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Loading results
with open('data/result_RM_airb.pickle') as f: results = pickle.load(f)
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
List of saved files
saved_files = []
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Observation area
print 'Area limits: \n x_max = %.1f m \n x_min = %.1f m \n y_max = %.1f m \n y_min = %.1f m' % (airborne['area'][1], airborne['area'][0], airborne['area'][3], airborne['area'][2])
Area limits: x_max = 5500.0 m x_min = -6500.0 m y_max = 6500.0 m y_min = -5500.0 m
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Airborne survey information
print 'Shape : (%.0f,%.0f)' % airborne['shape']
print 'Number of data: %.1f' % airborne['N']
print 'dx: %.1f m' % airborne['dx']
print 'dy: %.1f m ' % airborne['dy']
Shape : (49,25) Number of data: 1225.0 dx: 250.0 m dy: 500.0 m
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Properties of the model

Main field
inc_gf, dec_gf = model_multi['main_field']
print 'Main field inclination: %.1f degree' % inc_gf
print 'Main field declination: %.1f degree' % dec_gf
Main field inclination: -40.0 degree Main field declination: -22.0 degree
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Magnetization direction
print 'Inclination: %.1f degree' % model_multi['inc_R']
print 'Declination: %.1f degree' % model_multi['dec_R']

inc_R, dec_R = model_multi['inc_R'], model_multi['dec_R']
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Coordinates equivalent sources
h = results['layer_depth']
shape_layer = (airborne['shape'][0], airborne['shape'][1])
xs, ys, zs = regular(airborne['area'], shape_layer, h)
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
The best solution using L-curve
m_LM = results['magnetic_moment'][4]
inc_est = results['inc_est'][4]
dec_est = results['dec_est'][4]
mu = results['reg_parameter'][4]
phi = results['phi'][4]
print mu
350000.0
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Visualization of the convergence
phi = (np.array(phi)/airborne['x'].size)

title_font = 22
bottom_font = 20
saturation_factor = 1.
plt.close('all')
plt.figure(figsize=(10,10), tight_layout=True)

plt.plot(phi, 'b-', linewidth=1.5)
plt.title('Convergence', fontsize=title_font)
plt.xlabel('iteration', fontsize=title_font)
plt.ylabel('Goal function ', fontsize=title_font)
plt.tick_params(axis='both', which='major', labelsize=15)

file_name = 'figs/airborne/convergence_LM_NNLS_magRM'
plt.savefig(file_name+'.png', dpi=300)
saved_files.append(file_name+'.png')

plt.show()
/home/andrelreis/anaconda3/envs/py2/lib/python2.7/site-packages/matplotlib/figure.py:2299: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect. warnings.warn("This figure includes Axes that are not compatible "
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Estimated magnetization direction
print (inc_est, dec_est)
print (inc_R, dec_R)
(-25.0, 30.0)
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Comparison between observed data and predicted data
pred = fc.tfa_layer(airborne['x'], airborne['y'], airborne['z'],
                    xs, ys, zs, inc_gf, dec_gf, m_LM, inc_est, dec_est)

res = pred - data['tfa_obs_RM_airb']

r_norm, r_mean, r_std = fc.residual(data['tfa_obs_RM_airb'], pred)

title_font = 22
bottom_font = 20
plt.figure(figsize=(28,11), tight_layout=True)

ranges = np.abs([data['tfa_obs_RM_airb'].max(),
                 data['tfa_obs_RM_airb'].min(),
                 pred.max(), pred.min()]).max()
ranges_r = np.abs([res.max(), res.min()]).max()

## Observed data plot
ax1 = plt.subplot(1,4,1)
plt.title('Observed data', fontsize=title_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
             1e-3*airborne['x'].reshape(airborne['shape']),
             data['tfa_obs_RM_airb'].reshape(airborne['shape']),
             30, cmap='viridis', vmin=-ranges, vmax=ranges)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('nT', size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)

## Predicted data plot
ax2 = plt.subplot(1,4,2)
plt.title('Predicted data', fontsize=title_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
             1e-3*airborne['x'].reshape(airborne['shape']),
             pred.reshape(airborne['shape']),
             30, cmap='viridis', vmin=-ranges, vmax=ranges)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('nT', size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)

## Residuals plot and histogram
ax3 = plt.subplot(1,4,3)
plt.title('Residuals map', fontsize=title_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
             1e-3*airborne['x'].reshape(airborne['shape']),
             res.reshape(airborne['shape']),
             30, cmap='viridis', vmin=-ranges_r, vmax=ranges_r)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('nT', size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)

ax4 = plt.subplot(1,4,4)
plt.title('Histogram of residuals', fontsize=title_font)
plt.xlabel('Residuals (nT)', fontsize=title_font)
plt.ylabel('Frequency', fontsize=title_font)
plt.text(0.02, 0.97, "mean = {:.2f}\nstd = {:.2f} ".format(np.mean(res), np.std(res)),
         horizontalalignment='left', verticalalignment='top',
         transform=ax4.transAxes, fontsize=bottom_font)
n, bins, patches = plt.hist(res, bins=30, normed=True, facecolor='black')
gauss = mlab.normpdf(bins, 0., 10.)
plt.plot(bins, gauss, 'r-', linewidth=4.)
ax4.set_xticks([-100.0, -50., 0.0, 50., 100.0])
ax4.set_yticks([.0, .010, .020, .030, .040, .05, .06])
plt.tick_params(axis='both', which='major', labelsize=bottom_font)

##
file_name = 'figs/airborne/data_fitting_LM_NNLS_magRM'
plt.savefig(file_name+'.png', dpi=300)
saved_files.append(file_name+'.png')

plt.show()
/home/andrelreis/anaconda3/envs/py2/lib/python2.7/site-packages/matplotlib/axes/_axes.py:6571: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg. warnings.warn("The 'normed' kwarg is deprecated, and has been " /home/andrelreis/anaconda3/envs/py2/lib/python2.7/site-packages/ipykernel_launcher.py:65: MatplotlibDeprecationWarning: scipy.stats.norm.pdf
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Positive magnetic-moment distribution
title_font = 22
bottom_font = 20
plt.close('all')
plt.figure(figsize=(10,10), tight_layout=True)

plt.title('Magnetic moment distribution', fontsize=title_font)
plt.contourf(1e-3*ys.reshape(shape_layer), 1e-3*xs.reshape(shape_layer),
             m_LM.reshape(shape_layer), 40, cmap='inferno')
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('$A.m^2$', size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)

file_name = 'figs/airborne/magnetic_moment_positive_LM_NNLS_magRM'
plt.savefig(file_name+'.png', dpi=300)
saved_files.append(file_name+'.png')

plt.show()
_____no_output_____
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Figure for paper
#title_font = 17
title_font = 5
#bottom_font = 14
bottom_font = 4
hist_font = 5
height_per_width = 17./15.

plt.figure(figsize=(4.33, 4.33*height_per_width), tight_layout=True)

ranges = np.abs([data['tfa_obs_RM_airb'].max(),
                 data['tfa_obs_RM_airb'].min(),
                 pred.max(), pred.min()]).max()
ranges_r = np.abs([res.max(), res.min()]).max()

## Observed data plot
ax1 = plt.subplot(3,2,1)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
             1e-3*airborne['x'].reshape(airborne['shape']),
             data['tfa_obs_RM_airb'].reshape(airborne['shape']),
             30, cmap='viridis', vmin=-ranges, vmax=ranges)
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('nT', size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(a) Observed data', fontsize=title_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)

## Predicted data plot
ax2 = plt.subplot(3,2,2)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
             1e-3*airborne['x'].reshape(airborne['shape']),
             pred.reshape(airborne['shape']),
             30, cmap='viridis', vmin=-ranges, vmax=ranges)
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('nT', size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(b) Predicted data', fontsize=title_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)

## Residuals plot and histogram
ax3 = plt.subplot(3,2,3)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
             1e-3*airborne['x'].reshape(airborne['shape']),
             res.reshape(airborne['shape']),
             30, cmap='viridis', vmin=-ranges_r, vmax=ranges_r)
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('nT', size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(c) Residuals', fontsize=title_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)

ax4 = plt.subplot(3,2,4)
plt.text(0.02, 0.97, "mean = {:.2f}\nstd = {:.2f} ".format(np.mean(res), np.std(res)),
         horizontalalignment='left', verticalalignment='top',
         transform=ax4.transAxes, fontsize=hist_font)
n, bins, patches = plt.hist(res, bins=20, normed=True, facecolor='black')
gauss = mlab.normpdf(bins, 0., 10.)
plt.plot(bins, gauss, 'r-', linewidth=1.)
ax4.set_xticks([-100.0, -50., 0.0, 50., 100.0])
ax4.set_yticks([.0, .010, .020, .030, .040, .05, .06])
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(d) Histogram of residuals', fontsize=title_font)
plt.xlabel('Residuals (nT)', fontsize=title_font)
plt.ylabel('Frequency', fontsize=title_font)

ax5 = plt.subplot(3,2,5)
plt.contourf(1e-3*ys.reshape(shape_layer), 1e-3*xs.reshape(shape_layer),
             m_LM.reshape(shape_layer)*1e-9, 30, cmap='inferno')
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('$10^{9}$ A$\cdot$m$^2$', size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(e) Magnetic moment distribution', fontsize=title_font)
plt.xlabel('y (km)', fontsize=title_font)
plt.ylabel('x (km)', fontsize=title_font)

ax6 = plt.subplot(3,2,6)
plt.plot(phi, 'b-', linewidth=1.0)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(f) Convergence', fontsize=title_font)
plt.xlabel('iteration', fontsize=title_font)
plt.ylabel('Goal function ', fontsize=title_font)

###########################################################################
#file_name = 'figs/airborne/results_compiled_LM_NNLS_magRM'
file_name = 'figs/airborne/Fig3'
plt.savefig(file_name+'.png', dpi=1200)
saved_files.append(file_name+'.png')
plt.savefig(file_name+'.eps', dpi=1200)
saved_files.append(file_name+'.eps')

plt.show()
/home/andrelreis/anaconda3/envs/py2/lib/python2.7/site-packages/ipykernel_launcher.py:71: MatplotlibDeprecationWarning: scipy.stats.norm.pdf
BSD-3-Clause
code/notebooks/synthetic_tests/model_multibody_shallow-seated/airborne_EQL_magdirection_RM_analysis.ipynb
pinga-lab/eqlayer-magnetization-direction
Parsing Natural Language in Python

**(C) 2018 by [Damir Cavar](http://damir.cavar.me/)**

**License:** [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CA BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))

This is a tutorial related to the discussion of parsing with Probabilistic Context Free Grammars (PCFG) in the class *Advanced Natural Language Processing* taught at Indiana University in Fall 2018.

This code and tutorial are based on different summer school courses that I taught and tutorials that I gave on different occasions in Europe and the US. This particular example will use code from **TDAParser.py** and other scripts developed since 2002. Most of this material was used in general introduction courses on algorithms in Natural Language Processing that I taught at Indiana University, University of Konstanz, University of Zadar, and University of Nova Gorica.
import sys
_____no_output_____
Apache-2.0
notebooks/Parsing Natural Language in Python.ipynb
peey/python-tutorial-notebooks
The Grammar Class

Let us assume that our Phrase Structure Grammar consists of rules that contain one symbol on the left-hand side, followed by a production symbol (an arrow) and a list of at least one terminal or non-terminal symbol. Comments can be introduced using the `#` symbol. Every rule has to be contained in one line.
grammarText = """
# PSG1
# small English grammar
# (C) 2005 by Damir Cavar, Indiana University

# Grammar:

S -> NP VP

NP -> N
NP -> Adj N
NP -> Art Adj N
NP -> Art N
NP -> Art N PP
#NP -> Art N NP

VP -> V
VP -> V NP
VP -> Adv V NP
VP -> V PP
VP -> V NP PP

PP -> P NP

# Lexicon:

N -> John
N -> Mary
N -> bench
N -> cat
N -> mouse

Art -> the
Art -> a

Adj -> green
Adj -> yellow
Adj -> big
Adj -> small

Adv -> often
Adv -> yesterday

V -> kissed
V -> loves
V -> sees
V -> meets
V -> chases

P -> on
P -> in
P -> beside
P -> under
"""
_____no_output_____
Apache-2.0
notebooks/Parsing Natural Language in Python.ipynb
peey/python-tutorial-notebooks
We can parse this grammar into a representation that allows us to fetch the left- and the right-hand side of a rule for top- or bottom-up parsing.
class PSG:
    def __init__(self, grammar):
        self.LHS = {}
        self.RHS = {}
        self.__read__(grammar)

    def __str__(self):
        text = ""
        for i in self.LHS.keys():
            if len(text) > 0:
                text += "\n"
            for x in self.LHS[i]:
                text += i + " -> " + " ".join(x) + "\n"
        return text

    def __read__(self, g):
        for i in g.split("\n"):
            i = i.split("#")[0].strip()  # cut off comment string and strip
            if len(i) == 0: continue
            tokens = i.split("->")
            if len(tokens) != 2: continue
            lhs = tokens[0].split()
            if len(lhs) != 1: continue
            rhs = tuple(tokens[1].split())
            value = self.LHS.get(lhs[0], [])
            if rhs not in value:
                value.append(rhs)
            self.LHS[lhs[0]] = value
            value = self.RHS.get(rhs, [])
            if lhs[0] not in value:
                value.append(lhs[0])
            self.RHS[rhs] = value

    def getRHS(self, left):
        return self.LHS.get(left, [])

    def getLHS(self, right):
        return self.RHS.get(right, [])
_____no_output_____
Apache-2.0
notebooks/Parsing Natural Language in Python.ipynb
peey/python-tutorial-notebooks
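As a quick sanity check of the rule lookup (a sketch using the class and grammar defined above):

```python
g = PSG(grammarText)
print(g.getRHS('NP'))       # all right-hand sides that NP can expand to
print(g.getLHS(('John',)))  # categories that can produce the terminal 'John'
```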
The grammar file

The Top-Down Parser

Defining some parameters:
LIFO = -1
FIFO = 0
strategy = FIFO

def tdparse(inp, goal, grammar, agenda):
    print("Got : %s\tinput: %s" % (goal, inp))
    if goal == inp == []:
        print("Success")
    elif goal == [] or inp == []:
        if agenda == []:
            print("Fail: Agenda empty!")
        else:
            entry = agenda.pop(strategy)
            print("Backing up to: %s with %s" % (entry[0], entry[1]))
            tdparse(entry[1], entry[0], grammar, agenda)
    else:  # there is something in goal and input
        if goal[0] == inp[0]:  # if initial symbols match, reduce lists, parse
            tdparse(inp[1:], goal[1:], grammar, agenda)
        else:
            for i in grammar.LHS.get(goal[0], []):
                if [list(i) + goal[1:], inp] not in agenda:
                    agenda.append([list(i) + goal[1:], inp])
            if len(agenda) > 0:
                entry = agenda.pop(strategy)
                tdparse(entry[1], entry[0], grammar, agenda)
            else:
                print("Fail: Agenda empty!")

myGrammar = PSG(grammarText)
print(myGrammar)
tdparse(('John', 'loves', 'Mary'), ["S"], myGrammar, [])
S -> NP VP NP -> N NP -> Adj N NP -> Art Adj N NP -> Art N NP -> Art N PP VP -> V VP -> V NP VP -> Adv V NP VP -> V PP VP -> V NP PP PP -> P NP N -> John N -> Mary N -> bench N -> cat N -> mouse Art -> the Art -> a Adj -> green Adj -> yellow Adj -> big Adj -> small Adv -> often Adv -> yesterday V -> kissed V -> loves V -> sees V -> meets V -> chases P -> on P -> in P -> beside P -> under Got : ['S'] input: ('John', 'loves', 'Mary') Got : ['NP', 'VP'] input: ('John', 'loves', 'Mary') Got : ['N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['Adj', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['Art', 'Adj', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['Art', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['Art', 'N', 'PP', 'VP'] input: ('John', 'loves', 'Mary') Got : ['John', 'VP'] input: ('John', 'loves', 'Mary') Got : ['VP'] input: ('loves', 'Mary') Got : ['Mary', 'VP'] input: ('John', 'loves', 'Mary') Got : ['bench', 'VP'] input: ('John', 'loves', 'Mary') Got : ['cat', 'VP'] input: ('John', 'loves', 'Mary') Got : ['mouse', 'VP'] input: ('John', 'loves', 'Mary') Got : ['green', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['yellow', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['big', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['small', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['the', 'Adj', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['a', 'Adj', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['the', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['a', 'N', 'VP'] input: ('John', 'loves', 'Mary') Got : ['the', 'N', 'PP', 'VP'] input: ('John', 'loves', 'Mary') Got : ['a', 'N', 'PP', 'VP'] input: ('John', 'loves', 'Mary') Got : ['V'] input: ('loves', 'Mary') Got : ['V', 'NP'] input: ('loves', 'Mary') Got : ['Adv', 'V', 'NP'] input: ('loves', 'Mary') Got : ['V', 'PP'] input: ('loves', 'Mary') Got : ['V', 'NP', 'PP'] input: ('loves', 'Mary') Got : ['kissed'] input: ('loves', 'Mary') Got : ['loves'] input: ('loves', 'Mary') Got : [] input: ('Mary',) Backing up to: ['sees'] with ('loves', 'Mary') Got : ['sees'] input: ('loves', 'Mary') Got : ['meets'] input: ('loves', 'Mary') Got : ['chases'] input: ('loves', 'Mary') Got : ['kissed', 'NP'] input: ('loves', 'Mary') Got : ['loves', 'NP'] input: ('loves', 'Mary') Got : ['NP'] input: ('Mary',) Got : ['sees', 'NP'] input: ('loves', 'Mary') Got : ['meets', 'NP'] input: ('loves', 'Mary') Got : ['chases', 'NP'] input: ('loves', 'Mary') Got : ['often', 'V', 'NP'] input: ('loves', 'Mary') Got : ['yesterday', 'V', 'NP'] input: ('loves', 'Mary') Got : ['kissed', 'PP'] input: ('loves', 'Mary') Got : ['loves', 'PP'] input: ('loves', 'Mary') Got : ['PP'] input: ('Mary',) Got : ['sees', 'PP'] input: ('loves', 'Mary') Got : ['meets', 'PP'] input: ('loves', 'Mary') Got : ['chases', 'PP'] input: ('loves', 'Mary') Got : ['kissed', 'NP', 'PP'] input: ('loves', 'Mary') Got : ['loves', 'NP', 'PP'] input: ('loves', 'Mary') Got : ['NP', 'PP'] input: ('Mary',) Got : ['sees', 'NP', 'PP'] input: ('loves', 'Mary') Got : ['meets', 'NP', 'PP'] input: ('loves', 'Mary') Got : ['chases', 'NP', 'PP'] input: ('loves', 'Mary') Got : ['N'] input: ('Mary',) Got : ['Adj', 'N'] input: ('Mary',) Got : ['Art', 'Adj', 'N'] input: ('Mary',) Got : ['Art', 'N'] input: ('Mary',) Got : ['Art', 'N', 'PP'] input: ('Mary',) Got : ['P', 'NP'] input: ('Mary',) Got : ['N', 'PP'] input: ('Mary',) Got : ['Adj', 'N', 'PP'] input: ('Mary',) Got : ['Art', 'Adj', 'N', 'PP'] input: ('Mary',) Got : ['Art', 'N', 'PP', 'PP'] input: ('Mary',) Got : ['John'] 
input: ('Mary',) Got : ['Mary'] input: ('Mary',) Got : [] input: () Backing up to: ['bench'] with ('Mary',) Got : ['bench'] input: ('Mary',) Got : ['cat'] input: ('Mary',) Got : ['mouse'] input: ('Mary',) Got : ['green', 'N'] input: ('Mary',) Got : ['yellow', 'N'] input: ('Mary',) Got : ['big', 'N'] input: ('Mary',) Got : ['small', 'N'] input: ('Mary',) Got : ['the', 'Adj', 'N'] input: ('Mary',) Got : ['a', 'Adj', 'N'] input: ('Mary',) Got : ['the', 'N'] input: ('Mary',) Got : ['a', 'N'] input: ('Mary',) Got : ['the', 'N', 'PP'] input: ('Mary',) Got : ['a', 'N', 'PP'] input: ('Mary',) Got : ['on', 'NP'] input: ('Mary',) Got : ['in', 'NP'] input: ('Mary',) Got : ['beside', 'NP'] input: ('Mary',) Got : ['under', 'NP'] input: ('Mary',) Got : ['John', 'PP'] input: ('Mary',) Got : ['Mary', 'PP'] input: ('Mary',) Got : ['PP'] input: ()
Apache-2.0
notebooks/Parsing Natural Language in Python.ipynb
peey/python-tutorial-notebooks
Mask R-CNN Demo

A quick intro to using the pre-trained model to detect and segment objects.
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
import cv2, time, json, glob
from IPython.display import clear_output

# Root directory of the project
ROOT_DIR = os.path.abspath("../")

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize

# Import COCO config
sys.path.append(os.path.join("/home/jchilders/coco/"))  # To find local version
from coco import coco

import tensorflow as tf
print('tensorflow version: ', tf.__version__)
print('using gpu: ', tf.test.is_gpu_available())

%matplotlib inline

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)

# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
tensorflow version: 1.15.0 using gpu: True
MIT
samples/demo.ipynb
jtchilders/Mask_RCNN
Configurations

We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```.

For inferencing, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
class InferenceConfig(coco.CocoConfig):
    # Run inference on batches of 10 images at a time.
    # Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 10
    BATCH_SIZE = 10

config = InferenceConfig()
config.display()
_____no_output_____
MIT
samples/demo.ipynb
jtchilders/Mask_RCNN
Create Model and Load Trained Weights
# Create model object in inference mode. model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config) # Load weights trained on MS-COCO model.load_weights(COCO_MODEL_PATH, by_name=True)
_____no_output_____
MIT
samples/demo.ipynb
jtchilders/Mask_RCNN
Class Names
The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.
To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.
To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.
```
# Load COCO dataset
dataset = coco.CocoDataset()
dataset.load_coco(COCO_DIR, "train")
dataset.prepare()

# Print class names
print(dataset.class_names)
```
We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, ...etc.)
# COCO Class names # Index of the class in the list is its ID. For example, to get ID of # the teddy bear class, use: class_names.index('teddy bear') class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
_____no_output_____
MIT
samples/demo.ipynb
jtchilders/Mask_RCNN
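As a quick sanity check of the ID-to-name mapping described above, we can look up a few entries of the `class_names` list defined in the previous cell (the expected values follow directly from the list itself):

# The list index is the class ID; index 0 is the background class 'BG'
print(class_names[1])                   # 'person'
print(class_names.index('teddy bear'))  # 78, the sequential ID used in this demo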
Run Object Detection
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))

# Run detection
results = model.detect([image], verbose=1)

# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])

fn = "/home/jchilders/car_videos/10.07.17-10.07.40.mp4"
cap = cv2.VideoCapture(fn)
fps = cap.get(cv2.CAP_PROP_FPS)
print('frames per second: %d' % fps)

frames = []
ret, frame = cap.read()
timestamp = [cap.get(cv2.CAP_PROP_POS_MSEC)]
frames.append(frame)
data = []
while ret:
    if len(frames) == 10:
        results = model.detect(frames)
        for i in range(len(results)):
            r = results[i]
            rois = r['rois'].tolist()
            masks = r['masks'] * 1   # shape [height, width, num_instances]
            class_ids = r['class_ids']
            size = []
            position = []
            pixel_size = []
            class_name = []
            # use a different loop variable (j) so that the frame index i
            # is not clobbered when looking up timestamp[i] below
            for j in range(len(rois)):
                size.append([rois[j][2] - rois[j][0], rois[j][3] - rois[j][1]])
                position.append([rois[j][0] + int(float(size[-1][0]) / 2.),
                                 rois[j][1] + int(float(size[-1][1]) / 2.)])
                # instance masks are indexed along the last axis
                pixel_size.append(int(masks[:, :, j].sum()))
                class_name.append(class_names[class_ids[j]])
            data.append({'size': size, 'position': position, 'pixel_size': pixel_size,
                         'timestamp': timestamp[i], 'rois': rois,
                         'class_ids': r['class_ids'].tolist(),
                         'class_names': class_name, 'scores': r['scores'].tolist()})

#             clear_output(wait=True)
#             visualize.display_instances(frames[i], r['rois'], r['masks'], r['class_ids'],
#                                         class_names, r['scores'])
#             print(r['rois'])
#             print(r['class_ids'])
#             print(r['scores'])
        json.dump(data, open('%s_fps%d.json' % (os.path.basename(fn), fps), 'w'),
                  indent=2, sort_keys=True)
        frames = []
        timestamp = []
    ret, frame = cap.read()
    timestamp.append(cap.get(cv2.CAP_PROP_POS_MSEC))
    frames.append(frame)

fn = "/home/jchilders/car_videos/10.07.17-10.07.40.mp4"
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
               'bus', 'train', 'truck', 'boat', 'traffic light',
               'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
               'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
               'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
               'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
               'kite', 'baseball bat', 'baseball glove', 'skateboard',
               'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
               'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
               'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
               'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
               'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
               'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
               'teddy bear', 'hair drier', 'toothbrush']

def get_video_data(fn, model, batch_size, show_img=False):
    cap = cv2.VideoCapture(fn)
    fps = cap.get(cv2.CAP_PROP_FPS)
    print('frames per second: %d' % fps)

    frames = []
    ret, frame = cap.read()
    timestamp = [cap.get(cv2.CAP_PROP_POS_MSEC)]
    frames.append(frame)
    data = []
    output = {'filename': fn, 'fps': fps,
              'timestamp': str(time.ctime(os.path.getmtime(fn))),
              'data': data}
    while ret:
        if len(frames) == batch_size:
            results = model.detect(frames)
            for i in range(len(results)):
                r = results[i]
                rois = r['rois'].tolist()
                masks = r['masks'] * 1   # shape [height, width, num_instances]
                class_ids = r['class_ids']
                size = []
                position = []
                pixel_size = []
                class_name = []
                # again use j so that timestamp[i] refers to the frame index
                for j in range(len(rois)):
                    size.append([rois[j][2] - rois[j][0], rois[j][3] - rois[j][1]])
                    position.append([rois[j][0] + int(float(size[-1][0]) / 2.),
                                     rois[j][1] + int(float(size[-1][1]) / 2.)])
                    pixel_size.append(int(masks[:, :, j].sum()))
                    class_name.append(class_names[class_ids[j]])
                data.append({'size': size, 'position': position, 'pixel_size': pixel_size,
                             'frametime': timestamp[i], 'rois': rois,
                             'class_ids': r['class_ids'].tolist(),
                             'class_names': class_name, 'scores': r['scores'].tolist()})
            if show_img:
                clear_output(wait=True)
                vr = results[0]
                visualize.display_instances(frames[0], vr['rois'], vr['masks'], vr['class_ids'],
                                            class_names, vr['scores'])
#             print(r['rois'])
#             print(r['class_ids'])
#             print(r['scores'])
#         json.dump(data, open('%s_fps%d.json' % (os.path.basename(fn), fps), 'w'), indent=2, sort_keys=True)
            frames = []
            timestamp = []
        ret, frame = cap.read()
        timestamp.append(cap.get(cv2.CAP_PROP_POS_MSEC))
        frames.append(frame)
    return output

batch_size = 25
class InferenceConfig(coco.CocoConfig):
    # Run inference on batches of batch_size frames at a time.
    # Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = batch_size
    BATCH_SIZE = batch_size

config = InferenceConfig()

# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)

filelist = open('/home/jchilders/car_videos/filelist.txt').readlines()
print('files: %d' % len(filelist))
output = []
for i, line in enumerate(filelist):
    print(' %s of %s' % (i, len(filelist)))
    fn = line.strip()
    fn_output = get_video_data(fn, model, batch_size, show_img=True)
    print(fn_output)
    clear_output(wait=True)
    output.append(fn_output)
# open the output file in write mode so json.dump can write to it
json.dump(output, open('full_data.json', 'w'))
files: 345 0 of 345 frames per second: 25 WARNING:tensorflow:From /home/jchilders/conda/mask_rcnn/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
MIT
samples/demo.ipynb
jtchilders/Mask_RCNN
The perceptron - Recognising the MNIST digits
%matplotlib inline from pylab import * from utils import *
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
Let us implement a perceptron that categorizes the MNIST images as digits. As you will see below, the network [learns the training set well](#Plotting-the-results-of-test). Nevertheless, its behaviour in a [test with new digits](#Spreading-of-the-network-during-test) is far from optimal. **The task we are asking the network to learn is too difficult!!**
Training
Initializing data and parameters
First we initialize the dataset (see [The MNIST dataset](http://francesco-mannella.github.io/neunet-basics/mnist.html)), then we define a few parameters and initialize the main variables:
#----------------------------------------------------------- # training # Set the number of patterns n_patterns = 500 # Take 'n_patterns' rows indices = arange(training_length) shuffle(indices) indices = indices[:n_patterns] # Get patterns patterns = array(mndata.train_images)[indices] # Rescale all patterns between 0 and 1 patterns = sign(patterns/255.0) # Get the labels of the patterns labels = array(mndata.train_labels)[indices] # Constants # Number of repetitions of # the pattern series epochs = 30 # Number of trials for learning stime = epochs*n_patterns # Create a list of pattern indices. # We will reshuffle it at each # repetition of the series pattern_indices = arange(n_patterns) # Learning rate eta = 0.0001 # Number of output units m = 10 # the input is given # by a 28*28 vector) n = n_pixels # Variables # Init weights w = zeros([m, n+1]) # Init input units x = zeros(n) # init net input net = zeros(m) # Init output units y = zeros(m) # Init desired output vector y_target = zeros(m) # We will store the input, output and error history input_store = zeros([n,stime]) output_store = zeros([m,stime]) label_store = zeros([m,stime]) squared_errors = zeros(epochs)
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
Let us visualize the first 20 patterns of the training set:
for i in xrange(20):

    # Create a new figure after each 10-th item
    if i%10 == 0:
        fig = figure(figsize = (20, 1))

    # Plot current item (we use the
    # function plot_img in our utils.py)
    plot_img( to_mat(patterns[i]), fig,
             (i%10)+1, windows = 20 )

    # show figure after all 10 items
    # are plotted
    if i%10 == 9:
        show()
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
Spreading of the network during training
Here starts the core part, iterating over the timesteps. We divide the training phase into epochs. Each epoch is a single presentation of the whole input pattern series. The sum of squared errors will be grouped by epochs.
# counter of repetitions # of the series of patterns epoch = -1 # Iterate trials for t in xrange(stime) : # Reiterate the input pattern # sequence through timesteps # Reshuffle at the end # of the series if t%n_patterns == 0: shuffle(pattern_indices) epoch += 1 # Current pattern k = pattern_indices[t%n_patterns] # Aggregate inputs and the bias unit x = hstack([ 1, patterns[k] ]) # Only the unit representing the desired # category is set to 1 y_target *= 0 y_target[labels[k]] = 1 # !!!! The dot product becomes a matrix # product with more than one output unit !!!! net = dot(w,x) # output function y = step(net) # Learning - outer product w += eta*outer(y_target - y, x); # Store data input_store[:,t] = x[1:] output_store[:,t] = y label_store[:,t] = y_target squared_errors[epoch] += 0.5*sum((y_target - y)**2)
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
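To make the learning step above concrete, here is a minimal numeric sketch of the outer-product update `w += eta*outer(y_target - y, x)`; the values are made up purely for illustration:

eta = 0.0001
x = array([1.0, 0.0, 1.0, 1.0])   # bias unit plus three input pixels
y_target = array([0.0, 1.0])      # desired output: unit 1 active
y = array([1.0, 0.0])             # actual output: unit 0 active (wrong)

# Row 0 weakens the wrongly active unit, row 1 strengthens the target
# unit, but only for the inputs that were active
dw = eta*outer(y_target - y, x)
print dw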
Plotting the results of training
We plot the history of the squared errors through epochs
fig = figure() ax = fig.add_subplot(111) ax.plot(squared_errors) xlabel("Epochs") ylabel("Sum of squared errors")
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
and a visualization of the weights to each output unit. Each set of weights seems to reproduce, in a very rough manner, a generalization of the target digit.
figure(figsize=(15,2)) for i in xrange(m) : subplot(1,m,i+1) title(i) im = to_mat(w[i,1:]) imshow(im, cmap=cm.bone) axis('off') show()
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
Testing
Initializing data and parameters
Now we create a new dataset to test the network and reset some variables:
#----------------------------------------------------------- # test # Set the number of patterns n_patterns = 1000 # Take 'n_patterns' rows indices = arange(test_length) shuffle(indices) indices = indices[:n_patterns] # Get patterns patterns = array(mndata.test_images)[indices] # Rescale all patterns between 0 and 1 patterns = sign(patterns/255.0) # Get the labels of the patterns labels = array(mndata.test_labels)[indices] # Constants # Create a list of pattern indices. # We will reshuffle it at each # repetition of the series pattern_indices = arange(n_patterns) shuffle(pattern_indices) # Clear variables x *= 0 net *= 0 y *= 0 # We will store the input, output and error history input_store = zeros([patterns.shape[1], n_patterns]) output_store = zeros([m, n_patterns]) target_store = zeros(n_patterns) error_store = zeros(n_patterns)
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
Spreading of the network during test
The network reacts to each test pattern in a single spreading timestep:
# Iterate trials
for p in xrange(n_patterns) :

    # Aggregate inputs and the bias unit
    x = hstack([ 1, patterns[p] ])

    # !!!! The dot product becomes a matrix
    # product with more than one output unit !!!!
    net = dot(w,x)

    # output function
    y = step(net)

    y_index = squeeze(find(y==1))
    y_index_target = int(labels[p])

    # A trial counts as correct only if exactly one
    # output unit is active and it is the target unit
    error = 0
    if y_index.size == 1 :
        if y_index == y_index_target :
            error = 1

    # store
    input_store[:,p] = x[1:]
    output_store[:,p] = y
    target_store[p] = labels[p]
    error_store[p] = error
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
Let us see what is the proportion of correct answers of the network:
print "Proportion of correct answers:{}" \ .format(sum(error_store)/float(n_patterns))
Proportion of correct answers:0.655
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
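The overall proportion hides which digits get confused with which. As a small extra diagnostic, a confusion matrix can be tabulated from the arrays stored in the test loop above (a minimal sketch; it counts a sample only when the network gave exactly one answer):

confusion = zeros([m, m])
for p in xrange(n_patterns):
    y_index = squeeze(find(output_store[:, p] == 1))
    if y_index.size == 1:
        # rows are target digits, columns are the network's answers
        confusion[int(target_store[p]), int(y_index)] += 1
print confusion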
Plotting the results of test
Now we plot a few test samples to get a concrete idea. For each sample we plot the input digit on top, the answer of the network in the center, and the target digit at the bottom. Square brackets indicate that the network gave zero or more than one answer.
import matplotlib.gridspec as gridspec gs = gridspec.GridSpec(8, 4*10) n_patterns = 20 for p in xrange(n_patterns) : im = to_mat(input_store[:,p]) k = p%10 if k==0 : fig = figure(figsize=(15,2)) ax1 = fig.add_subplot(gs[:4,(k*4):(k*4+4)]) ax1.imshow(im, cmap=cm.binary) ax1.set_axis_off() if error_store[p] == True : color = "blue" else : color = "red" y = squeeze(find(output_store[:,p]==1)) y_target = int(labels[p]) ax2 = fig.add_subplot(gs[4:6,(k*4):(k*4+4)]) ax2.text(0.5,0.5,"{}".format(y), fontsize="16", color=color) axis("off") ax3 = fig.add_subplot(gs[6:,(k*4):(k*4+4)]) ax3.text(0.5,0.5,"{}".format(y_target), fontsize = "16", color=color ) axis("off") if k == 9: show()
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
The next cell is just for styling
from IPython.core.display import HTML def css_styling(): styles = open("../style/ipybn.css", "r").read() return HTML(styles) css_styling()
_____no_output_____
MIT
course/perceptron-MNIST-simulation.ipynb
cecconeurale/neunet-basics
BE 240 Lecture 4
Sub-SBML
Modeling diffusion, shared resources, and compartmentalized systems
_Ayush Pandey_
# This notebook is designed to be converted to a HTML slide show # To do this in the command prompt type (in the folder containing the notebook): # jupyter nbconvert BE240_Lecture4_Sub-SBML.ipynb --to slides
_____no_output_____
BSD-3-Clause
examples/BE240_Lecture4_Sub-SBML.ipynb
BuildACell/subsbml
![image.png](attachment:image.png)
![image.png](attachment:image.png)
An example: Three different "subsystems" - each with its SBML model
Another "signal in mixture" subsystem - models signal in the environment / mixture
Using Sub-SBML we can obtain the combined model for such a system with
* transport across the membrane
* shared resources: ATP, Ribosome, etc.
* resolution of naming conflicts (Ribo, Ribosome, RNAP, RNAPolymerase, etc.)
![image.png](attachment:image.png)
Installing Sub-SBML
```
git clone https://github.com/BuildACell/subsbml.git
```
cd to the `subsbml` directory, then run the following command to install the package in your environment:
```
python setup.py install
```
Dependencies:
1. python-libsbml: Run `pip install python-libsbml` if you don't have it already. You probably already have this installed as it is also a dependency for bioscrape.
2. A simulator: You will need a simulator of your choice to simulate the SBML models that Sub-SBML generates. Bioscrape is an example of a simulator and we will be using that for simulations.
Update your bioscrape installation
From the bioscrape directory, run the following if you do not have a remote fork (your own Github fork of the original bioscrape repository, `biocircuits/bioscrape`). To list all remote repositories that your bioscrape directory is connected to you can run `git remote -v`. The `origin` in the next two commands corresponds to the biocircuits/bioscrape github repository (you should change it if your remote has a different name):
```
git pull origin master
python setup.py install
```
Update your BioCRNpyler installation as well if you plan to use your own BioCRNpyler models with Sub-SBML. Run the same commands as for bioscrape from the BioCRNpyler directory.
Sub-SBML notes: On "name" and "identifier":
> SBML elements can have a name and an identifier argument. A `name` is supposed to be a human-readable name of the particular element in the model. On the other hand, an `identifier` is what the software tool reads. Hence, the `identifier` argument in an SBML model is mandatory whereas the `name` argument is optional. Sub-SBML works with the `name` arguments of various model components to figure out which components interact, get combined, or are shared. Bioscrape/BioCRNpyler and other common software tools generate SBML models with `name` arguments added to various components such as species and parameters. As an example, to combine two species, Sub-SBML looks at the names of the two species and if they are the same - they are combined together and given a new identifier, but the name remains the same.
A simple Sub-SBML use case:
A simple example where we have two different models: transcription and translation. Using Sub-SBML, we can combine these two together and run simulations.
# Import statements from subsbml.Subsystem import createNewSubsystem, createSubsystem import numpy as np import pylab as plt
_____no_output_____
BSD-3-Clause
examples/BE240_Lecture4_Sub-SBML.ipynb
BuildACell/subsbml
Transcription Model:
Consider the following simple transcription-only model where $G$ is a gene, $T$ is a transcript, and $S$ is the signaling molecule. We can write the following reduced-order dynamics:
1. $G \xrightarrow[]{\rho_{tx}(G, S)} G + T$; \begin{align} \rho_{tx}(G, S) = G K_{X}\frac{S^{2}}{K_{S}^{2}+S^{2}}\\\end{align}Here, $S$ is the inducer signal that cooperatively activates the transcription of the gene $G$. Since this is a positive activation of the gene by the inducer, we have a positive proportional Hill function.
2. $T \xrightarrow[]{\delta} \varnothing$; mass-action kinetics at rate $\delta$.
Translation model:
1. $T \xrightarrow[]{\rho_{tl}(T)} T+X$; \begin{align} \rho_{tl}(T) = K_{TR} \frac{T}{K_{R} + T}\\\end{align}Here $X$ is the protein species. The lumped parameters $K_{TR}$ and $K_R$ model effects due to ribosome saturation. This is a Hill function similar to the one derived in the enzymatic reaction example.
2. $X \xrightarrow[]{\delta} \varnothing$; mass-action kinetics at rate $\delta$.
# Import SBML models by creating Subsystem class objects ss1 = createSubsystem('transcription_SBML_model.xml') ss2 = createSubsystem('translation_SBML_model.xml') ss1.renameSName('mRNA_T', 'T') # Combine the two subsystems together tx_tl_subsystem = ss1 + ss2 # The longer way to do the same thing: # tx_tl_subsystem = createNewSubsystem() # tx_tl_subsystem.combineSubsystems([ss1,ss2], verbose = True) # Set signal concentration (input) - manually and get ID for protein X X_id = tx_tl_subsystem.getSpeciesByName('X').getId() # Writing a Subsystem to an SBML file (Export SBML) _ = tx_tl_subsystem.writeSBML('txtl_ss.xml') tx_tl_subsystem.setSpeciesAmount('S',10) try: # Simulate with Bioscrape and plot the result timepoints = np.linspace(0,100,100) results, _ = tx_tl_subsystem.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'S = 10') tx_tl_subsystem.setSpeciesAmount('S',5) results, _ = tx_tl_subsystem.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'S = 5') plt.title('Protein X dynamics') plt.ylabel('[X]') plt.xlabel('Time') plt.legend() plt.show() except: print('Simulator not found') # Viewing the change log for the changes that Sub-SBML made # print(ss1.changeLog) # print(ss2.changeLog) print(tx_tl_subsystem.changeLog)
{'default_bioscrape_generated_model_47961': 'default_bioscrape_generated_model_245758_combined', 'default_bioscrape_generated_model_245758': 'default_bioscrape_generated_model_245758_combined', 'mRNA_T_bioscrape_generated_model_245758': 'mRNA_T_bioscrape_generated_model_245758_1_combined', 'T_bioscrape_generated_model_47961': 'mRNA_T_bioscrape_generated_model_245758_1_combined', 'delta_bioscrape_generated_model_245758': 'delta_combined', 'delta_bioscrape_generated_model_47961': 'delta_combined', 'n_bioscrape_generated_model_245758': 'n_combined', 'n_bioscrape_generated_model_47961': 'n_combined'}
BSD-3-Clause
examples/BE240_Lecture4_Sub-SBML.ipynb
BuildACell/subsbml
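The change log shown above is also useful programmatically: it maps each original identifier to its identifier in the combined model. As a small sketch, we can look up the combined identifier of the transcript species (this assumes, as the printed log suggests, that the species' original identifier appears as a key):

old_id = ss1.getSpeciesByName('T').getId()
print(tx_tl_subsystem.changeLog[old_id])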
Signal induction model:
1. $\varnothing \xrightarrow[]{\rho(I)} S$; \begin{align} \rho(I) = K_{0} \frac{I^2}{K_{I} + I^2}\\\end{align}Here $S$ is the signal produced on induction by an inducer $I$. The lumped parameters $K_{0}$ and $K_{I}$ model effects of cooperative production of the signal by the inducer. This is a Hill function similar to the one derived in the enzymatic reaction example.
ss3 = createSubsystem('signal_in_mixture.xml') # Signal subsystem (production of signal molecule) combined_ss = ss1 + ss2 + ss3 # Alternatively combined_ss = createNewSubsystem() combined_ss.combineSubsystems([ss1,ss2,ss3]) # Writing a Subsystem to an SBML file (Export SBML) combined_ss.writeSBML('txtl_combined.xml') # Set signal concentration (input) - manually and get ID for protein X combined_ss.setSpeciesAmount('I',10) X_id = combined_ss.getSpeciesByName('X').getId() try: # Simulate with Bioscrape and plot the result timepoints = np.linspace(0,100,100) results, _ = combined_ss.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'I = 10') combined_ss.setSpeciesAmount('I',2) results, _ = combined_ss.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'I = 5') plt.title('Protein X dynamics') plt.ylabel('[X]') plt.xlabel('Time') plt.legend() plt.show() except: print('Simulator not found') combined_ss.changeLog
_____no_output_____
BSD-3-Clause
examples/BE240_Lecture4_Sub-SBML.ipynb
BuildACell/subsbml
What does Sub-SBML look for?
1. For compartments: if two compartments have the same `name` and the same `size` attributes => they are combined together.
2. For species: if two species have the same `name` attribute => they are combined together. If the initial amounts are not the same, the first amount is set. It is easy to set species amounts later.
3. For parameters: if two parameters have the same `name` attribute **and** the same `value` => they are combined together.
4. For reactions: if two reactions have the same `name` **and** the same reaction string (reactants -> products) => they are combined together.
5. Other SBML components are also merged.
Utility functions for Subsystems
1. Set the `verbose` keyword argument to `True` to get a list of detailed warning messages that describe the changes being made to the models. Helpful in debugging and creating clean models when combining multiple models.
2. Use the `renameSName` method of a `Subsystem` to rename any species' name throughout a model, and `renameSIdRefs` to rename identifiers.
3. Use the `createBasicSubsystem()` function to get a basic "empty" subsystem model.
4. Use `getSpeciesByName` to get all species with a given name in a Subsystem model.
5. Use the `shareSubsystems` method, similar to the `combineSubsystems` method, if you are only interested in getting a model with shared resource species combined together.
6. Set the `combineNames` keyword argument to `False` in the `combineSubsystems` method to combine the Subsystem objects while treating elements with the same `name` as different.
Modeling transport across membranes
![image.png](attachment:image.png)
System 1 : TX-TL with IPTG reservoir and no membrane
from subsbml.System import System, combineSystems cell_1 = System('cell_1') ss1 = createSubsystem('txtl_ss.xml') ss1.renameSName('S', 'IPTG') ss2 = createSubsystem('IPTG_reservoir.xml') IPTG_external_conc = ss2.getSpeciesByName('IPTG').getInitialConcentration() cell_1.setInternal([ss1]) cell_1.setExternal([ss2]) # cell_1.setMembrane() # Membrane-less system ss1.setSpeciesAmount('IPTG', IPTG_external_conc) cell_1_model = cell_1.getModel() # Get a Subsystem object that represents the combined model for cell_1 cell_1_model.writeSBML('cell_1_model.xml')
_____no_output_____
BSD-3-Clause
examples/BE240_Lecture4_Sub-SBML.ipynb
BuildACell/subsbml
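As a small check, the exported file can be round-tripped: the combined model written above is itself a valid SBML file and can be re-imported with `createSubsystem` (a sketch using only calls already demonstrated; if a species name is ambiguous across compartments, `getSpeciesByName` may need the `compartment` keyword used later for System 3):

cell_1_reloaded = createSubsystem('cell_1_model.xml')
print(cell_1_reloaded.getSpeciesByName('X').getId())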
System 2 : TX-TL with IPTG reservoir and a simple membrane
Membrane : IPTG external and internal diffusion in a one-step reversible reaction
from subsbml import System, createSubsystem, combineSystems, createNewSubsystem

ss1 = createSubsystem('txtl_ss.xml')
ss1.renameSName('S','IPTG')
ss2 = createSubsystem('IPTG_reservoir.xml')

# Create a simple IPTG membrane where IPTG goes in and out of the membrane via a reversible reaction
mb2 = createSubsystem('membrane_IPTG.xml', membrane = True)

# cell_2 = System('cell_2',ListOfInternalSubsystems = [ss1],
#                 ListOfExternalSubsystems = [ss2],
#                 ListOfMembraneSubsystems = [mb2])

cell_2 = System('cell_2')
cell_2.setInternal(ss1)
cell_2.setExternal(ss2)
cell_2.setMembrane(mb2)

cell_2_model = cell_2.getModel()
cell_2_model.setSpeciesAmount('IPTG', 1e4, compartment = 'cell_2_external')
cell_2_model.writeSBML('cell_2_model.xml')
The subsystem from membrane_IPTG.xml has multiple compartments
BSD-3-Clause
examples/BE240_Lecture4_Sub-SBML.ipynb
BuildACell/subsbml
System 3 : TX-TL with IPTG reservoir and detailed membrane diffusion
Membrane : External IPTG binds to a transport protein and forms a complex. This complex then transports IPTG into the interior of the cell.
# Create a more detailed IPTG membrane where IPTG binds to an intermediate transporter protein, forms a complex # then transports out of the cell system to the external environment mb3 = createSubsystem('membrane_IPTG_detailed.xml', membrane = True) cell_3 = System('cell_3',ListOfInternalSubsystems = [ss1], ListOfExternalSubsystems = [ss2], ListOfMembraneSubsystems = [mb3]) cell_3_model = cell_3.getModel() cell_3_model.setSpeciesAmount('IPTG', 1e4, compartment = 'cell_3_external') cell_3_model.writeSBML('cell_3_model.xml') combined_model = combineSystems([cell_1, cell_2, cell_3]) try: import numpy as np import matplotlib.pyplot as plt timepoints = np.linspace(0,2,100) results_1, _ = cell_1_model.simulateWithBioscrape(timepoints) results_2, _ = cell_2_model.simulateWithBioscrape(timepoints) results_3, _ = cell_3_model.simulateWithBioscrape(timepoints) X_id1 = cell_1_model.getSpeciesByName('X').getId() X_id2 = cell_2_model.getSpeciesByName('X', compartment = 'cell_2_internal').getId() X_id3 = cell_3_model.getSpeciesByName('X', compartment = 'cell_3_internal').getId() plt.plot(timepoints, results_1[X_id1], linewidth = 3, label = 'No membrane') plt.plot(timepoints, results_2[X_id2], linewidth = 3, label = 'Simple membrane') plt.plot(timepoints, results_3[X_id3], linewidth = 3, label = 'Advanced membrane') plt.xlabel('Time') plt.ylabel('[X]') plt.legend() plt.show() timepoints = np.linspace(0,200,100) results_1, _ = cell_1_model.simulateWithBioscrape(timepoints) results_2, _ = cell_2_model.simulateWithBioscrape(timepoints) results_3, _ = cell_3_model.simulateWithBioscrape(timepoints) X_id1 = cell_1_model.getSpeciesByName('X').getId() X_id2 = cell_2_model.getSpeciesByName('X', compartment = 'cell_2_internal').getId() X_id3 = cell_3_model.getSpeciesByName('X', compartment = 'cell_3_internal').getId() plt.plot(timepoints, results_1[X_id1], linewidth = 3, label = 'No membrane') plt.plot(timepoints, results_2[X_id2], linewidth = 3, label = 'Simple membrane') plt.plot(timepoints, results_3[X_id3], linewidth = 3, label = 'Advanced membrane') plt.xlabel('Time') plt.ylabel('[X]') plt.legend() plt.show() except: print('Simulator not found')
_____no_output_____
BSD-3-Clause
examples/BE240_Lecture4_Sub-SBML.ipynb
BuildACell/subsbml
The Extended Kalman Filter
#format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style()
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
At this point in the book we have developed the theory for the linear Kalman filter. Then, in the last two chapters we broached the topic of using Kalman filters for nonlinear problems. In this chapter we will learn the Extended Kalman filter (EKF). The EKF handles nonlinearity by linearizing the system at the point of the current estimate, and then the linear Kalman filter is used to filter this linearized system. It was one of the very first techniques used for nonlinear problems, and it remains the most common technique. The EKF provides significant mathematical challenges to the designer of the filter; this is the most challenging chapter of the book. To be honest, I do everything I can to avoid the EKF in favor of other techniques that have been developed to filter nonlinear problems. However, the topic is unavoidable; all classic papers and a majority of current papers in the field use the EKF. Even if you do not use the EKF in your own work you will need to be familiar with the topic to be able to read the literature.
Linearizing the Kalman Filter
The Kalman filter uses linear equations, so it does not work with nonlinear problems. Problems can be nonlinear in two ways. First, the process model might be nonlinear. An object falling through the atmosphere encounters drag which reduces its acceleration. The amount of drag varies based on the velocity of the object. The resulting behavior is nonlinear - it cannot be modeled with linear equations. Second, the measurements could be nonlinear. For example, a radar gives a range and bearing to a target. We use trigonometry, which is nonlinear, to compute the position of the target.
For the linear filter we have these equations for the process and measurement models:
$$\begin{aligned}\overline{\mathbf x} &= \mathbf{Ax} + \mathbf{Bu} + w_x\\\mathbf z &= \mathbf{Hx} + w_z\end{aligned}$$
For the nonlinear model these equations must be modified to read:
$$\begin{aligned}\overline{\mathbf x} &= f(\mathbf x, \mathbf u) + w_x\\\mathbf z &= h(\mathbf x) + w_z\end{aligned}$$
The linear expression $\mathbf{Ax} + \mathbf{Bu}$ is replaced by a nonlinear function $f(\mathbf x, \mathbf u)$, and the linear expression $\mathbf{Hx}$ is replaced by a nonlinear function $h(\mathbf x)$.
You might imagine that we proceed by finding a new set of Kalman filter equations that optimally solve these equations. But if you remember the charts in the **Nonlinear Filtering** chapter you'll recall that passing a Gaussian through a nonlinear function results in a probability distribution that is no longer Gaussian. So this will not work.
The EKF does not alter the Kalman filter's linear equations. Instead, it *linearizes* the nonlinear equations at the point of the current estimate, and uses this linearization in the linear Kalman filter. *Linearize* means what it sounds like. We find a line that most closely matches the curve at a defined point. The graph below linearizes the parabola $f(x)=x^2 - 2x$ at $x=1.5$.
import ekf_internal ekf_internal.show_linearization()
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
If the curve above is the process model, then the dotted lines shows the linearization of that curve for the estimate $x=1.5$.We linearize systems by finding the slope of the curve at the given point:$$\begin{aligned}f(x) &= x^2 -2x \\\frac{df}{dx} &= 2x - 2\end{aligned}$$and then finding its value at the evaluation point:$$\begin{aligned}m &= f'(x=1.5) \\&= 2(1.5) - 2 \\&= 1\end{aligned}$$ Our math will be more complicated because we are working with systems of differential equations. We linearize $f(\mathbf x, \mathbf u)$, and $h(\mathbf x)$ by taking the partial derivatives ($\frac{\partial}{\partial \mathbf x}$) of each to evaluate $\mathbf A$ and $\mathbf H$ at the point $\mathbf x_t$ and $\mathbf u_t$. This gives us the the system dynamics matrix and measurement model matrix:$$\begin{aligned}\mathbf A &= {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}} \\\mathbf H &= \frac{\partial{h(\mathbf x_t)}}{\partial{\mathbf x}}\biggr|_{\mathbf x_t} \end{aligned}$$ Finally, we find the discrete state transition matrix $\mathbf F$ by using the approximation of the Taylor-series expansion of $e^{\mathbf A \Delta t}$:$$\mathbf F = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A\Delta t)^3}{3!} + ... $$Alternatively, you can use one of the other techniques we learned in the **Kalman Math** chapter. This leads to the following equations for the EKF. I placed them beside the equations for the linear Kalman filter, and put boxes around the only changes:$$\begin{array}{l|l}\text{linear Kalman filter} & \text{EKF} \\\hline & \boxed{\mathbf A = {\frac{\partial{f(\mathbf x_t, \mathbf u_t)}}{\partial{\mathbf x}}}\biggr|_{{\mathbf x_t},{\mathbf u_t}}} \\& \boxed{\mathbf F = e^{\mathbf A \Delta t}} \\\mathbf{\overline x} = \mathbf{Fx} + \mathbf{Bu} & \boxed{\mathbf{\overline x} = f(\mathbf x, \mathbf u)} \\\mathbf{\overline P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q & \mathbf{\overline P} = \mathbf{FPF}^\mathsf{T}+\mathbf Q \\\hline& \boxed{\mathbf H = \frac{\partial{h(\mathbf x_t)}}{\partial{\mathbf x}}\biggr|_{\mathbf x_t}} \\\textbf{y} = \mathbf z - \mathbf{H \bar{x}} & \textbf{y} = \mathbf z -\mathbf{H \bar{x}}\\\mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} & \mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} \\\mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} & \mathbf x=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} \\\mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}} & \mathbf P= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}}\end{array}$$We don't normally use $\mathbf{Fx}$ to propagate the state for the EKF as the linearization causes inaccuracies. It is typical to compute $\overline{\mathbf x}$ using a numerical integration technique such as Euler or Runge Kutta. Thus I wrote $\mathbf{\overline x} = f(\mathbf x, \mathbf u)$.I think the easiest way to understand the EKF is to start off with an example. After we do a few examples you may want to come back and reread this section. Example: Tracking a Flying Airplane We will start by simulating tracking an airplane by using ground based radar. We implemented a UKF for this problem in the last chapter. Now we will implement an EKF for the same problem so we can compare both the filter performance and the level of effort required to implement the filter.Radars work by emitting a beam of radio waves and scanning for a return bounce. 
Anything in the beam's path will reflect some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar the system can compute the *slant distance* - the straight line distance from the radar installation to the object.
For this example we want to take the slant range measurement from the radar and compute the horizontal position (distance of aircraft from the radar measured over the ground) and altitude of the aircraft, as in the diagram below.
import ekf_internal ekf_internal.show_radar_chart()
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
This gives us the equality $x=\sqrt{slant^2 - altitude^2}$.
Design the State Variables
We want to track the position of an aircraft assuming a constant velocity and altitude, and measurements of the slant distance to the aircraft. That means we need 3 state variables - horizontal distance, horizontal velocity, and altitude:
$$\mathbf x = \begin{bmatrix}\mathtt{distance} \\\mathtt{velocity}\\ \mathtt{altitude}\end{bmatrix}= \begin{bmatrix}x \\ \dot x\\ y\end{bmatrix}$$
Design the Process Model
We assume a Newtonian, kinematic system for the aircraft. We've used this model in previous chapters, so by inspection you may recognize that we want
$$\mathbf F = \left[\begin{array}{cc|c} 1 & \Delta t & 0\\0 & 1 & 0 \\ \hline0 & 0 & 1\end{array}\right]$$
I've partitioned the matrix into blocks to show that the upper left block is a constant velocity model for $x$, and the lower right block is a constant position model for $y$.
However, let's practice finding these matrices for a nonlinear system. We model nonlinear systems with a set of differential equations. We need an equation in the form 
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf{w}$$
where $\mathbf{w}$ is the system noise. The variables $x$ and $y$ are independent so we can compute them separately. The differential equations for motion in one dimension are:
$$\begin{aligned}v &= \dot x \\a &= \ddot{x} = 0\end{aligned}$$
Now we put the differential equations into state-space form. If this was a second or greater order differential system we would have to first reduce them to an equivalent set of first-order equations. The equations are first order, so we put them in state space matrix form as
$$\begin{aligned}\begin{bmatrix}\dot x \\ \ddot{x}\end{bmatrix} &= \begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\ \dot x\end{bmatrix} \\ \dot{\mathbf x} &= \mathbf{Ax}\end{aligned}$$
where $\mathbf A=\begin{bmatrix}0&1\\0&0\end{bmatrix}$. Recall that $\mathbf A$ is the *system dynamics matrix*. It describes a set of linear differential equations. From it we must compute the state transition matrix $\mathbf F$. $\mathbf F$ describes a discrete set of linear equations which compute $\mathbf x$ for a discrete time step $\Delta t$. We find it by solving the following power series expansion of the matrix exponential:
$$\mathbf F(\Delta t) = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A \Delta t)^3}{3!} + ... $$
$\mathbf A^2 = \begin{bmatrix}0&0\\0&0\end{bmatrix}$, so all higher powers of $\mathbf A$ are also $\mathbf{0}$. Thus the power series expansion is:
$$\begin{aligned}\mathbf F(\Delta t) &=\mathbf{I} + \mathbf A\Delta t + \mathbf{0} \\&= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\&= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}\end{aligned}$$
This gives us
$$\begin{aligned}\mathbf{\overline x} &=\mathbf{Fx} \\\mathbf{\overline x} &=\begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}\mathbf x\end{aligned}$$
This is the same result used by the kinematic equations! This exercise was unnecessary other than to illustrate linearizing differential equations. Subsequent examples will require you to use these techniques. 
Design the Measurement Model
The measurement function for our filter needs to take the filter state $\mathbf x$ and turn it into a measurement, which is the slant range distance.
We use the Pythagorean theorem to derive
$$h(\mathbf x) = \sqrt{x^2 + y^2}$$
The relationship between the slant distance and the position on the ground is nonlinear due to the square root term. To use it in the EKF we must linearize it. As we discussed above, the best way to linearize an equation at a point is to find its slope, which we do by evaluating its partial derivative at that point:
$$\mathbf H = \frac{\partial{h(\mathbf x)}}{\partial{\mathbf x}}\biggr|_{\mathbf x_t}$$
The partial derivative of a matrix is called a Jacobian, and takes the form 
$$\frac{\partial \mathbf H}{\partial \mathbf x} = \begin{bmatrix}\frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} &\dots \\\frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} &\dots \\\vdots & \vdots\end{bmatrix}$$
In other words, each element in the matrix is the partial derivative of the function $h$ with respect to the variables $x$. For our problem we have
$$\mathbf H = \begin{bmatrix}{\partial h}/{\partial x} & {\partial h}/{\partial \dot{x}} & {\partial h}/{\partial y}\end{bmatrix}$$
where $h(x) = \sqrt{x^2 + y^2}$.
Solving each in turn:
$$\begin{aligned}\frac{\partial h}{\partial x} &= \frac{\partial}{\partial x} \sqrt{x^2 + y^2} \\&= \frac{x}{\sqrt{x^2 + y^2}}\end{aligned}$$
and
$$\begin{aligned}\frac{\partial h}{\partial \dot{x}} &=\frac{\partial}{\partial \dot{x}} \sqrt{x^2 + y^2} \\ &= 0\end{aligned}$$
and
$$\begin{aligned}\frac{\partial h}{\partial y} &= \frac{\partial}{\partial y} \sqrt{x^2 + y^2} \\ &= \frac{y}{\sqrt{x^2 + y^2}}\end{aligned}$$
giving us 
$$\mathbf H = \begin{bmatrix}\frac{x}{\sqrt{x^2 + y^2}} & 0 & \frac{y}{\sqrt{x^2 + y^2}}\end{bmatrix}$$
This may seem daunting, so step back and recognize that all of this math is doing something very simple. We have an equation for the slant range to the airplane which is nonlinear. The Kalman filter only works with linear equations, so we need to find a linear equation that approximates $\mathbf H$. As we discussed above, finding the slope of a nonlinear equation at a given point is a good approximation. For the Kalman filter, the 'given point' is the state variable $\mathbf x$ so we need to take the derivative of the slant range with respect to $\mathbf x$. To make this more concrete, let's now write a Python function that computes the Jacobian of $\mathbf H$ for this problem. The `ExtendedKalmanFilter` class will be using this to generate `ExtendedKalmanFilter.H` at each step of the process.
from math import sqrt
from numpy import array

def HJacobian_at(x):
    """ compute Jacobian of H matrix at x """

    horiz_dist = x[0]
    altitude   = x[2]
    denom = sqrt(horiz_dist**2 + altitude**2)
    return array ([[horiz_dist/denom, 0., altitude/denom]])
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
Finally, let's provide the code for $h(\mathbf x)$
def hx(x): """ compute measurement for slant range that would correspond to state x. """ return (x[0]**2 + x[2]**2) ** 0.5
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
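As a sanity check, the analytic Jacobian can be compared against a numerical finite-difference approximation of `hx` at a sample state (a quick test, not part of the filter itself; the sample values are arbitrary):

import numpy as np

x_sample = np.array([1000., 100., 1000.])  # down range, velocity, altitude
analytic = HJacobian_at(x_sample)

eps = 1e-6
numeric = np.zeros((1, 3))
for k in range(3):
    dx = np.zeros(3)
    dx[k] = eps
    numeric[0, k] = (hx(x_sample + dx) - hx(x_sample)) / eps

print(analytic)  # the two rows should agree to several decimal places
print(numeric)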
Now let's write a simulation for our radar.
from numpy.random import randn
import math

class RadarSim(object):
    """ Simulates the radar signal returns from an object
    flying at a constant altitude and velocity in 1D. 
    """
    
    def __init__(self, dt, pos, vel, alt):
        self.pos = pos
        self.vel = vel
        self.alt = alt
        self.dt = dt
        
    def get_range(self):
        """ Returns slant range to the object. Call once 
        for each new measurement at dt time from last call.
        """
        
        # add some process noise to the system
        self.vel = self.vel + .1*randn()
        self.alt = self.alt + .1*randn()
        self.pos = self.pos + self.vel*self.dt
    
        # add measurement noise
        err = self.pos * 0.05*randn()
        slant_dist = math.sqrt(self.pos**2 + self.alt**2)
        
        return slant_dist + err
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
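Before designing the noise matrices, it is worth glancing at a few raw measurements from this simulator (illustration only; your numbers will differ because of the random noise):

radar_test = RadarSim(dt=0.05, pos=0., vel=100., alt=1000.)
for _ in range(3):
    print(radar_test.get_range())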
Design Process and Measurement Noise
The radar returns the slant range distance. A good radar can achieve an accuracy of $\sigma_{range}= 5$ meters, so we will use that value. This gives us
$$\mathbf R = \begin{bmatrix}\sigma_{range}^2\end{bmatrix} = \begin{bmatrix}25\end{bmatrix}$$
The design of $\mathbf Q$ requires some discussion. The state $\mathbf x= \begin{bmatrix}x & \dot x & y\end{bmatrix}^\mathtt{T}$. The first two elements are position (down range distance) and velocity, so we can use `Q_discrete_white_noise` to compute the values for the upper left hand side of $\mathbf Q$. The third element of $\mathbf x$ is altitude, which we are assuming is independent of the down range distance. That leads us to a block design of $\mathbf Q$ of:
$$\mathbf Q = \begin{bmatrix}\mathbf Q_\mathtt{x} & 0 \\ 0 & \mathbf Q_\mathtt{y}\end{bmatrix}$$
Implementation
The `FilterPy` library provides the class `ExtendedKalmanFilter`. It works very similarly to the `KalmanFilter` class we have been using, except that it allows you to provide functions that compute the Jacobian of $\mathbf H$ and the function $h(\mathbf x)$. We have already written the code for these two functions, so let's get going.
We start by importing the filter and creating it. There are 3 variables in `x` and only 1 measurement. At the same time we will create our radar simulator.
```python
from filterpy.kalman import ExtendedKalmanFilter

rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)
radar = RadarSim(dt, pos=0., vel=100., alt=1000.)
```
We will initialize the filter with a deliberately imperfect guess of the airplane's state
```python
rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000])
```
We assign the system matrix using the first two terms of the Taylor series expansion we computed above.
```python
dt = 0.05
rk.F = eye(3) + array([[0, 1, 0],
                       [0, 0, 0],
                       [0, 0, 0]])*dt
```
After assigning reasonable values to $\mathbf R$, $\mathbf Q$, and $\mathbf P$ we can run the filter with a simple loop
```python
for i in range(int(20/dt)):
    z = radar.get_range()
    rk.update(array([z]), HJacobian_at, hx)
    rk.predict()
```
Adding some boilerplate code to save and plot the results we get:
from filterpy.common import Q_discrete_white_noise from filterpy.kalman import ExtendedKalmanFilter from numpy import eye, array, asarray import numpy as np dt = 0.05 rk = ExtendedKalmanFilter(dim_x=3, dim_z=1) radar = RadarSim(dt, pos=0., vel=100., alt=1000.) # make an imperfect starting guess rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000]) rk.F = eye(3) + array([[0, 1, 0], [0, 0, 0], [0, 0, 0]]) * dt range_std = 5. # meters rk.R = np.diag([range_std**2]) rk.Q[0:2, 0:2] = Q_discrete_white_noise(2, dt=dt, var=0.1) rk.Q[2,2] = 0.1 rk.P *= 50 xs, track = [], [] for i in range(int(20/dt)): z = radar.get_range() track.append((radar.pos, radar.vel, radar.alt)) rk.update(array([z]), HJacobian_at, hx) xs.append(rk.x) rk.predict() xs = asarray(xs) track = asarray(track) time = np.arange(0, len(xs)*dt, dt) ekf_internal.plot_radar(xs, track, time)
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
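As a quick quantitative summary of the run above, one might also report the RMS error of the estimated down-range position against the simulated track (a small sketch using the xs and track arrays saved in the loop above):

print('position RMS error: %.3f' % np.sqrt(np.mean((xs[:, 0] - track[:, 0])**2)))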
Using SymPy to compute Jacobians
Depending on your experience with derivatives you may have found the computation of the Jacobian difficult. Even if you found it easy, a slightly more difficult problem easily leads to very difficult computations.
As explained in Appendix A, we can use the SymPy package to compute the Jacobian for us.
import sympy sympy.init_printing(use_latex=True) x, x_vel, y = sympy.symbols('x, x_vel y') H = sympy.Matrix([sympy.sqrt(x**2 + y**2)]) state = sympy.Matrix([x, x_vel, y]) H.jacobian(state)
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
This result is the same as the result we computed above, and with much less effort on our part!
Robot Localization
So, time to try a real problem. I warn you that this is far from a simple problem. However, most books choose simple, textbook problems with simple answers, and you are left wondering how to implement a real world solution. We will consider the problem of robot localization. We already implemented this in the **Unscented Kalman Filter** chapter, and I recommend you read that first.
In this scenario we have a robot that is moving through a landscape with sensors that give range and bearings to various landmarks. This could be a self-driving car using computer vision to identify trees, buildings, and other landmarks. It might be one of those small robots that vacuum your house, or a robot in a warehouse.
Our robot has 4 wheels configured the same as an automobile. It maneuvers by pivoting the front wheels. This causes the robot to pivot around the rear axle while moving forward. This is nonlinear behavior which we will have to model. The robot has a sensor that gives it approximate range and bearing to known targets in the landscape. This is nonlinear because computing a position from a range and bearing requires square roots and trigonometry. Both the process model and the measurement model are nonlinear. The EKF accommodates both, so we provisionally conclude that the EKF is a viable choice for this problem.
Robot Motion Model
At a first approximation an automobile steers by pivoting the front tires while moving forward. The front of the car moves in the direction that the wheels are pointing while pivoting around the rear tires. This simple description is complicated by issues such as slippage due to friction, the differing behavior of the rubber tires at different speeds, and the need for the outside tire to travel a different radius than the inner tire. Accurately modeling steering requires a complicated set of differential equations. For Kalman filtering, especially for lower-speed robotic applications, a simpler *bicycle model* has been found to perform well. This is a depiction of the model:
ekf_internal.plot_bicycle()
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
In the **Unscented Kalman Filter** chapter we derived these equations describing this model:
$$\begin{aligned} x &= x - R\sin(\theta) + R\sin(\theta + \beta) \\
y &= y + R\cos(\theta) - R\cos(\theta + \beta) \\
\theta &= \theta + \beta\end{aligned}
$$
where $\theta$ is the robot's heading.
You do not need to understand this model in detail if you are not interested in steering models. The important thing to recognize is that our motion model is nonlinear, and we will need to deal with that with our Kalman filter.
Design the State Variables
For our robot we will maintain the position and orientation of the robot:
$$\mathbf x = \begin{bmatrix}x \\ y \\ \theta\end{bmatrix}$$
Our control input $\mathbf u$ is the velocity $v$ and steering angle $\alpha$:
$$\mathbf u = \begin{bmatrix}v \\ \alpha\end{bmatrix}$$
Design the System Model
In general we model our system as a nonlinear motion model plus noise.
$$\overline x = x + f(x, u) + \mathcal{N}(0, Q)$$
Using the motion model for a robot that we created above, we can expand this to
$$\overline{\begin{bmatrix}x\\y\\\theta\end{bmatrix}} = \begin{bmatrix}x\\y\\\theta\end{bmatrix} + \begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}$$
We linearize this with a first-order Taylor expansion at $\mathbf x$:
$$f(\mathbf x, \mathbf u) \approx \mathbf x + \frac{\partial f(\mathbf x, \mathbf u)}{\partial \mathbf x}\biggr|_{\mathbf x, \mathbf u}\Delta\mathbf x $$
We evaluate the expansion at our state estimate $\mathbf x$; the derivative is the Jacobian of $f$. The Jacobian $\mathbf F$ is
$$\mathbf F = \frac{\partial f(x, u)}{\partial x} =\begin{bmatrix}\frac{\partial \dot x}{\partial x} & \frac{\partial \dot x}{\partial y} &\frac{\partial \dot x}{\partial \theta}\\\frac{\partial \dot y}{\partial x} & \frac{\partial \dot y}{\partial y} &\frac{\partial \dot y}{\partial \theta} \\\frac{\partial \dot{\theta}}{\partial x} & \frac{\partial \dot{\theta}}{\partial y} &\frac{\partial \dot{\theta}}{\partial \theta}\end{bmatrix}$$
When we calculate these we get
$$\mathbf F = \begin{bmatrix}1 & 0 & -R\cos(\theta) + R\cos(\theta+\beta) \\0 & 1 & -R\sin(\theta) + R\sin(\theta+\beta) \\0 & 0 & 1\end{bmatrix}$$
We can double-check our work with SymPy.
import sympy from sympy.abc import alpha, x, y, v, w, R, theta from sympy import symbols, Matrix sympy.init_printing(use_latex="mathjax", fontsize='16pt') time = symbols('t') d = v*time beta = (d/w)*sympy.tan(alpha) r = w/sympy.tan(alpha) fxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)], [y+r*sympy.cos(theta)- r*sympy.cos(theta+beta)], [theta+beta]]) J = fxu.jacobian(Matrix([x, y, theta])) J
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
That looks a bit complicated. We can use SymPy to substitute terms:
# reduce common expressions B, R = symbols('beta, R') J = J.subs((d/w)*sympy.tan(alpha), B) J.subs(w/sympy.tan(alpha), R)
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
In that form we can see that our computation of the Jacobian is correct.
Now we can turn our attention to the noise. Here, the noise is in our control input, so it is in *control space*. In other words, we command a specific velocity and steering angle, but we need to convert that into errors in $x, y, \theta$. In a real system this might vary depending on velocity, so it will need to be recomputed for every prediction. I will choose this as the noise model; for a real robot you will need to choose a model that accurately depicts the error in your system. 
$$\mathbf{M} = \begin{bmatrix}\sigma_{vel}^2 & 0 \\ 0 & \sigma_\alpha^2\end{bmatrix}$$
If this was a linear problem we would convert from control space to state space using the by now familiar $\mathbf{FMF}^\mathsf T$ form. Since our motion model is nonlinear we do not try to find a closed-form solution to this, but instead linearize it with a Jacobian which we will name $\mathbf{V}$. 
$$\mathbf{V} = \frac{\partial f(x, u)}{\partial u} = \begin{bmatrix}
\frac{\partial \dot x}{\partial v} & \frac{\partial \dot x}{\partial \alpha} \\
\frac{\partial \dot y}{\partial v} & \frac{\partial \dot y}{\partial \alpha} \\
\frac{\partial \dot{\theta}}{\partial v} & \frac{\partial \dot{\theta}}{\partial \alpha}
\end{bmatrix}$$
These partial derivatives become very difficult to work with. Let's compute them with SymPy. 
V = fxu.jacobian(Matrix([v, alpha])) V = V.subs(sympy.tan(alpha)/w, 1/R) V = V.subs(time*v/R, B) V = V.subs(time*v, 'd') V
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
This should give you an appreciation of how quickly the EKF becomes mathematically intractable.
This gives us the final form of our prediction equations:
$$\begin{aligned}\mathbf{\overline x} &= \mathbf x + \begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\
R\cos(\theta) - R\cos(\theta + \beta) \\
\beta\end{bmatrix}\\
\mathbf{\overline P} &=\mathbf{FPF}^{\mathsf T} + \mathbf{VMV}^{\mathsf T}\end{aligned}$$
One final point. This form of linearization is not the only way to predict $\mathbf x$. For example, we could use a numerical integration technique like *Runge Kutta* to compute the position of the robot in the future. In fact, if the time step is relatively large you will have to do that. As I am sure you are realizing, things are not as cut and dried with the EKF as it was for the KF. For a real problem you have to very carefully model your system with differential equations and then determine the most appropriate way to solve that system. The correct approach depends on the accuracy you require, how nonlinear the equations are, your processor budget, and numerical stability concerns. These are all topics beyond the scope of this book.
Design the Measurement Model
Now we need to design our measurement model. For this problem we are assuming that we have a sensor that receives a noisy bearing and range to multiple known locations in the landscape. The measurement model must convert the state $\begin{bmatrix}x & y&\theta\end{bmatrix}^\mathsf T$ into a range and bearing to the landmark. With $p$ the position of a landmark, the range $r$ is
$$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}$$
We assume that the sensor provides bearing relative to the orientation of the robot, so we must subtract the robot's orientation from the bearing to get the sensor reading, like so:
$$\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$
Thus our measurement function is
$$\begin{aligned}\mathbf z& = h(\mathbf x, p) &+ \mathcal{N}(0, R)\\&= \begin{bmatrix}\sqrt{(p_x - x)^2 + (p_y - y)^2} \\\arctan(\frac{p_y - y}{p_x - x}) - \theta \end{bmatrix} &+ \mathcal{N}(0, R)\end{aligned}$$
This is clearly nonlinear, so we need to linearize $h(\mathbf x, p)$ at $\mathbf x$ by taking its Jacobian. We compute that with SymPy below.
px, py = symbols('p_x, p_y')

z = Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)],
            [sympy.atan2(py-y, px-x) - theta]])
z.jacobian(Matrix([x, y, theta]))
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
Now we need to write that as a Python function. For example, we might write:
from math import sqrt
from numpy import array

def H_of(x, landmark_pos):
    """ compute Jacobian of H matrix where h(x) computes
    the range and bearing to a landmark for state x """
    px = landmark_pos[0]
    py = landmark_pos[1]
    hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2
    dist = sqrt(hyp)

    H = array(
        [[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0],
         [ (py - x[1, 0]) / hyp,  -(px - x[0, 0]) / hyp, -1]])
    return H
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
We also need to define a function that converts the system state into a measurement.
from math import atan2, sqrt
from numpy import array

def Hx(x, landmark_pos):
    """ takes a state variable and returns the measurement
    that would correspond to that state. """
    px = landmark_pos[0]
    py = landmark_pos[1]
    dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2)

    Hx = array([[dist],
                [atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]])
    return Hx
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
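Before wiring these into the filter it is worth spot-checking them. Here is a quick sanity check; the state and landmark values are made up purely for illustration:

```python
import numpy as np

x = np.array([[2.], [6.], [.3]])  # made-up state: x, y, heading (radians)
landmark = (5., 10.)              # made-up landmark position

print(Hx(x, landmark))            # predicted [range, bearing] measurement
print(H_of(x, landmark))          # 2x3 Jacobian evaluated at this state
```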
## Design Measurement Noise

This is quite straightforward, as we specify the measurement noise directly in measurement space, so it is linear. It is reasonable to assume that the range and bearing measurement noise is independent, hence

$$R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$

## Implementation

We will use `FilterPy`'s `ExtendedKalmanFilter` class to implement the filter. Its `predict()` method uses the standard linear equations. Our process model is nonlinear, so we will have to override `predict()` with our own version. I also want to use this class to simulate the robot, so I'll add a method `move()` that computes the position of the robot, which both `predict()` and my simulation can call.

The matrices for the prediction step are quite large. While writing this code I made several errors before I finally got it working. I only found my errors by using SymPy's `evalf` function, which allows you to evaluate a SymPy `Matrix` with specific values for the variables. I decided to demonstrate this technique, and to eliminate a possible source of bugs, by using SymPy in the Kalman filter. You'll need to understand a couple of points.

First, `evalf` uses a dictionary to pass in the values you want to use. For example, if your matrix contains an `x` and `y`, you can write

```python
M.evalf(subs={x:3, y:17})
```

to evaluate the matrix for `x=3` and `y=17`.

Second, `evalf` returns a `sympy.Matrix` object. Use `numpy.array(M).astype(float)` to convert it to a NumPy array. `numpy.array(M)` creates an array of type `object`, which is not what you want.

Here is the code for the EKF:
import numpy as np
import sympy
from sympy import symbols, Matrix
from math import sin, cos, tan
from filterpy.kalman import ExtendedKalmanFilter as EKF
from numpy import dot, array, sqrt

class RobotEKF(EKF):
    def __init__(self, dt, wheelbase, std_vel, std_steer):
        EKF.__init__(self, 3, 2, 2)
        self.dt = dt
        self.wheelbase = wheelbase
        self.std_vel = std_vel
        self.std_steer = std_steer

        a, x, y, v, w, theta, time = symbols(
            'a, x, y, v, w, theta, t')
        d = v*time
        beta = (d/w)*sympy.tan(a)
        r = w/sympy.tan(a)

        self.fxu = Matrix(
            [[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)],
             [y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)],
             [theta+beta]])

        self.F_j = self.fxu.jacobian(Matrix([x, y, theta]))
        self.V_j = self.fxu.jacobian(Matrix([v, a]))

        # save dictionary and its variables for later use
        self.subs = {x: 0, y: 0, v: 0, a: 0,
                     time: dt, w: wheelbase, theta: 0}
        self.x_x, self.x_y = x, y
        self.v, self.a, self.theta = v, a, theta

    def predict(self, u=0):
        self.x = self.move(self.x, u, self.dt)

        self.subs[self.theta] = self.x[2, 0]
        self.subs[self.v] = u[0]
        self.subs[self.a] = u[1]

        F = array(self.F_j.evalf(subs=self.subs)).astype(float)
        V = array(self.V_j.evalf(subs=self.subs)).astype(float)

        # covariance of motion noise in control space; the velocity
        # error is modeled as proportional to the commanded velocity
        M = array([[(self.std_vel*u[0])**2, 0],
                   [0, self.std_steer**2]])

        self.P = dot(F, self.P).dot(F.T) + dot(V, M).dot(V.T)

    def move(self, x, u, dt):
        hdg = x[2, 0]
        vel = u[0]
        steering_angle = u[1]
        dist = vel * dt

        if abs(steering_angle) > 0.001:  # is robot turning?
            beta = (dist / self.wheelbase) * tan(steering_angle)
            r = self.wheelbase / tan(steering_angle)  # radius

            dx = np.array([[-r*sin(hdg) + r*sin(hdg + beta)],
                           [r*cos(hdg) - r*cos(hdg + beta)],
                           [beta]])
        else:  # moving in straight line
            dx = np.array([[dist*cos(hdg)],
                           [dist*sin(hdg)],
                           [0]])
        return x + dx
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
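Here is the `evalf` technique from above in isolation, so you can see exactly what `predict()` is doing. This is only a sketch; the velocity, steering angle, and heading plugged in are arbitrary:

```python
import numpy as np

ekf = RobotEKF(dt=1.0, wheelbase=0.5, std_vel=0.1, std_steer=0.1)

# plug arbitrary values for velocity, steering angle, and heading
# into the symbolic Jacobian, then convert it to a float array
ekf.subs[ekf.v] = 1.1
ekf.subs[ekf.a] = 0.01
ekf.subs[ekf.theta] = 0.3

F = np.array(ekf.F_j.evalf(subs=ekf.subs)).astype(float)
print(F)   # a plain 3x3 float array, ready for the covariance update
```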
Now we have another issue to handle. The residual is notionally computed as $y = z - h(x)$, but this will not work because our measurement contains an angle. Suppose $z$ has a bearing of $1^\circ$ and $h(x)$ has a bearing of $359^\circ$. Naively subtracting them would yield a bearing difference of $-358^\circ$, which will throw off the computation of the Kalman gain; the correct angle difference in this case is $2^\circ$. So we will have to write code to correctly compute the bearing residual.
import numpy as np

def residual(a, b):
    """ compute residual (a-b) between measurements containing
    [range, bearing]. Bearing is normalized to [-pi, pi)"""
    y = a - b
    y[1] = y[1] % (2 * np.pi)  # force in range [0, 2 pi)
    if y[1] > np.pi:           # move to [-pi, pi)
        y[1] -= 2 * np.pi
    return y
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
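We can verify the wrap-around behavior with the $1^\circ$/$359^\circ$ example from above:

```python
import numpy as np

a = np.array([10., np.radians(1)])    # measurement: range 10, bearing 1 degree
b = np.array([10., np.radians(359)])  # prediction: range 10, bearing 359 degrees

print(np.degrees(residual(a, b)[1]))  # prints 2.0, not -358
```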
The rest of the code runs the simulation and plots the results, and shouldn't need too much comment by now. I create a variable `landmarks` that contains the coordinates of the landmarks. I update the simulated robot position 10 times a second, but run the EKF only once per second. This is for two reasons. First, we are not using Runge-Kutta to integrate the differential equations of motion, so a narrow time step allows our simulation to be more accurate. Second, it is fairly normal in embedded systems to have limited processing speed. This forces you to run your Kalman filter only as frequently as absolutely needed.
from filterpy.stats import plot_covariance_ellipse
from math import sqrt, tan, cos, sin, atan2
import matplotlib.pyplot as plt
import numpy as np
from numpy import array
from numpy.random import randn

dt = 1.0

def z_landmark(lmark, sim_pos, std_rng, std_brg):
    x, y = sim_pos[0, 0], sim_pos[1, 0]
    d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2)
    a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0]
    z = np.array([[d + randn()*std_rng],
                  [a + randn()*std_brg]])
    return z

def ekf_update(ekf, z, landmark):
    ekf.update(z, HJacobian=H_of, Hx=Hx,
               residual=residual,
               args=(landmark), hx_args=(landmark))

def run_localization(landmarks, std_vel, std_steer,
                     std_range, std_bearing,
                     step=10, ellipse_step=20, ylim=None):
    ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel,
                   std_steer=std_steer)
    ekf.x = array([[2, 6, .3]]).T  # x, y, heading
    ekf.P = np.diag([.1, .1, .1])
    ekf.R = np.diag([std_range**2, std_bearing**2])

    sim_pos = ekf.x.copy()  # simulated position
    # steering command (vel, steering angle radians)
    u = array([1.1, .01])

    plt.scatter(landmarks[:, 0], landmarks[:, 1],
                marker='s', s=60)

    track = []
    for i in range(200):
        sim_pos = ekf.move(sim_pos, u, dt/10.)  # simulate robot
        track.append(sim_pos)

        if i % step == 0:
            ekf.predict(u=u)

            if i % ellipse_step == 0:
                plot_covariance_ellipse(
                    (ekf.x[0, 0], ekf.x[1, 0]), ekf.P[0:2, 0:2],
                    std=6, facecolor='k', alpha=0.3)

            x, y = sim_pos[0, 0], sim_pos[1, 0]
            for lmark in landmarks:
                z = z_landmark(lmark, sim_pos,
                               std_range, std_bearing)
                ekf_update(ekf, z, lmark)

            if i % ellipse_step == 0:
                plot_covariance_ellipse(
                    (ekf.x[0, 0], ekf.x[1, 0]), ekf.P[0:2, 0:2],
                    std=6, facecolor='g', alpha=0.8)

    track = np.array(track)
    plt.plot(track[:, 0], track[:, 1], color='k', lw=2)
    plt.axis('equal')
    plt.title("EKF Robot localization")
    if ylim is not None:
        plt.ylim(*ylim)
    plt.show()
    return ekf

landmarks = array([[5, 10], [10, 5], [15, 15]])

ekf = run_localization(
    landmarks, std_vel=0.1, std_steer=np.radians(1),
    std_range=0.3, std_bearing=0.1)
print('Final P:', ekf.P.diagonal())
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
I have plotted the landmarks as solid squares. The path of the robot is drawn with a solid black line. The covariance ellipses for the predict step are light gray, and the covariances of the update are shown in green. To make them visible at this scale I have set the ellipse boundary at $6\sigma$.

From this we can see that there is a lot of uncertainty added by our motion model, and that most of the error is in the direction of motion; the shape of the gray ellipses shows this. After a few steps we can see that the filter incorporates the landmark measurements.

I used the same initial conditions and landmark locations as in the UKF chapter. You can see both in the plot and in the printed final value for $\mathbf P$ that the UKF achieves much better accuracy in terms of the error ellipse, but both perform roughly as well as far as their estimate for $\mathbf x$ is concerned. Now let's add another landmark.
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]])

ekf = run_localization(
    landmarks, std_vel=0.1, std_steer=np.radians(1),
    std_range=0.3, std_bearing=0.1)
plt.show()
print('Final P:', ekf.P.diagonal())
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
The uncertainty in the estimates near the end of the track is smaller with the additional landmark. We can see what a dramatic effect multiple landmarks have on our uncertainty by re-running the filter with only the first two landmarks.
ekf = run_localization(
    landmarks[0:2], std_vel=1.e-10, std_steer=1.e-10,
    std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
The estimate quickly diverges from the robot's path after passing the landmarks. The covariance also grows quickly. Let's see what happens with only one landmark:
ekf = run_localization(
    landmarks[0:1], std_vel=1.e-10, std_steer=1.e-10,
    std_range=1.4, std_bearing=.05)
print('Final P:', ekf.P.diagonal())
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
As you probably suspected, only one landmark produces a very bad result. Conversely, a large number of landmarks allows us to make very accurate estimates.
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5], [15, 10],
                   [10, 14], [23, 14], [25, 20], [10, 20]])

ekf = run_localization(
    landmarks, std_vel=0.1, std_steer=np.radians(1),
    std_range=0.3, std_bearing=0.1, ylim=(0, 21))
print('Final P:', ekf.P.diagonal())
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
## Discussion

I said that this was a 'real' problem, and in some ways it is. I've seen alternative presentations that used robot motion models that led to much easier Jacobians. On the other hand, my model of an automobile's movement is itself simplistic in several ways. First, it uses a bicycle model: a real car has two sets of tires, and each travels on a different radius. The wheels do not grip the surface perfectly. I also assumed that the robot responds instantaneously to the control input. Sebastian Thrun writes in *Probabilistic Robotics* that simplified models are justified because the filters perform well when used to track real vehicles. The lesson here is that while you have to have a reasonably accurate nonlinear model, it does not need to be perfect to operate well. As a designer you will need to balance the fidelity of your model with the difficulty of the math and the computation required to implement the equations.

Another way in which this problem was simplistic is that we assumed that we knew the correspondence between the landmarks and measurements. But suppose we are using radar - how would we know that a specific signal return corresponded to a specific building in the local scene? This question hints at SLAM algorithms - simultaneous localization and mapping. SLAM is not the point of this book, so I will not elaborate on this topic.

## UKF vs EKF

I implemented this tracking problem using the UKF in the previous chapter. The difference in implementation should be very clear. Computing the Jacobians for the state and measurement models was not trivial despite a rudimentary motion model. I am justified in using this model because research resulting from the DARPA car challenges has shown that it works well in practice, but a different problem could result in a Jacobian which is difficult or impossible to derive analytically. In contrast, the UKF only requires you to provide a function that computes the system motion model and another for the measurement model.

There are many cases where the Jacobian cannot be found analytically. The details are beyond the scope of this book, but you will have to use numerical methods to compute the Jacobian. That is a very nontrivial undertaking, and you will spend a significant portion of a master's degree at a STEM school learning techniques to handle such situations. Even then you'll likely only be able to solve problems related to your field - an aeronautical engineer learns a lot about Navier-Stokes equations, but not much about modelling chemical reaction rates.

So, UKFs are easy. Are they accurate? In practice they often perform better than the EKF. You can find plenty of research papers showing that the UKF outperforms the EKF in various problem domains. It's not hard to understand why this would be true: the EKF works by linearizing the system model and measurement model at a single point, while the UKF uses $2n+1$ points.

Let's look at a specific example. Take $f(x) = x^3$ and pass a Gaussian distribution through it. I will compute an accurate answer using a Monte Carlo simulation: I generate 50,000 points randomly distributed according to the Gaussian, pass each through $f(x)$, then compute the mean and variance of the result.

First, let's see how the EKF fares. The EKF linearizes the function by taking the derivative and evaluating it at the mean $x$ to get the slope of the tangent to the function at that point. This slope becomes the linear function that we use to transform the Gaussian. Here is a plot of that.
import nonlinear_plots

nonlinear_plots.plot_ekf_vs_mc()
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
We can see from both the graph and the printout at the bottom that the EKF has introduced quite a bit of error.

In contrast, here is the performance of the UKF:
nonlinear_plots.plot_ukf_vs_mc(alpha=0.001, beta=3., kappa=1.)
_____no_output_____
CC-BY-4.0
11-Extended-Kalman-Filters.ipynb
galuardi/Kalman-and-Bayesian-Filters-in-Python
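If you don't have the `nonlinear_plots` helper handy, the same comparison only takes a few lines. This sketch propagates $\mathcal{N}(1, 1)$ through $f(x)=x^3$, once with a Monte Carlo simulation and once with the EKF's first-order linearization; the prior's mean and variance are arbitrary choices:

```python
import numpy as np

mean, var = 1., 1.                     # arbitrary prior Gaussian N(1, 1)
xs = np.random.normal(mean, np.sqrt(var), 50000)
fx = xs**3                             # pass every sample through f(x) = x^3

# Monte Carlo estimate of the true transformed mean and variance
print('MC : mean={:.3f}, var={:.3f}'.format(fx.mean(), fx.var()))

# EKF-style linearization: slope of f at the mean is f'(mean) = 3*mean**2
slope = 3 * mean**2
print('EKF: mean={:.3f}, var={:.3f}'.format(mean**3, slope**2 * var))
```

The Monte Carlo mean comes out near 4 (for $\mathcal{N}(1,1)$, $E[x^3] = \mu^3 + 3\mu\sigma^2 = 4$), while the linearization predicts 1 - exactly the kind of error the plots above illustrate.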
!nvidia-smi

!git clone https://github.com/venkat2319/MIRnet
%cd MIRNet
!pip install -qq wandb

from glob import glob
import tensorflow as tf
from mirnet.train import LowLightTrainer
from mirnet.utils import init_wandb, download_dataset

tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)

download_dataset('LOL')

init_wandb(
    project_name='mirnet',
    experiment_name='LOL_lowlight_experiment_2_256x256',
    wandb_api_key='cf0947ccde62903d4df0742a58b8a54ca4c11673'
)

trainer = LowLightTrainer()

train_low_light_images = glob('./our485/low/*')
train_high_light_images = glob('./our485/high/*')
valid_low_light_images = glob('./eval15/low/*')
valid_high_light_images = glob('./eval15/high/*')

trainer.build_dataset(
    train_low_light_images, train_high_light_images,
    valid_low_light_images, valid_high_light_images,
    crop_size=256, batch_size=2
)

trainer.compile()
trainer.train(epochs=100, checkpoint_dir='./checkpoints')

from glob import glob
from google.colab import files

for file in glob('/content/MIRNet/checkpoints/*'):
    files.download(file)
_____no_output_____
Apache-2.0
notebook/MIRNet_Low_Light_Train.ipynb
venkat2319/MIRnet
Using ObjectScript in a notebookThis notebook uses a kernel written in Python, which plugs into Jupyter to enable execution of ObjectScript inside IRIS. See `misc/kernels/objectscript/*` and `src/ObjectScript/Kernel/CodeExecutor.cls` for how this is done.Indenting each line with at least one space allows InterSystems Language Server to recognize the ObjectScript INT code correctly.
 Set hello = "helloworld2"
 zw hello
hello="helloworld2"
MIT
src/Notebooks/ObjectScript.ipynb
gjsjohnmurray/iris-python-template