repo_name | path | license | content
---|---|---|---|
google/starthinker | colabs/dv360_api_insert_from_bigquery.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: 1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
"""
Explanation: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
End of explanation
"""
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
"""
Explanation: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
End of explanation
"""
FIELDS = {
'auth_write': 'user', # Credentials used for writing data.
'insert': '',
'auth_read': 'service', # Credentials used for reading data.
'dataset': '', # Google BigQuery dataset to create tables in.
'table': '', # Google BigQuery table to read rows from.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 4. Enter DV360 API Insert From BigQuery Parameters
Insert DV360 API Endpoints.
1. Specify the name of the dataset and table.
1. Rows will be read and applied as an insert to DV360.
Modify the values below for your use case; this can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dv360_api': {
'auth': 'user',
'insert': {'field': {'name': 'insert','kind': 'choice','choices': ['advertisers','advertisers.campaigns','advertisers.channels','advertisers.channels.sites','advertisers.creatives','advertisers.insertionOrders','advertisers.lineItems','advertisers.locationLists','advertisers.locationLists.assignedLocations','advertisers.negativeKeywordLists','advertisers.negativeKeywordLists.negativeKeywords','floodlightGroups','inventorySourceGroups','partners.channels','users'],'default': ''}},
'bigquery': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 2,'default': '','description': 'Google BigQuery dataset to create tables in.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 3,'default': '','description': 'Google BigQuery table to read rows from.'}},
'as_object': True
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
"""
Explanation: 5. Execute DV360 API Insert From BigQuery
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/io/tutorials/colorspace.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow IO Authors.
End of explanation
"""
!pip install tensorflow-io
"""
Explanation: Color Space Conversions
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/io/tutorials/colorspace"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/colorspace.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/colorspace.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/colorspace.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
In computer vision, the selected color space could have a significant impact on the performance of the model. While RGB is the most common color space, in many situations the model performs better when switching to alternative color spaces such as YUV, YCbCr, XYZ (CIE), etc.
The tensorflow-io package provides a list of color space conversion APIs that can be used to prepare and augment the image data.
Setup
Install required Packages, and restart runtime
End of explanation
"""
!curl -o sample.jpg -L https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg
!ls -ls sample.jpg
"""
Explanation: Download the sample image
The image example used in this tutorial is a cat in the snow, though it could be replaced by any JPEG image.
The following will download the image and save it to local disk as sample.jpg:
End of explanation
"""
import tensorflow as tf
import tensorflow_io as tfio
image = tf.image.decode_jpeg(tf.io.read_file('sample.jpg'))
print(image.shape, image.dtype)
"""
Explanation: Usage
Read Image File
Read and decode the image into a uint8 Tensor of shape (213, 320, 3)
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(image)
plt.axis('off')
plt.show()
"""
Explanation: The image can be displayed by:
End of explanation
"""
grayscale = tfio.experimental.color.rgb_to_grayscale(image)
print(grayscale.shape, grayscale.dtype)
# use tf.squeeze to remove last channel for plt.imshow to display:
plt.figure()
plt.imshow(tf.squeeze(grayscale, axis=-1), cmap='gray')
plt.axis('off')
plt.show()
"""
Explanation: Convert RGB to Grayscale
An RGB image can be converted to Grayscale to reduce the channel from 3 to 1 with tfio.experimental.color.rgb_to_grayscale:
End of explanation
"""
bgr = tfio.experimental.color.rgb_to_bgr(image)
print(bgr.shape, bgr.dtype)
plt.figure()
plt.imshow(bgr)
plt.axis('off')
plt.show()
"""
Explanation: Convert RGB to BGR
Some image software and camera manufacturers might prefer BGR, which can be obtained through tfio.experimental.color.rgb_to_bgr:
End of explanation
"""
# convert to float32
image_float32 = tf.cast(image, tf.float32) / 255.0
xyz_float32 = tfio.experimental.color.rgb_to_xyz(image_float32)
# convert back uint8
xyz = tf.cast(xyz_float32 * 255.0, tf.uint8)
print(xyz.shape, xyz.dtype)
plt.figure()
plt.imshow(xyz)
plt.axis('off')
plt.show()
"""
Explanation: Convert RGB to CIE XYZ
CIE XYZ (or CIE 1931 XYZ) is a common color space used in many image processing programs. The following is the conversion from RGB to CIE XYZ through tfio.experimental.color.rgb_to_xyz. Note tfio.experimental.color.rgb_to_xyz assumes floating-point input in the range of [0, 1], so additional pre-processing is needed:
End of explanation
"""
ycbcr = tfio.experimental.color.rgb_to_ycbcr(image)
print(ycbcr.shape, ycbcr.dtype)
plt.figure()
plt.imshow(ycbcr, cmap='gray')
plt.axis('off')
plt.show()
"""
Explanation: Convert RGB to YCbCr
Finally, YCbCr is the default color space in many video systems. Converting to YCbCr could be done through tfio.experimental.color.rgb_to_ycbcr:
End of explanation
"""
y, cb, cr = ycbcr[:,:,0], ycbcr[:,:,1], ycbcr[:,:,2]
# Y' component
plt.figure()
plt.imshow(y, cmap='gray')
plt.axis('off')
plt.show()
# Cb component
plt.figure()
plt.imshow(cb, cmap='gray')
plt.axis('off')
plt.show()
# Cr component
plt.figure()
plt.imshow(cr, cmap='gray')
plt.axis('off')
plt.show()
"""
Explanation: What is more interesting, though, is that YCbCr could be decomposed into Y' (luma), Cb (blue-difference chroma), and Cr (red-difference chroma) components, with each component carrying perceptually meaningful information:
End of explanation
"""
|
google/starthinker | colabs/iam.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: Project IAM
Sets project permissions for an email.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'role':'', # projects/[project name]/roles/[role name]
'email':'', # Email address to grant role to.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter Project IAM Recipe Parameters
Provide a role in the form of projects/[project name]/roles/[role name]
Enter an email to grant that role to.
This only grants roles, you must remove them from the project manually.
Modify the values below for your use case; this can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'iam':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'role':{'field':{'name':'role','kind':'string','order':1,'default':'','description':'projects/[project name]/roles/[role name]'}},
'email':{'field':{'name':'email','kind':'string','order':2,'default':'','description':'Email address to grant role to.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute Project IAM
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
chetan51/nupic.research | projects/dynamic_sparse/notebooks/ExperimentAnalysis-Neurips-debug-hebbianANDmagnitude-opposite.ipynb | gpl-3.0 | %load_ext autoreload
%autoreload 2
import sys
sys.path.append("../../")
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from dynamic_sparse.common.browser import *
"""
Explanation: Experiment:
Opposite of Hebbian Learning: Hebbian Learning by pruning the highest coactivation, instead of the lowest.
Opposite of Hebbian Growth: grow connections by allowing gradient flow on connections with the lowest coactivation, instead of the highest
Motivation.
Verify the relevance of highest coactivated units, by checking their impact on the model when they are pruned
Verify the relevance of lowest coactivated units, by checking their impact on the model when they are added to the model
Conclusions:
The opposite logic of hebbian pruning, when weight pruning is set to 0, clearly affects the model performance.
Acc when full pruning is done at each state is 0.965 {(1,0), (0,1), (1,1)}
Acc with no pruning is 0.977 {(0,0)}
Best acc is still with only magnitude based pruning {(0,0.2), (0, 0.4)}
Opposite of hebbian pruning (removing connections with the highest coactivation) alone is harmful to the model, with acc equal to or worse than full pruning, even with pruning as low as 0.2
Opposite random growth (adding connections with lowest activation) reduces acc by ~ 0.02
End of explanation
"""
exps = ['neurips_debug_test10', 'neurips_debug_test11']
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
# replace hebbian prune NaNs with 0
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
"""
Explanation: Load and check data
End of explanation
"""
# Did any trials failed?
df[df["epochs"]<30]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
"""
Explanation: ## Analysis
Experiment Details
End of explanation
"""
random_grow = (df['hebbian_grow'] == False)
agg(['hebbian_prune_perc'], random_grow)
agg(['weight_prune_perc'], random_grow)
"""
Explanation: What is the impact of removing connections with highest coactivation
End of explanation
"""
pd.pivot_table(df[random_grow],
index='hebbian_prune_perc',
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
"""
Explanation: What is the optimal combination of both
End of explanation
"""
# with and without hebbian grow
agg('hebbian_grow')
# with and without hebbian grow
pd.pivot_table(df,
index=['hebbian_grow', 'hebbian_prune_perc'],
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
"""
Explanation: The opposite logic of hebbian pruning, when weight pruning is set to 0, clearly affects the model performance.
Acc when full pruning is done at each state is 0.965 {(1,0), (0,1), (1,1)}
Acc with no pruning is 0.977 {(0,0)}
Best acc is still with only magnitude based pruning {(0,0.2), (0, 0.4)}
Opposite of hebbian pruning (removing connections with the highest coactivation) alone is harmful to the model, with acc equal to or worse than full pruning, even with pruning as low as 0.2
What is the impact of adding connections with the lowest coactivation
End of explanation
"""
|
choderalab/MSMs | initial_ipynbs/Abl_longsim_initial_MSM.ipynb | gpl-2.0 | #Import libraries
import matplotlib.pyplot as plt
import mdtraj as md
import glob
import numpy as np
from msmbuilder.dataset import dataset
%pylab inline
#Import longest trajectory.
t = md.load("run0-clone35.h5")
"""
Explanation: Analysis of large set of Abl simulations on Folding@home (project 10468), one starting configuration
May 1, 2015
This is some initial MSM building for the Abl simulations.
Section 0: Longest Sim
End of explanation
"""
frame = np.arange(len(t))[:, np.newaxis]
# Using 0.25 so that units are in ns.
time = frame * .250
sim_time = time[-1] * 1e-3
print "Length of this longest simulation of Abl is %s us." % ''.join(map(str, sim_time))
rmsd = md.rmsd(t,t,frame=0)
plt.plot(time, rmsd)
plt.xlabel('time (ns)')
plt.ylabel('RMSD(nm)')
plt.title('RMSD')
"""
Explanation: The timestep for these simulations is 2 fs (can be found in /data/choderalab/fah/initial-models/projects/ABL1_HUMAN_D0_V1/RUN0/integrator.xml [stepSize=".002"]).
Assuming the write frequency is every 125000 steps (can't find project.xml, assuming same as for MEK etc. projects). This means that each frame is 250 ps.
End of explanation
"""
# For now making dir long_sims in bash using:
# > for file in $(find * -type f -size +300000); do cp $file long_sims/$file; done
filenames = glob.glob("run0*.h5")
trajectories = [md.load(filename) for filename in filenames]
len(trajectories)
No_sims = len(trajectories)
print "There are %s sims in this. The shortest one is run0-clone338.h5." % No_sims
t_long_min = md.load("run0-clone338.h5")
frame = np.arange(len(t_long_min))[:, np.newaxis]
# Using 0.25 so that units are in ns.
time = frame * .250
sim_time = time[-1] * 1e-3
print "Length of run0-clone338.h5 %s us." % ''.join(map(str, sim_time))
#NOT DOING THIS FOR NOW
#frame = np.arange(len(trajectories))[:, np.newaxis]
# Using 0.25 so that units are in ns.
#time = frame * .250
#sim_time = time[-1] * 1e-3
#print "The total length of all these long sims is %s us." % ''.join(map(str, sim_time))
"""
Explanation: Load all trajectories > 1 us.
How many frames is 1us? 1000/.25 = 4000 frames!
End of explanation
"""
from msmbuilder import msm, featurizer, utils, decomposition
# Make dihedral_features
dihedrals = featurizer.DihedralFeaturizer(types=["phi", "psi", "chi2"]).transform(trajectories)
# Make tICA features
tica = decomposition.tICA(n_components = 4)
X = tica.fit_transform(dihedrals)
#Note the default lagtime here is 1 (=250ps),
#which is super short according to lit for building reasonable protein MSM.
Xf = np.concatenate(X)
hexbin(Xf[:,0], Xf[:, 1], bins='log')
title("Dihedral tICA Analysis")
xlabel("Slowest Coordinate")
ylabel("Second Slowest Coordinate")
savefig("abl_10467_msm.png", bbox_inches="tight")
"""
Explanation: Section 1: Building an MSM.
End of explanation
"""
#Load trajectory with ensembler models
t_models = md.load("../../ensembler-models/traj-refine_implicit_md.xtc", top = "../../ensembler-models/topol-renumbered-implicit.pdb")
#Now make dihedrals of this.
dihedrals_models = featurizer.DihedralFeaturizer(types=["phi", "psi", "chi2"]).transform([t_models])
x_models = tica.transform(dihedrals_models)
#do not use fit here because don't want to change tica object, want to use one generated from sims.
#Now plot on the slow MSM features found above.
hexbin(Xf[:,0], Xf[:, 1], bins='log')
plot(x_models[0][:, 0], x_models[0][:, 1], 'o', markersize=5, label="ensembler models", color='white')
title("Dihedral tICA Analysis")
xlabel("Slowest Coordinate")
ylabel("Second Slowest Coordinate")
legend(loc=0)
savefig("abl_10467_msm_wmodels.png", bbox_inches="tight")
"""
Explanation: Section 2: Comparing MSM to Danny's ensembler outputs.
End of explanation
"""
|
SteveDiamond/cvxpy | examples/notebooks/WWW/sparse_solution.ipynb | gpl-3.0 | import cvxpy as cp
import numpy as np
# Fix random number generator so we can repeat the experiment.
np.random.seed(1)
# The threshold value below which we consider an element to be zero.
delta = 1e-8
# Problem dimensions (m inequalities in n-dimensional space).
m = 100
n = 50
# Construct a feasible set of inequalities.
# (This system is feasible for the x0 point.)
A = np.random.randn(m, n)
x0 = np.random.randn(n)
b = A.dot(x0) + np.random.random(m)
"""
Explanation: Computing a sparse solution of a set of linear inequalities
A derivative work by Judson Wilson, 5/11/2014.<br>
Adapted from the CVX example of the same name, by Almir Mutapcic, 2/28/2006.
Topic References:
Section 6.2, Boyd & Vandenberghe "Convex Optimization" <br>
"Just relax: Convex programming methods for subset selection and sparse approximation" by J. A. Tropp
Introduction
We consider a set of linear inequalities
$Ax \preceq b$
which are feasible. We apply two heuristics to find a sparse point $x$ that satisfies these inequalities.
The (standard) $\ell_1$-norm heuristic for finding a sparse solution is:
\begin{array}{ll}
\mbox{minimize} & \|x\|_1 \\
\mbox{subject to} & Ax \preceq b.
\end{array}
The log-based heuristic is an iterative method for finding
a sparse solution, by finding a local optimal point for the problem:
\begin{array}{ll}
\mbox{minimize} & \sum_i \log \left( \delta + \left|x_i\right| \right) \\
\mbox{subject to} & Ax \preceq b,
\end{array}
where $\delta$ is a small threshold value (which determines if a value is close to zero).
We cannot solve this problem since it is a minimization of a concave
function and thus it is not a convex problem. However, we can apply
a heuristic in which we linearize the objective, solve, and re-iterate.
This becomes a weighted $\ell_1$-norm heuristic:
\begin{array}{ll}
\mbox{minimize} & \sum_i W_i \left|x_i\right| \\
\mbox{subject to} & Ax \preceq b,
\end{array}
which in each iteration re-adjusts the weights $W_i$ based on the rule:
$$W_i = 1/(\delta + \left|x_i\right|),$$
where $\delta$ is a small threshold value.
This algorithm is described in papers:
"An affine scaling methodology for best basis selection"<br>
by B. D. Rao and K. Kreutz-Delgado
"Portfolio optimization with linear and fixed transaction costs"<br>
by M. S. Lobo, M. Fazel, and S. Boyd
Generate problem data
End of explanation
"""
# Create variable.
x_l1 = cp.Variable(shape=n)
# Create constraint.
constraints = [A*x_l1 <= b]
# Form objective.
obj = cp.Minimize(cp.norm(x_l1, 1))
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("status: {}".format(prob.status))
# Number of nonzero elements in the solution (its cardinality or diversity).
nnz_l1 = (np.absolute(x_l1.value) > delta).sum()
print('Found a feasible x in R^{} that has {} nonzeros.'.format(n, nnz_l1))
print("optimal objective value: {}".format(obj.value))
"""
Explanation: $\ell_1$-norm heuristic
End of explanation
"""
# Do 15 iterations, allocate variable to hold number of non-zeros
# (cardinality of x) for each run.
NUM_RUNS = 15
nnzs_log = np.array(())
# Store W as a positive parameter for simple modification of the problem.
W = cp.Parameter(shape=n, nonneg=True);
x_log = cp.Variable(shape=n)
# Initial weights.
W.value = np.ones(n);
# Setup the problem.
obj = cp.Minimize( W.T*cp.abs(x_log) ) # sum of elementwise product
constraints = [A*x_log <= b]
prob = cp.Problem(obj, constraints)
# Do the iterations of the problem, solving and updating W.
for k in range(1, NUM_RUNS+1):
# Solve problem.
# The ECOS solver has known numerical issues with this problem
# so force a different solver.
prob.solve(solver=cp.CVXOPT)
# Check for error.
if prob.status != cp.OPTIMAL:
raise Exception("Solver did not converge!")
# Display new number of nonzeros in the solution vector.
nnz = (np.absolute(x_log.value) > delta).sum()
nnzs_log = np.append(nnzs_log, nnz);
print('Iteration {}: Found a feasible x in R^{}'
' with {} nonzeros...'.format(k, n, nnz))
# Adjust the weights elementwise and re-iterate
W.value = np.ones(n)/(delta*np.ones(n) + np.absolute(x_log.value))
"""
Explanation: Iterative log heuristic
End of explanation
"""
import matplotlib.pyplot as plt
# Show plot inline in ipython.
%matplotlib inline
# Plot properties.
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.figure(figsize=(6,6))
# Plot the two data series.
plt.plot(range(1,1+NUM_RUNS), nnzs_log, label='log heuristic')
plt.plot((1, NUM_RUNS), (nnz_l1, nnz_l1), linestyle='--', label='l1-norm heuristic')
# Format and show plot.
plt.xlabel('iteration', fontsize=16)
plt.ylabel('number of non-zeros (cardinality)', fontsize=16)
plt.ylim(0,n)
plt.xlim(1,NUM_RUNS)
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()
"""
Explanation: Result plots
The following code plots the result of the $\ell_1$-norm heuristic, as well as the result for each iteration of the log heuristic.
End of explanation
"""
|
tedunderwood/fiction | bert/logistic_regression_baselines.ipynb | mit | # Things that will come in handy
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, f1_score
from collections import Counter
from scipy.stats import pearsonr
import random, glob, csv
"""
Explanation: Logistic models for blog post
This notebook works up some quick and dirty bag-of-words models, to see how much this approach suffers when we cut whole documents into 128- or 256-word chunks.
We're going to use LogisticRegression from scikit-learn, and apply it in three ways:
To whole documents.
To BERT-sized chunks.
Aggregating the votes from BERT-sized chunks to produce a document-level prediction.
End of explanation
"""
raw = pd.read_csv('sentimentdata.tsv', sep = '\t')
fullname = 'sentiment'
raw = raw.sample(frac = 1)
# that is in effect a shuffle
cut = round(len(raw) * .75)
train = raw.iloc[0: cut, : ]
test = raw.iloc[cut : , : ]
lex = Counter()
delchars = ''.join(c for c in map(chr, range(256)) if not c.isalpha())
spaces = ' ' * len(delchars)
punct2space = str.maketrans(delchars, spaces)
def getwords(text):
global punct2space
text = text.replace('<br />', ' ')
words = text.translate(punct2space).split()
return words
def get_dataset(rootfolder):
negpaths = glob.glob(rootfolder + '/neg/*.txt')
pospaths = glob.glob(rootfolder + '/pos/*.txt')
paths = [(0, x) for x in negpaths] + [(1, x) for x in pospaths]
index = 0
lines = []
lex = Counter()
labels = []
texts = []
for label, p in paths:
with open(p) as f:
text = f.read().strip().lower()
words = getwords(text)
for w in words:
lex[w] += 1
labels.append(label)
texts.append(text)
vocab = [x[0] for x in lex.most_common()]
print(vocab[0:10])
df = pd.DataFrame.from_dict({'sent': labels, 'text': texts})
df = df.sample(frac = 1)
# shuffle
return vocab, df
def make_matrix(df, vocab, cut):
lexicon = dict()
for i in range(cut):
lexicon[vocab[i]] = i
y = []
x = []
for i, row in df.iterrows():
y.append(int(row['sent']))
x_row = np.zeros(cut)
words = getwords(row.text)
for w in words:
if w in lexicon:
idx = lexicon[w]
x_row[idx] = x_row[idx] + 1
x_row = x_row / np.sum(len(words))
x.append(x_row)
x = np.array(x)
return x, y
triplets = []
vocab, train_df = get_dataset('/Volumes/TARDIS/aclImdb/train')
print('got training')
dummy, test_df = get_dataset('/Volumes/TARDIS/aclImdb/test')
print('got test')
for cut in range(3200, 5200, 200):
for reg_const in [.00001, .0001, .0003, .001, .01, .1]:
trainingset, train_y = make_matrix(train_df, vocab, cut)
testset, test_y = make_matrix(test_df, vocab, cut)
model = LogisticRegression(C = reg_const)
stdscaler = StandardScaler()
stdscaler.fit(trainingset)
scaledtraining = stdscaler.transform(trainingset)
model.fit(scaledtraining, train_y)
scaledtest = stdscaler.transform(testset)
predictions = [x[1] for x in model.predict_proba(scaledtest)]
predictions = np.round(predictions)
accuracy = accuracy_score(predictions, test_y)
f1 = f1_score(predictions, test_y)
print(cut, reg_const, f1, accuracy)
triplets.append((accuracy, cut, reg_const))
random.shuffle(triplets)
triplets.sort(key = lambda x: x[0])
print(triplets[-1])
"""
Explanation: Modeling whole movie reviews from the IMDb dataset
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
End of explanation
"""
def get_datachunks(filepath):
data = pd.read_csv(filepath, sep = '\t', header = None, names = ['idx', 'sent', 'dummy', 'text'], quoting = csv.QUOTE_NONE)
lex = Counter()
for i, row in data.iterrows():
text = row['text'].strip().lower()
words = getwords(text)
for w in words:
lex[w] += 1
vocab = [x[0] for x in lex.most_common()]
print(vocab[0:10])
df = data.loc[ : , ['sent', 'text']]
return vocab, df
triplets = []
vocab, train_df = get_datachunks('/Users/tunder/Dropbox/fiction/bert/bertdata/train_sentiment.tsv')
print('got training')
dummy, test_df = get_datachunks('/Users/tunder/Dropbox/fiction/bert/bertdata/dev_sentiment.tsv')
print('got test')
for cut in range(2200, 6200, 400):
for reg_const in [.00001, .00005, .0001, .0003, .001]:
trainingset, train_y = make_matrix(train_df, vocab, cut)
testset, test_y = make_matrix(test_df, vocab, cut)
model = LogisticRegression(C = reg_const)
stdscaler = StandardScaler()
stdscaler.fit(trainingset)
scaledtraining = stdscaler.transform(trainingset)
model.fit(scaledtraining, train_y)
scaledtest = stdscaler.transform(testset)
predictions = [x[1] for x in model.predict_proba(scaledtest)]
predictions = np.round(predictions)
accuracy = accuracy_score(predictions, test_y)
f1 = f1_score(predictions, test_y)
print(cut, reg_const, f1, accuracy)
triplets.append((accuracy, cut, reg_const))
random.shuffle(triplets)
triplets.sort(key = lambda x: x[0])
print(triplets[-1])
"""
Explanation: Cut down the reviews to 128-word chunks; how does it perform?
Here I'm using the same data files that were given to BERT.
End of explanation
"""
trainingset, train_y = make_matrix(train_df, vocab, 5200)
testset, test_y = make_matrix(test_df, vocab, 5200)
model = LogisticRegression(C = .0001)
stdscaler = StandardScaler()
stdscaler.fit(trainingset)
scaledtraining = stdscaler.transform(trainingset)
model.fit(scaledtraining, train_y)
scaledtest = stdscaler.transform(testset)
predictions = [x[1] for x in model.predict_proba(scaledtest)]
# make a dataframe
meta = pd.read_csv('bertmeta/dev_rows_sentiment.tsv', sep = '\t')
pred = pd.DataFrame.from_dict({'idx': meta['idx'], 'pred': predictions, 'real': test_y})
pred = pred.set_index('idx')
pred.head()
right = 0
for idx, row in pred.iterrows():
if row['pred'] >= 0.5:
predclass = 1
else:
predclass = 0
if predclass == row['real']:
right += 1
print(right / len(pred))
byvol = meta.groupby('docid')
rightvols = 0
allvols = 0
bertprobs = dict()
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
predicted = []
for idx, row in df.iterrows():
predict = pred.loc[idx, 'pred']
predicted.append(predict)
true_class = row['class']
volmean = sum(predicted) / len(predicted)
if volmean >= 0.5:
predicted_class = 1
else:
predicted_class = 0
if true_class == predicted_class:
rightvols += 1
allvols += 1
print()
print('Overall accuracy:', rightvols / allvols)
"""
Explanation: How much can we improve our chunk-level results by aggregating them?
End of explanation
"""
triplets = []
vocab, train_df = get_datachunks('/Users/tunder/Dropbox/fiction/bert/bertdata/train_Mystery256.tsv')
print('got training')
dummy, test_df = get_datachunks('/Users/tunder/Dropbox/fiction/bert/bertdata/dev_Mystery256.tsv')
print('got test')
for cut in range(2000, 6200, 400):
for reg_const in [.00001, .00005, .0001, .0003, .001]:
trainingset, train_y = make_matrix(train_df, vocab, cut)
testset, test_y = make_matrix(test_df, vocab, cut)
model = LogisticRegression(C = reg_const)
stdscaler = StandardScaler()
stdscaler.fit(trainingset)
scaledtraining = stdscaler.transform(trainingset)
model.fit(scaledtraining, train_y)
scaledtest = stdscaler.transform(testset)
predictions = [x[1] for x in model.predict_proba(scaledtest)]
predictions = np.round(predictions)
accuracy = accuracy_score(predictions, test_y)
f1 = f1_score(predictions, test_y)
print(cut, reg_const, f1, accuracy)
triplets.append((accuracy, cut, reg_const))
random.shuffle(triplets)
triplets.sort(key = lambda x: x[0])
print(triplets[-1])
"""
Explanation: What about the parallel problem for genre?
We use the same data that was passed to BERT.
End of explanation
"""
# best model
trainingset, train_y = make_matrix(train_df, vocab, 6000)
testset, test_y = make_matrix(test_df, vocab, 6000)
model = LogisticRegression(C = .00001)
stdscaler = StandardScaler()
stdscaler.fit(trainingset)
scaledtraining = stdscaler.transform(trainingset)
model.fit(scaledtraining, train_y)
scaledtest = stdscaler.transform(testset)
predictions = [x[1] for x in model.predict_proba(scaledtest)]
# make a dataframe
meta = pd.read_csv('bertmeta/dev_rows_Mystery256.tsv', sep = '\t')
pred = pd.DataFrame.from_dict({'idx': meta['idx'], 'pred': predictions, 'real': test_y})
pred = pred.set_index('idx')
pred.head()
byvol = meta.groupby('docid')
rightvols = 0
allvols = 0
bertprobs = dict()
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
predicted = []
for idx, row in df.iterrows():
predict = pred.loc[idx, 'pred']
predicted.append(predict)
true_class = row['class']
volmean = sum(predicted) / len(predicted)
if volmean >= 0.5:
predicted_class = 1
else:
predicted_class = 0
if true_class == predicted_class:
rightvols += 1
allvols += 1
print()
print('Overall accuracy:', rightvols / allvols)
"""
Explanation: and now aggregating the genre chunks
End of explanation
"""
|
samgoodgame/sf_crime | iterations/Error Analysis/W207_Final_Project_errorAnalysis_updated_08_21_1930.ipynb | mit | # Additional Libraries
%matplotlib inline
import matplotlib.pyplot as plt
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
"""
Explanation: Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
Environment and Data
End of explanation
"""
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()
# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
####
#X = np.around(X, decimals=2)
####
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
# Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare
# crimes from the data for quality issues.
X_minus_trea = X[np.where(y != 'TREA')]
y_minus_trea = y[np.where(y != 'TREA')]
X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
# Separate training, dev, and test data:
test_data, test_labels = X_final[800000:], y_final[800000:]
dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]
train_data, train_labels = X_final[100000:700000], y_final[100000:700000]
calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]
# Create mini versions of the above sets
mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]
mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]
mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]
# Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow
crime_labels = list(set(y_final))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))
#print(len(train_data),len(train_labels))
#print(len(dev_data),len(dev_labels))
print(len(mini_train_data),len(mini_train_labels))
print(len(mini_dev_data),len(mini_dev_labels))
#print(len(test_data),len(test_labels))
print(len(mini_calibrate_data),len(mini_calibrate_labels))
#print(len(calibrate_data),len(calibrate_labels))
"""
Explanation: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
End of explanation
"""
tuned_DT_calibrate_isotonic = RandomForestClassifier(min_impurity_split=1,
n_estimators=100,
bootstrap= True,
max_features=15,
criterion='entropy',
min_samples_leaf=10,
max_depth=None
).fit(train_data, train_labels)
ccv_isotonic = CalibratedClassifierCV(tuned_DT_calibrate_isotonic, method = 'isotonic', cv = 'prefit')
ccv_isotonic.fit(calibrate_data, calibrate_labels)
ccv_predictions = ccv_isotonic.predict(dev_data)
ccv_prediction_probabilities_isotonic = ccv_isotonic.predict_proba(dev_data)
working_log_loss_isotonic = log_loss(y_true = dev_labels, y_pred = ccv_prediction_probabilities_isotonic, labels = crime_labels)
print("Multi-class Log Loss with RF and calibration with isotonic is:", working_log_loss_isotonic)
"""
Explanation: The Best RF Classifier
End of explanation
"""
pd.DataFrame(np.amax(ccv_prediction_probabilities_isotonic, axis=1)).hist()
"""
Explanation: Distribution of Posterior Probabilities
End of explanation
"""
#clf_probabilities, clf_predictions, labels
def error_analysis_calibration(buckets, clf_probabilities, clf_predictions, labels):
"""inputs:
clf_probabilities = clf.predict_proba(dev_data)
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
#buckets = [0.05, 0.15, 0.3, 0.5, 0.8]
#buckets = [0.15, 0.25, 0.3, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]
lLimit = 0
uLimit = 0
for i in range(len(buckets)):
uLimit = buckets[i]
for j in range(clf_probabilities.shape[0]):
if (np.amax(clf_probabilities[j]) > lLimit) and (np.amax(clf_probabilities[j]) <= uLimit):
if clf_predictions[j] == labels[j]:
correct[i] += 1
total[i] += 1
lLimit = uLimit
#here we report the classifier accuracy for each posterior probability bucket
accuracies = []
for k in range(len(buckets)):
print(1.0*correct[k]/total[k])
accuracies.append(1.0*correct[k]/total[k])
print('p(pred) <= %.13f total = %3d correct = %3d accuracy = %.3f' \
%(buckets[k], total[k], correct[k], 1.0*correct[k]/total[k]))
f = plt.figure(figsize=(15,8))
plt.plot(buckets,accuracies)
plt.title("Calibration Analysis")
plt.xlabel("Posterior Probability")
plt.ylabel("Classifier Accuracy")
return buckets, accuracies
buckets = [0.2, 0.25, 0.3, 0.4, 0.5, 0.7, 0.9, 1.0]
calibration_buckets, calibration_accuracies = \
error_analysis_calibration(buckets, \
clf_probabilities=ccv_prediction_probabilities_isotonic, \
clf_predictions=ccv_predictions, \
labels=dev_labels)
"""
Explanation: Error Analysis: Calibration
End of explanation
"""
def error_analysis_classification_report(clf_predictions, labels):
"""inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
print('Classification Report:')
report = classification_report(labels, clf_predictions)
print(report)
return report
classificationReport = error_analysis_classification_report(clf_predictions=ccv_predictions, \
labels=dev_labels)
"""
Explanation: The fact that the classifier accuracy is higher for predictions with a higher posterior probability shows that our model is strongly calibrated. However, the distribution of these posterior probabilities shows that our classifier rarely has a 'confident' prediction.
Error Analysis: Classification Report
End of explanation
"""
def error_analysis_confusion_matrix(label_names, clf_predictions, labels):
"""inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
cm = pd.DataFrame(confusion_matrix(labels, clf_predictions, labels=label_names))
cm.columns=label_names
cm.index=label_names
cm.to_csv(path_or_buf="./confusion_matrix.csv")
#print(cm)
return cm
error_analysis_confusion_matrix(label_names=crime_labels, clf_predictions=ccv_predictions, \
labels=dev_labels)
"""
Explanation: The classification report shows that the model still has issues of every sort with regards to accuracy-- both false positives and false negatives are an issue across many classes.
The relatively high recall scores for larceny/theft and prostitution are noticeable, showing that our model had fewer false negatives for these two classes. However, their accuracies are still low.
Error Analysis: Confusion Matrix
End of explanation
"""
|
kaushikpavani/neural_networks_in_python | src/linear_regression/linear_regression.ipynb | mit | def generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise):
# randomly select x
x = np.random.uniform(-abs_value, abs_value, num_points)
# y = mx + b + noise
y = slope*x + intercept + np.random.uniform(-abs_noise, abs_noise, num_points)
return x, y
def plot_points(x,y):
plt.scatter(x, y)
plt.title('Scatter plot of x and y')
plt.xlabel('x')
plt.ylabel('y')
slope = 4
intercept = -3
num_points = 20
abs_value = 4
abs_noise = 2
x, y = generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise)
plot_points(x, y)
"""
Explanation: Given a 2D set of points spanned by the $x$ and $y$ axes, we will try to fit a line that best approximates the data. The equation of the line, in slope-intercept form, is defined by: $y = mx + b$.
End of explanation
"""
# this function computes gradient with respect to slope m
def grad_m (x, y, m, b):
return np.sum(np.multiply(-2*(y - (m*x + b)), x))
# this function computes gradient with respect to intercept b
def grad_b (x, y, m, b):
return np.sum(-2*(y - (m*x + b)))
# Performs gradient descent
def gradient_descent (x, y, num_iterations, learning_rate):
# Initialize m and b
m = np.random.uniform(-1, 1, 1)
b = np.random.uniform(-1, 1, 1)
# Update m and b in direction opposite to that of the gradient to minimize loss
for i in range(num_iterations):
m = m - learning_rate * grad_m (x, y, m, b)
b = b - learning_rate * grad_b (x, y, m, b)
# Return final slope and intercept
return m, b
# Plot point along with the best fit line
def plot_line (m, b, x, y):
plot_points(x,y)
plt.plot(x, x*m + b, 'r')
plt.show()
# In general, keep num_iterations high and learning_rate low.
num_iterations = 1000
learning_rate = 0.0001
m, b = gradient_descent (x, y, num_iterations, learning_rate)
plot_line (m, b, x, y)
plt.show()
"""
Explanation: If $N$ = num_points, then the error in fitting a line to the points (also defined as Cost, $C$) can be defined as:
$C = \sum_{i=1}^{N} (y_i-(mx_i+b))^2$
To perform gradient descent, we need the partial derivatives of Cost $C$ with respect to slope $m$ and intercept $b$.
$\frac{\partial C}{\partial m} = \sum_{i=1}^{N} -2(y_i-(mx_i+b)) \cdot x_i$
$\frac{\partial C}{\partial b} = \sum_{i=1}^{N} -2(y_i-(mx_i+b))$
End of explanation
"""
|
adamamiller/NUREU17 | LSST/VariableStarClassification/First_Sources.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table as tab
"""
Explanation: Inital Sources
Using the sources at 007.20321 +14.87119 and RA = 20:50:00.91, dec = -00:42:23.8 taken from the NASA/IPAC Infrared Science Archive on 6/22/17.
End of explanation
"""
source_1 = tab.read('source1.tbl', format='ipac') #In order for this to compile properly, these filenames will need to reflect
source_2 = tab.read('source2.tbl', format= 'ipac') #the directory of the user.
"""
Explanation: Read in the two data files. Currently, the *id's are in double format. This differs from the original table's long type, as .read() was having overflow errors
End of explanation
"""
times_1 = source_1[0][:] #date expressed in juilian days
obs_mag_1 = source_1[1][:] #observed magnitude, auto corrected? correlated?
obs_mag_error_1 = source_1[2][:] #error on the observed magnitude
times_2 = source_2[0][:]
obs_mag_2 = source_2[1][:]
obs_mag_error_2 = source_2[2][:]
"""
Explanation: Picking out the relevant data into their own arrays to work with.
End of explanation
"""
plt.errorbar(times_1, obs_mag_1, yerr = obs_mag_error_1, fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "All Oids"')
"""
Explanation: Source 1
As each data file had multiple oid's present, I plotted both the raw file and also the individual sources on their own.
End of explanation
"""
oid_11 = np.where(source_1[3][:] == 33261000001104)
plt.errorbar(times_1[oid_11], obs_mag_1[oid_11], yerr = obs_mag_error_1[oid_11], fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "Oid 33261000001104')
"""
Explanation: Decomposed Oids
End of explanation
"""
oid_12 = np.where(source_1[3][:] == 33262000001431)
plt.errorbar(times_1[oid_12], obs_mag_1[oid_12], yerr = obs_mag_error_1[oid_12], fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "Oid 33262000001431')
"""
Explanation: This oid doesn't seem to have any variability. And, given the plot above, it would seem that these are in fact distinct sources.
End of explanation
"""
plt.errorbar(times_2, obs_mag_2, yerr = obs_mag_error_2, fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "All Oids"')
"""
Explanation: Again, this oid doesn't have any apparent variability.
Source 2
End of explanation
"""
oid_21 = np.where(source_2[3][:] == 226831060005494)
plt.errorbar(times_2[oid_21], obs_mag_2[oid_21], yerr = obs_mag_error_2[oid_21], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 226831060005494"')
"""
Explanation: Decomposed Oids
End of explanation
"""
oid_22 = np.where(source_2[3][:] == 226832060006908)
plt.errorbar(times_2[oid_22], obs_mag_2[oid_22], yerr = obs_mag_error_2[oid_22], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 226832060006908"')
oid_23 = np.where(source_2[3][:] == 26832000005734)
plt.errorbar(times_2[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 26832000005734"')
"""
Explanation: This is just a single point so it is likely to be some sort of outlier or misattributed source.
End of explanation
"""
primary_period_1 = 0.191486 #taken from the NASA Exoplanet Archive Periodogram Service
phase_21 = (times_2 % primary_period_1) / primary_period_1
plt.errorbar(phase_21[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('Phase')
plt.ylabel('Observed mag')
plt.title('Source 2 Periodic Lightcurve For Oid 226832060006908')
"""
Explanation: Folded Lightcurves
For oids 226832060006908 and 26832000005734
End of explanation
"""
primary_period_2 = 2.440220
phase_22 = (times_2 % primary_period_2) / primary_period_2
plt.errorbar(phase_22[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('Phase')
plt.ylabel('Observed mag')
plt.title('Source 2 Periodic Lightcurve For Oid 26832000005734')
"""
Explanation: There may be some periodic variability here. A fit of a cosine might be able to reproduce this data. However, it appears to be scattered fairly randomly.
End of explanation
"""
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex34-Correlations between SOI and SLP, Temperature and Precipitation.ipynb | mit | %matplotlib inline
import numpy as np
import xarray as xr
import pandas as pd
from numba import jit
from functools import partial
from scipy.stats import pearsonr
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# Set some parameters to apply to all plots. These can be overridden
import matplotlib
# Plot size to 12" x 7"
matplotlib.rc('figure', figsize = (15, 7))
# Font size to 14
matplotlib.rc('font', size = 14)
# Do not display top and right frame lines
matplotlib.rc('axes.spines', top = True, right = True)
# Remove grid lines
matplotlib.rc('axes', grid = False)
# Set backgound color to white
matplotlib.rc('axes', facecolor = 'white')
"""
Explanation: ex34-Correlations between SOI and SLP, Temperature and Precipitation
This tutorial will reproduce and extend the NCL:Correlations example with python packages.
Read gridded sea level pressure from the 20th Century Reanalysis
use proxy grid points near Tahiti and Darwin to construct an SOI time series spanning 1950-2018
perform lag-0 correlations between
SOI and SLP
SOI and temperature
SOI and precipitation using GPCP data which spans 1979-2018.
In addition, the significance level (p < 0.01) was dotted on the correlation maps.
End of explanation
"""
filslp = "data/prmsl.mon.mean.nc"
filtmp = "data/air.sig995.mon.mean.nc"
filprc = "data/precip.mon.mean.nc"
"""
Explanation: 1. Basic information
Data files
All of these data are publicly available from NCEP/NCAR Reanalysis 1 and GPCP Version 2.3 Combined Precipitation Data Set
End of explanation
"""
yrStrt = 1950 # manually specify for convenience
yrLast = 2018 # 20th century ends 2018
clStrt = 1950 # reference climatology for SOI
clLast = 1979
yrStrtP = 1979 # 1st year GPCP
yrLastP = yrLast # match 20th century
"""
Explanation: Specification for reference years
End of explanation
"""
latT = -17.6 # Tahiti
lonT = 210.75
latD = -12.5 # Darwin
lonD = 130.83
"""
Explanation: Grid points near Tahiti and Darwin
These two points are used to construct an SOI time series spanning 1950-2018.
End of explanation
"""
# read slp data
ds_slp = xr.open_dataset(filslp).sel(time=slice(str(yrStrt)+'-01-01', str(yrLast)+'-12-31'))
# select grids of T and D
T = ds_slp.sel(lat=latT, lon=lonT, method='nearest')
D = ds_slp.sel(lat=latD, lon=lonD, method='nearest')
# monthly reference climatologies
TClm = T.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31'))
DClm = D.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31'))
# anomalies reference clim
TAnom = T.groupby('time.month') - TClm.groupby('time.month').mean('time')
DAnom = D.groupby('time.month') - DClm.groupby('time.month').mean('time')
# stddev of anomalies over clStrt & clLast
TAnomStd = np.std(TAnom.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31')))
DAnomStd = np.std(DAnom.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31')))
# signal and noise
soi_signal = ((TAnom/TAnomStd) - (DAnom/DAnomStd)).rename({'slp':'SOI'})
"""
Explanation: 2. Calculate SOI Index
End of explanation
"""
@jit(nogil=True)
def pr_cor_corr(x, y):
"""
Uses the scipy stats module to calculate a pearson correlation test
:x vector: Input pixel vector to run tests on
:y vector: The date input vector
"""
# Check NA values
co = np.count_nonzero(~np.isnan(x))
if co < len(y): # If fewer than length of y observations, return np.nan
return np.nan
corr, _ = pearsonr(x, y)
return corr
@jit(nogil=True)
def pr_cor_pval(x, y):
"""
Uses the scipy stats module to calculate a pearson correlation test
:x vector: Input pixel vector to run tests on
:y vector: The date input vector
"""
# Check NA values
co = np.count_nonzero(~np.isnan(x))
if co < len(y): # If fewer than length of y observations return np.nan
return np.nan
# Run the pearson correlation test
_, p_value = pearsonr(x, y)
return p_value
# The function we are going to use for applying our pearson test per pixel
def pearsonr_corr(x, y, func=pr_cor_corr, dim='time'):
# x = Pixel value, y = a vector containing the date, dim == dimension
return xr.apply_ufunc(
func, x , y,
input_core_dims=[[dim], [dim]],
vectorize=True,
output_dtypes=[float]
)
"""
Explanation: 3. lag-0 correlation
At present, I have not found a good way to return multiple xarray.DataArray objects from xarray.apply_ufunc().
Therefore, I have to calculate the Pearson correlation twice, which wastes half the time.
3.1 Functions to calculate Pearson correlation
End of explanation
"""
ds_tmp = xr.open_dataset(filtmp).sel(time=slice(str(yrStrt)+'-01-01', str(yrLast)+'-12-31'))
ds_prc = xr.open_dataset(filprc).sel(time=slice(str(yrStrtP)+'-01-01', str(yrLastP)+'-12-31'))
# slp
da_slp = ds_slp.slp.stack(point=('lat', 'lon')).groupby('point')
slp_corr = pearsonr_corr(da_slp, soi_signal.SOI).unstack('point')
slp_pval = pearsonr_corr(da_slp, soi_signal.SOI, func= pr_cor_pval).unstack('point')
# tmp
da_tmp = ds_tmp.air.stack(point=('lat', 'lon')).groupby('point')
tmp_corr = pearsonr_corr(da_tmp, soi_signal.SOI).unstack('point')
tmp_pval = pearsonr_corr(da_tmp, soi_signal.SOI, func= pr_cor_pval).unstack('point')
# prc
soi_prc = soi_signal.sel(time=slice(str(yrStrtP)+'-01-01', str(yrLastP)+'-12-31'))
da_prc = ds_prc.precip.stack(point=('lat', 'lon')).groupby('point')
prc_corr = pearsonr_corr(da_prc, soi_prc.SOI).unstack('point')
prc_pval = pearsonr_corr(da_prc, soi_prc.SOI, func= pr_cor_pval).unstack('point')
"""
Explanation: 3.2 Calculate lag-0 correlation between SOI and (slp, temperature, precipitation), respectively
End of explanation
"""
# Convert to pandas.dataframe
df_soi = soi_signal.to_dataframe().drop('month', axis=1)
# 11-point smoother: Use reflective boundaries to fill out plot
window = 11
weights = [0.0270, 0.05856, 0.09030, 0.11742, 0.13567,
0.1421, 0.13567, 0.11742, 0.09030, 0.05856,
0.027]
ewma = partial(np.average, weights=weights)
rave = df_soi.rolling(window).apply(ewma).fillna(df_soi)
fig, ax = plt.subplots()
rave.plot(ax=ax, color='black', alpha=1.00, linewidth=2, legend=False)
d = rave.index
ax.fill_between(d, 0, rave['SOI'],
where=rave['SOI'] >0,
facecolor='blue', alpha=0.75, interpolate=True)
ax.fill_between(d, 0, rave['SOI'],
where=rave['SOI']<0,
facecolor='red', alpha=0.75, interpolate=True)
_ = ax.set_ylim(-5, 5)
_ = ax.set_title('SOI: %s-%s \n Based on NCEP/NCAR Reanalysis 1' %(str(yrStrt), str(yrLast)))
_ = ax.set_xlabel('')
"""
Explanation: 4. Visualization
4.1 SOI
End of explanation
"""
lons, lats = np.meshgrid(slp_corr.lon, slp_corr.lat)
sig_area = np.where(slp_pval < 0.01)
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
slp_corr.plot(ax=ax, vmax=0.7, vmin=-0.7, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
_ = ax.scatter(lons[sig_area], lats[sig_area], marker = '.', s = 1, c = 'k', alpha = 0.6, transform = ccrs.PlateCarree())
ax.set_title('Correlation Between SOI and SLP (%s-%s) \n Based on NCEP/NCAR Reanalysis 1 \n p < 0.01 has been dotted' %(str(yrStrt), str(yrLast)))
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
"""
Explanation: 4.2 Correlations maps
4.2.1 SOI vs. SLP
Mask correlation with pvalues
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
slp_corr.where(slp_pval < 0.01).plot(ax=ax, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
ax.set_title('SOI SLP')
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
End of explanation
"""
lons, lats = np.meshgrid(tmp_corr.lon, tmp_corr.lat)
sig_area = np.where(tmp_pval < 0.01)
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
tmp_corr.plot(ax=ax, vmax=0.7, vmin=-0.7, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
_ = ax.scatter(lons[sig_area], lats[sig_area], marker = '.', s = 1, c = 'k', alpha = 0.6, transform = ccrs.PlateCarree())
ax.set_title('Correlation Between SOI and TMP (%s-%s) \n Based on NCEP/NCAR Reanalysis 1 \n p < 0.01 has been dotted' %(str(yrStrt), str(yrLast)))
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
"""
Explanation: 4.2.2 SOI vs. TMP
End of explanation
"""
lons, lats = np.meshgrid(prc_corr.lon, prc_corr.lat)
sig_area = np.where(prc_pval < 0.01)
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
prc_corr.plot(ax=ax, vmax=0.7, vmin=-0.7, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
_ = ax.scatter(lons[sig_area], lats[sig_area], marker = '.', s = 1, c = 'k', alpha = 0.6, transform = ccrs.PlateCarree())
ax.set_title('Correlation Between SOI and GPCP Precipitation (%s-%s) \n Based on NCEP/NCAR Reanalysis 1 \n p < 0.01 has been dotted' %(str(yrStrtP), str(yrLastP)))
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
"""
Explanation: 4.2.3 SOI vs. PRC
End of explanation
"""
|
projectmesa/mesa-examples | examples/ForestFire/Forest Fire Model.ipynb | apache-2.0 | import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from mesa import Model, Agent
from mesa.time import RandomActivation
from mesa.space import Grid
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
"""
Explanation: The Forest Fire Model
A rapid introduction to Mesa
The Forest Fire Model is one of the simplest examples of a model that exhibits self-organized criticality.
Mesa is a new, Pythonic agent-based modeling framework. A big advantage of using Python is that it is a great language for interactive data analysis. Unlike some other ABM frameworks, with Mesa you can write a model, run it, and analyze it all in the same environment. (You don't have to, of course. But you can).
In this notebook, we'll go over a rapid-fire (pun intended, sorry) introduction to building and analyzing a model with Mesa.
First, some imports. We'll go over what all the Mesa ones mean just below.
End of explanation
"""
class TreeCell(Agent):
'''
A tree cell.
Attributes:
x, y: Grid coordinates
condition: Can be "Fine", "On Fire", or "Burned Out"
unique_id: (x,y) tuple.
unique_id isn't strictly necessary here, but it's good practice to give one to each
agent anyway.
'''
def __init__(self, model, pos):
'''
Create a new tree.
Args:
pos: The tree's coordinates on the grid. Used as the unique_id
'''
super().__init__(pos, model)
self.pos = pos
self.unique_id = pos
self.condition = "Fine"
def step(self):
'''
If the tree is on fire, spread it to fine trees nearby.
'''
if self.condition == "On Fire":
neighbors = self.model.grid.get_neighbors(self.pos, moore=False)
for neighbor in neighbors:
if neighbor.condition == "Fine":
neighbor.condition = "On Fire"
self.condition = "Burned Out"
"""
Explanation: Building the model
Most models consist of basically two things: agents, and a world for the agents to be in. The Forest Fire model has only one kind of agent: a tree. A tree can either be unburned, on fire, or already burned. The environment is a grid, where each cell can either be empty or contain a tree.
First, let's define our tree agent. The agent needs to be assigned x and y coordinates on the grid, and that's about it. We could assign agents a condition to be in, but for now let's have them all start as being 'Fine'. Since the agent doesn't move, and there is at most one tree per cell, we can use a tuple of its coordinates as a unique identifier.
Next, we define the agent's step method. This gets called whenever the agent needs to act in the world and takes the model object to which it belongs as an input. The tree's behavior is simple: If it is currently on fire, it spreads the fire to any trees above, below, to the left and the right of it that are not themselves burned out or on fire; then it burns itself out.
End of explanation
"""
class ForestFire(Model):
'''
Simple Forest Fire model.
'''
def __init__(self, height, width, density):
'''
Create a new forest fire model.
Args:
height, width: The size of the grid to model
density: What fraction of grid cells have a tree in them.
'''
# Initialize model parameters
self.height = height
self.width = width
self.density = density
# Set up model objects
self.schedule = RandomActivation(self)
self.grid = Grid(height, width, torus=False)
self.dc = DataCollector({"Fine": lambda m: self.count_type(m, "Fine"),
"On Fire": lambda m: self.count_type(m, "On Fire"),
"Burned Out": lambda m: self.count_type(m, "Burned Out")})
# Place a tree in each cell with Prob = density
for x in range(self.width):
for y in range(self.height):
if random.random() < self.density:
# Create a tree
new_tree = TreeCell(self, (x, y))
# Set all trees in the first column on fire.
if x == 0:
new_tree.condition = "On Fire"
self.grid[y][x] = new_tree
self.schedule.add(new_tree)
self.running = True
def step(self):
'''
Advance the model by one step.
'''
self.schedule.step()
self.dc.collect(self)
# Halt if no more fire
if self.count_type(self, "On Fire") == 0:
self.running = False
@staticmethod
def count_type(model, tree_condition):
'''
Helper method to count trees in a given condition in a given model.
'''
count = 0
for tree in model.schedule.agents:
if tree.condition == tree_condition:
count += 1
return count
"""
Explanation: Now we need to define the model object itself. The main thing the model needs is the grid, which the trees are placed on. But since the model is dynamic, it also needs to include time -- it needs a schedule, to manage the trees activation as they spread the fire from one to the other.
The model also needs a few parameters: how large the grid is and what the density of trees on it will be. Density will be the key parameter we'll explore below.
Finally, we'll give the model a data collector. This is a Mesa object which collects and stores data on the model as it runs for later analysis.
The constructor needs to do a few things. It instantiates all the model-level variables and objects; it randomly places trees on the grid, based on the density parameter; and it starts the fire by setting all the trees on one edge of the grid (x=0) as being On "Fire".
Next, the model needs a step method. Like at the agent level, this method defines what happens every step of the model. We want to activate all the trees, one at a time; then we run the data collector, to count how many trees are currently on fire, burned out, or still fine. If there are no trees left on fire, we stop the model by setting its running property to False.
End of explanation
"""
fire = ForestFire(100, 100, 0.6)
"""
Explanation: Running the model
Let's create a model with a 100 x 100 grid, and a tree density of 0.6. Remember, ForestFire takes the arguments height, width, density.
End of explanation
"""
fire.run_model()
"""
Explanation: To run the model until it's done (that is, until it sets its running property to False) just use the run_model() method. This is implemented in the Model parent object, so we didn't need to implement it above.
End of explanation
"""
results = fire.dc.get_model_vars_dataframe()
"""
Explanation: That's all there is to it!
But... so what? This code doesn't include a visualization, after all.
TODO: Add a MatPlotLib visualization
Remember the data collector? Now we can put the data it collected into a pandas DataFrame:
End of explanation
"""
results.plot()
"""
Explanation: And chart it, to see the dynamics.
End of explanation
"""
fire = ForestFire(100, 100, 0.8)
fire.run_model()
results = fire.dc.get_model_vars_dataframe()
results.plot()
"""
Explanation: In this case, the fire burned itself out after about 90 steps, with many trees left unburned.
You can try changing the density parameter and rerunning the code above, to see how different densities yield different dynamics. For example:
End of explanation
"""
param_set = dict(height=50, # Height and width are constant
width=50,
# Vary density from 0.01 to 1, in 0.01 increments:
density=np.linspace(0,1,101)[1:])
# At the end of each model run, calculate the fraction of trees which are Burned Out
model_reporter = {"BurnedOut": lambda m: (ForestFire.count_type(m, "Burned Out") /
m.schedule.get_agent_count()) }
# Create the batch runner
param_run = BatchRunner(ForestFire, param_set, model_reporters=model_reporter)
"""
Explanation: ... But to really understand how the final outcome varies with density, we can't just tweak the parameter by hand over and over again. We need to do a batch run.
Batch runs
Batch runs, also called parameter sweeps, allow us to systematically vary the density parameter, run the model, and check the output. Mesa provides a BatchRunner object which takes a model class, a dictionary of parameters and the range of values they can take and runs the model at each combination of these values. We can also give it reporters, which collect some data on the model at the end of each run and store it, associated with the parameters that produced it.
For ease of typing and reading, we'll first create the parameters to vary and the reporter, and then assign them to a new BatchRunner.
End of explanation
"""
param_run.run_all()
"""
Explanation: Now the BatchRunner, which we've named param_run, is ready to go. To run the model at every combination of parameters (in this case, every density value), just use the run_all() method.
End of explanation
"""
df = param_run.get_model_vars_dataframe()
df.head()
"""
Explanation: Like with the data collector, we can extract the data the batch runner collected into a dataframe:
End of explanation
"""
plt.scatter(df.density, df.BurnedOut)
plt.xlim(0,1)
"""
Explanation: As you can see, each row here is a run of the model, identified by its parameter values (and given a unique index by the Run column). To view how the BurnedOut fraction varies with density, we can easily just plot them:
End of explanation
"""
param_run = BatchRunner(ForestFire, param_set, iterations=5, model_reporters=model_reporter)
param_run.run_all()
df = param_run.get_model_vars_dataframe()
plt.scatter(df.density, df.BurnedOut)
plt.xlim(0,1)
"""
Explanation: And we see the very clear emergence of a critical value around 0.5, where the model quickly shifts from almost no trees being burned, to almost all of them.
In this case we ran the model only once at each value. However, it's easy to have the BatchRunner execute multiple runs at each parameter combination, in order to generate more statistically reliable results. We do this using the iterations argument.
Let's run the model 5 times at each parameter point, and export and plot the results as above.
End of explanation
"""
|
amueller/scipy-2017-sklearn | notebooks/15.Pipelining_Estimators.ipynb | cc0-1.0 | import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
"""
Explanation: Pipelining estimators
In this section we study how different estimators may be chained.
A simple example: feature extraction and selection before an estimator
Feature extraction: vectorizer
For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features.
To illustrate we load the SMS spam dataset we used earlier.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
"""
Explanation: Previously, we applied the feature extraction manually, like so:
End of explanation
"""
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
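# Under the hood (an illustrative sketch, not scikit-learn's real implementation),
# fitting this two-step pipeline is roughly equivalent to:
def fit_two_step_pipeline(transformer, estimator, X, y):
    transformer.fit(X)                        # fit the first step
    X_transformed = transformer.transform(X)  # build the new representation
    estimator.fit(X_transformed, y)           # fit the final estimator on it
    return transformer, estimator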
"""
Explanation: The situation where we learn a transformation and then apply it to the test data is very common in machine learning.
Therefore scikit-learn has a shortcut for this, called pipelines:
End of explanation
"""
# This illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
"""
Explanation: As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn.
After the first step is fit, it will use the transform method of the first step to create a new representation.
This will then be fed to the fit of the next step, and so on.
Finally, on the last step, only fit is called.
If we call score, only transform will be called on each step - this could be the test set after all! Then, on the last step, score is called with the new representation. The same goes for predict.
Building pipelines not only simplifies the code, it is also important for model selection.
Say we want to grid-search C to tune our Logistic Regression above.
Let's say we do it like this:
End of explanation
"""
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
"""
Explanation: 2.1.2 What did we do wrong?
Here, we did grid-search with cross-validation on X_train. However, when applying TfidfVectorizer, it saw all of the X_train,
not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters.
We can fix this with the pipeline, though:
End of explanation
"""
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
"""
Explanation: Note that we need to tell the pipeline at which step we want to set the parameter C.
We can do this using the special __ syntax. The name before the __ is simply the lowercased name of the class (the step name that make_pipeline assigns), and the part after the __ is the parameter we want to set with grid-search.
<img src="figures/pipeline_cross_validation.svg" width="50%">
Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with GridSearchCV:
End of explanation
"""
# %load solutions/15A_ridge_grid.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load using ``sklearn.datasets.load_boston``). Try adding the ``sklearn.preprocessing.PolynomialFeatures`` transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
</li>
</ul>
</div>
End of explanation
"""
|
philippbayer/cats_dogs_redux | Statefarm.ipynb | mit | %%bash
cut -f 1 -d ',' driver_imgs_list.csv | grep -v subject | uniq -c
lines=$(expr `wc -l driver_imgs_list.csv | cut -f 1 -d ' '` - 1)
echo "Got ${lines} pics"
"""
Explanation: First, make the validation set with different drivers
End of explanation
"""
import csv
import os
to_get = set(['p081','p075', 'p072', 'p066', 'p064'])
with open('driver_imgs_list.csv') as f:
next(f)
for line in csv.reader(f):
if line[0] in to_get:
if os.path.exists('train/%s/%s' %(line[1], line[2])):
os.popen('mv train/%s/%s valid/%s/%s'%(line[1], line[2], line[1], line[2]))
import glob
print('Training has', len(glob.glob('train/*/*jpg')))
print('Validation has', len(glob.glob('valid/*/*jpg')))
"""
Explanation: fastai's statefarm has 3478 pics in validation set and 18946 in training, so let's get something close to that
End of explanation
"""
batch_size = 64
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
trn_batches = get_batches(path+'train', gen_t, batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
from vgg16bn import Vgg16BN
model = vgg_ft_bn(10)
model.compile(optimizer=Adam(1e-3),
loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(trn_batches, trn_batches.N, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.N)
model.optimizer.lr = 1e-5
model.fit_generator(trn_batches, trn_batches.N, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.N)
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
trn_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)
conv_feat = conv_model.predict_generator(trn_batches, trn_batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
#print(type(conv_feat))
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
print(type(conv_feat))
#print(conv_layers[-1].output_shape)
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p = 0.8
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr = 1e-7
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=7,
validation_data=(conv_val_feat, val_labels))
"""
Explanation: now starts the actual work
End of explanation
"""
test_batches = get_batches(path+'test', batch_size=batch_size, shuffle=False, class_mode=None)
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
"""
Explanation: Let's predict on the test set
The following is mashed together from fast.ai
End of explanation
"""
preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)
subm = do_clip(preds,0.93)
subm_name = path+'results/subm.gz'
classes = sorted(trn_batches.class_indices, key=trn_batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
from IPython.display import FileLink
FileLink(subm_name)
"""
Explanation: That took forever (one hour? didn't time it perfectly)
End of explanation
"""
bn_model.save_weights(path+'models/bn_model.h5')
bn_model.load_weights(path+'models/bn_model.h5')
bn_feat = bn_model.predict(conv_feat, batch_size=batch_size)
bn_val_feat = bn_model.predict(conv_val_feat, batch_size=batch_size)
"""
Explanation: Private score: 0.94359
Public score: 1.18213
End of explanation
"""
np.max(bn_feat[:,1])
"""
Explanation: Let's try something else - can I look at what the model predicts for the training set?
Let's have a look at the images with a 'bad' maximum probability, around 50% -
how many training pictures do we have with bad probabilities?
End of explanation
"""
np.where(np.amax(bn_feat, axis=1) < 0.9)
def check_training_picture(bn_feat, filenames, number):
print(bn_feat[number,:])
print(filenames[number])
plt.imshow(mpimg.imread('train/' + filenames[number]))
check_training_picture(bn_feat, filenames, 22)
"""
Explanation: Give me all training pictures that don't have a class 'probability' above 90%
End of explanation
"""
check_training_picture(bn_feat, filenames, 45)
"""
Explanation: This is marked as class0 -
c0: normal driving
c1: texting - right
c2: talking on the phone - right
c3: texting - left
c4: talking on the phone - left
c5: operating the radio
c6: drinking
c7: reaching behind
c8: hair and makeup
c9: talking to passenger
That hand is probably confusing, but it's mostly the correct class.
End of explanation
"""
check_training_picture(bn_feat, filenames, 17421)
"""
Explanation: This doesn't have any 'good' class, everything is low, which is weird - could be the not-straight head angle, but who knows. I just realised that some pictures have a blue tape-like thing on the driver window (see above and below), some pictures don't have that sheet, which is probably confusing.
TODO: find a way to mask that window
End of explanation
"""
to_remove = np.where(np.amax(bn_feat, axis=1) < 0.9)[0]
print(len(to_remove))
print(1580./18587*100)
"""
Explanation: This is marked as 'talking to passenger', but it may as well be c0, driving normally.
Kick out the 'bad' pictures
I believe that these low quality marks 'confuse' the network, so a network trained without those pictures should work slightly better.
End of explanation
"""
to_remove_files = set([filenames[index] for index in to_remove])
list(to_remove_files)[:5]
out = open('weird_files.txt', 'w')
for f in to_remove_files:
    out.write('%s\n'%f)
out.close()
print(path)
%pwd
to_remove_files = [x.rstrip() for x in open('/home/ubuntu/statefarm/train/weird_files.txt')]
len(to_remove_files)
to_remove_files[:5]
%%bash
mkdir weird_ones
mkdir weird_ones/train
for i in {0..9}; do mkdir /home/ubuntu/statefarm/weird_ones/train/c${i}; done
%cd /home/ubuntu/statefarm/train
for l in glob.glob('*/*jpg'):
if l in to_remove_files:
os.popen('mv %s ../weird_ones/train/%s'%(l, l))
%%bash
find . -type f | wc -l
"""
Explanation: 1580 pictures are 'weird', which is not that much compared to our 18587 pictures (roughly 8.5%)
End of explanation
"""
path = "/home/ubuntu/statefarm/"
batch_size = 64
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
trn_batches = get_batches(path+'train', gen_t, batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
from vgg16bn import Vgg16BN
model = vgg_ft_bn(10)
model.compile(optimizer=Adam(1e-3),
loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(trn_batches, trn_batches.N, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.N)
model.optimizer.lr = 1e-5
model.fit_generator(trn_batches, trn_batches.N, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.N)
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
trn_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)
conv_feat = conv_model.predict_generator(trn_batches, trn_batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
#print(conv_layers[-1].output_shape)
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p = 0.8
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr = 1e-5
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=7,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr = 1e-7
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=7,
validation_data=(conv_val_feat, val_labels))
test_batches = get_batches(path+'test', batch_size=batch_size, shuffle=False, class_mode=None)
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
"""
Explanation: OK we removed the weird ones.
End of explanation
"""
preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)
subm = do_clip(preds,0.93)
subm_name = path+'results/subm_woweird.gz'
classes = sorted(trn_batches.class_indices, key=trn_batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
from IPython.display import FileLink
FileLink(subm_name)
"""
Explanation: That took forever (one hour? didn't time it perfectly)
End of explanation
"""
import cv2
# non_max_suppression comes from the imutils package
from imutils.object_detection import non_max_suppression

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = glob.glob('train/*/*jpg')[100]
img = cv2.imread(img)
orig = img.copy()

# Detect people in the frame
(rects, weights) = hog.detectMultiScale(img, winStride=(4, 4), padding=(8, 8), scale=1.05)

# Draw the raw detections on a copy of the image
for (x, y, w, h) in rects:
    cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)

# Collapse overlapping boxes, then draw the final detections on the original
rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
pick = non_max_suppression(rects, probs=None, overlapThresh=0.5)
for (xA, yA, xB, yB) in pick:
    cv2.rectangle(img, (xA, yA), (xB, yB), (0, 255, 0), 2)

#plt.imshow(img)
#cv2.imshow('hi', img)
cv2.imwrite('test.png', img)
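# A possible next step (just a sketch, not something that was actually run):
# crop the frame to the first picked person box before feeding it to the network,
# instead of using the full image.
if len(pick) > 0:
    (xA, yA, xB, yB) = pick[0]
    cropped = img[yA:yB, xA:xB]
    cv2.imwrite('test_cropped.png', cropped)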
"""
Explanation: Private score: 0.92506
Public score: 1.11814
RESULTS
Interestingly, the validation accuracy and validation loss are VERY similar, almost identical to the above. The training accuracy is slightly better.
TODO: Fix the validation problems too
TRYING OUT CUTTING FROM PICTURES
End of explanation
"""
|
drvinceknight/gt | assets/assessment/2021-2022/ind/assignment.ipynb | mit | import nashpy as nash
import numpy as np
np.random.seed(0)
repetitions = 2000
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Game Theory - 2021-2022 individual coursework
Important Do not delete the cells containing:
```
BEGIN SOLUTION
END SOLUTION
```
write your solution attempts in those cells.
To submit this notebook:
Change the name of the notebook from main to: <student_number>. For example, if your student number is c1234567 then change the name of the notebook to c1234567.
Write all your solution attempts in the correct locations;
Do not delete any code that is already in the cells;
Save the notebook (File>Save As);
Follow the instructions given to submit.
Question 1
For each of the following matrices \(A\) and initial populations \(x_0\), for the corresponding normal form game:
create a variable initial_population which has value the corresponding nashpy initial population.
create a variable probabilities which has value the fixation probabilities of the simulated Moran process (using the given repetitions and random seed).
a. \(
A = \begin{pmatrix}1 & 2 \\ 2 & 1\end{pmatrix}
\qquad
x_0 = \begin{pmatrix}10 & 0\end{pmatrix}
\)
Available marks: 2
End of explanation
"""
np.random.seed(0)
repetitions = 2000
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. \(
A = \begin{pmatrix}1 & 2 \\ 2 & 1\end{pmatrix}
\qquad
x_0 = \begin{pmatrix}3 & 0\end{pmatrix}
\)
Available marks: 2
End of explanation
"""
np.random.seed(0)
repetitions = 2000
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: c. \(
A = \begin{pmatrix}1 & 2 & 3 \\ 2 & 1 & 4 \\ 2 & 3 & 1\end{pmatrix}
\qquad
x_0 = \begin{pmatrix}3 & 1 & 2\end{pmatrix}
\)
Available marks: 2
End of explanation
"""
np.random.seed(0)
repetitions = 2000
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: d. \(
A = \begin{pmatrix}1 & 2 & 3 \\ 2 & 1 & 4 \\ 2 & 3 & 1\end{pmatrix}
\qquad
x_0 = \begin{pmatrix}6 & 2 & 4\end{pmatrix}
\)
Available marks: 2
End of explanation
"""
timepoints = np.linspace(0, 1, 10)
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Question 2
a. Consider the replicator dynamics on the population described by the following matrix:
\(A = \begin{pmatrix}4 & 2 \\ 1 & 4\end{pmatrix}\)
Given an initial population vector: \(x=(.5, .5)\) create a variable base_population (as a numpy array) which has value the population vector after 10 timepoints (as given by the given variable timepoints).
Available marks: 4
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. Consider that after these 10 timepoints a new strategy is introduced to the population so that the overall population now is: \(\left(\frac{x_1}{2(x_1 + x_2)}, \frac{x_2}{2(x_1 + x_2)}, \frac{x_1 + x_2}{2(x_1 + x_2)}\right)\) where \((x_1, x_2)\) corresponds to the values of base_population. This strategy:
Obtains a utility of \(100\) against itself.
Obtains a utility of \(5\) against the first strategy, and the first strategy obtains 0 against it.
Obtains a utility of \(7\) against the second strategy and the second strategy obtains 0 against it.
Create:
a variable y0 that has value the new population.
a variable A which has value the payoff matrix that corresponds to the evolutionary process described.
a variable end_population which has value the population vector after 10 timepoints (as given by the given variable timepoints).
Available marks: 6
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Question 3
Consider two retail companies in competition with each other:
The price (set by the market) for the good they produce is given by:
$$\Pi(q_1, q_2) = a - b (q_1 + q_2)$$
For some given value of $a, b$ where $q_1$ is the amount of product company 1 chooses to produce and $q_2$ is the amount of product company 2 chooses to produce.
- The profit of company $i\in\{1, 2\}$ is given by: $q_i\Pi(q_1, q_2) - q_i$.
a. For $a=b=1$ and assuming that $q_i\in\{0, 1, 2, 3, 4, 5\}$ for $i\in\{1, 2\}$, create a matrix A with value corresponding to the utilities for company $1$ and matrix B with value corresponding to the utilities for company $2$. Assume that company $1$ is the row player and company $2$ is the column player.
Available marks: 3
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. Using these matrices and support enumeration, output the expected outcome in the market, i.e. the Nash equilibria, as a list for the associated game.
Available marks: 2
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: c. Create a variable a_particular which has value the value of $a$ which gives a different set of equilibria to when $a=b=1$ (keep $b=1$).
Available marks: 3
End of explanation
"""
A = np.array([[0, 3], [1, 2]])
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: Question 4
For the hawk dove game defined by:
$$
A = \begin{pmatrix}0 & 3 \\ 1 & 2\end{pmatrix}
\qquad
B = A^T
$$
a. Create a variable equilibria which has value the Nash equilibria of the game as a list using support enumeration.
Available marks: 2
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: b. Create a variable seed_which_gives_first which has a value that, when used with the following code, gives a final play count that approximately corresponds to the first element of equilibria (using the default order returned by the support enumeration):
python
np.random.seed(seed_which_gives_first)
iterations = 500
play_counts = game.fictitious_play(iterations=iterations)
Available marks: 4
End of explanation
"""
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: c. Create a variable seed_which_gives_second which has a value that, when used with the following code, gives a final play count that approximately corresponds to the second element of equilibria (using the default order returned by the support enumeration):
python
np.random.seed(seed_which_gives_second)
iterations = 500
play_counts = game.fictitious_play(iterations=iterations)
Available marks: 4
End of explanation
"""
|
PythonFreeCourse/Notebooks | week06/2_Functional_Behavior.ipynb | mit | def square(x):
return x ** 2
"""
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the 'Learning Python' project: a yellow-and-blue cartoon snake weaving between the letters of the course name, with the slogan 'a free project for learning programming in Hebrew'.">
The behavior of functions
In the coming paragraphs we will look at functions from a slightly different angle than usual.
Let's dive right in!
The name of a function
An interesting property of Python is that a function is a value, just like any other value.
Let's define a function that squares a number:
End of explanation
"""
type(square)
"""
Explanation: We can check which type the function is (note that we do not call it with parentheses after its name – we only refer to the name itself):
End of explanation
"""
ribua = square
print(square(5))
print(ribua(5))
"""
Explanation: We can even assign it to a variable, so that the new variable name points to it:
End of explanation
"""
ribua is square
"""
Explanation: What happened in the cell above?
When we defined the function square, we created a "laser pointer" labelled square that points at the function which raises a number to the second power.
In the assignment on the first line of the cell above, the pointer labelled ribua was aimed at the very same function that square points to.
Now square and ribua point to the same function. We can check this like so:
End of explanation
"""
def add(num1, num2):
return num1 + num2
def subtract(num1, num2):
return num1 - num2
def multiply(num1, num2):
return num1 * num2
def divide(num1, num2):
return num1 / num2
functions = [add, subtract, multiply, divide]
"""
Explanation: At this point I will have to ask you to buckle up, because this is not going to be an ordinary trip.
Functions inside composite structures
If a function is just a value, and we can refer to its name anywhere, there is no reason we cannot create a list of functions!
Let's try to implement the idea:
End of explanation
"""
# Option 1
print(add(5, 2))
# Option 2
math_function = functions[0]
print(math_function(5, 2))
# Option 3 (ugly, but works!)
print(functions[0](5, 2))
"""
Explanation: We now have a list of 4 items, each of which points to a different function.
If we want to perform addition, we can call add directly or (for practice) retrieve it from the list we created:
End of explanation
"""
for function in functions:
print(function(5, 2))
"""
Explanation: If we like, we can even go over the list of functions with a loop and call all of them, one after the other:
End of explanation
"""
def calculate(function, num1, num2):
return function(num1, num2)
"""
Explanation: In each iteration of the for loop, the variable function moved on to point at the next function in the functions list.
On the following line we called the function that function points to, and printed the value it returned.
Because a list is a structure that preserves the order of its items, the results are printed in the order in which the functions are stored in the list.
The first result we see is the result of the addition function, the second is the result of the subtraction function, and so on.
Intermediate exercise: settling accounts
Write a function named calc that receives two numbers and the sign of an arithmetic operation as parameters.
The sign can be one of: +, -, * or /.
The function should return the result of the arithmetic expression applied to the two numbers.
In your solution, use the function definitions above together with a dictionary.
Passing a function as a parameter
Let's keep juggling functions.
A function is called a "higher order function" if it receives a function as a parameter.
Take, for example, the function calculate:
End of explanation
"""
calculate(divide, 5, 2)
"""
Explanation: When calling calculate, we need to pass a function and two numbers.
As an example, let's pass the function divide that we defined earlier:
End of explanation
"""
def square(number):
return number ** 2
square_check = apply(square, [5, -1, 6, -8, 0])
tuple(square_check) == (25, 1, 36, 64, 0)
"""
Explanation: What happens here is that we passed the function divide as the first argument.
The parameter function inside calculate now points to the division function we defined above.
Therefore the function returns the result of divide(5, 2) – namely 2.5.
Intermediate exercise: word of mouth
Write a generator named apply that receives a function (func) as its first parameter and an iterable (iter) as its second parameter.
For every item in the iterable, the generator should yield the item after the function func has been applied to it, i.e. func(item).
Make sure that running the next cell returns True for your code:
End of explanation
"""
squared_items = map(square, [1, 6, -1, 8, 0, 3, -3, 9, -8, 8, -7])
print(tuple(squared_items))
"""
Explanation: Interim summary
Wow. That was pretty wild.
So, in fact, functions in Python are values in every sense, just like strings and numbers!
We can store them in variables, pass them as arguments and include them in more complex data structures.
Computer-science theorists gave this kind of behavior a name: "first class citizen".
So we can say that functions in Python are first-class citizens.
Higher-order functions in Python
The good news is that we have already had a first brush with the term higher-order functions.
Now that we know these are functions that receive a function as a parameter, let's start getting our hands dirty.
Here are a few interesting Pythonic functions of this kind:
The map function
The function map receives a function as its first parameter and an iterable as its second parameter.
map applies the function from the first parameter to each of the items passed in the iterable.
It returns an iterator made up of the values returned by those calls.
In other words, map creates a new iterable.
That iterable contains the value returned by the function for every item of the iterable that was passed in.
For example:
End of explanation
"""
def my_map(function, iterable):
for item in iterable:
yield function(item)
"""
Explanation: The function received as its first argument the function square that we defined above, whose purpose is to square a number.
As its second argument it received the list of all the numbers we want the function to run on.
When we passed map these arguments, map gave us back the result as an iterator (a structure we can go over item by item):
the square, i.e. the second power, of every item in the list passed as the second argument.
In fact, we can say that map is equivalent to the following function:
End of explanation
"""
numbers = [(2, 4), (1, 4, 2), (1, 3, 5, 6, 2), (3, )]
sums = map(sum, numbers)
print(tuple(sums))
"""
Explanation: Here is another example of using map:
End of explanation
"""
def add_one(number):
return number + 1
incremented = map(add_one, (1, 2, 3))
print(tuple(incremented))
"""
Explanation: In this case, on every pass the function sum received a single item from the list – a tuple.
It summed the items of each tuple it received, and so returned the sums of all the tuples, one after the other.
And one last example:
End of explanation
"""
def is_mature(age):
return age >= 18
"""
Explanation: In this example we created a function of our own and passed it to map.
The point of this example is to emphasize that there is no difference between passing a function that is built into Python and passing a function that we created ourselves.
Exercise: Write a function that receives a list of two-word strings: a first name and a last name.
The function should use map to return only the first name from each string.
Important! Solve it before you continue!
The filter function
The function filter receives a function as its first parameter and an iterable as its second parameter.
filter applies the function to each item of the iterable, and returns the item only if the value returned by the function is equivalent to True.
If the return value is equivalent to False, the item is "swallowed" by filter and is not returned from it.
In other words, filter creates a new iterable and returns it.
That iterable contains only the items for which the passed-in function returned a value equivalent to True.
For example, let's build a function that reports whether a person is an adult.
The function receives an age as a parameter and returns True when the age passed to it is at least 18, and False otherwise.
End of explanation
"""
ages = [0, 1, 4, 10, 20, 35, 56, 84, 120]
mature_ages = filter(is_mature, ages)
print(tuple(mature_ages))
"""
Explanation: We will define a list of ages and ask filter to sift them using the function we defined:
End of explanation
"""
to_sum = [(1, -1), (2, 5), (5, -3, -2), (1, 2, 3)]
sum_is_not_zero = filter(sum, to_sum)
print(tuple(sum_is_not_zero))
"""
Explanation: As we learned, filter returns only the ages that are 18 or above.
Note that the function we pass to filter does not necessarily have to return True or False.
The value 0, for example, is equivalent to False, so filter will drop any value for which the function returns 0:
End of explanation
"""
to_sum = [0, "", None, 0.0, True, False, "Hello"]
equivalent_to_true = filter(None, to_sum)
print(tuple(equivalent_to_true))
"""
Explanation: In the last cell we passed filter the function sum as the function to apply, and to_sum as the items to act on.
The tuples whose items summed to 0 were filtered out, and we got back an iterator whose items are only those whose sum is different from 0.
As a final trick, filter can also receive None as its first parameter instead of a function.
This makes filter apply no function to the passed items, i.e. filter them as they are.
Items equivalent to True will be returned, and items equivalent to False will not:
End of explanation
"""
def add(num1, num2):
return num1 + num2
"""
Explanation: Exercise: Write a function that receives a list of strings and returns only the palindromic strings in it.
A string is considered a palindrome if reading it right-to-left and left-to-right produces the same expression.
Use filter.
Important! Solve it before you continue!
Anonymous functions
Another trick we will add to our toolbox is anonymous functions.
Don't be put off by the intimidating name – all it means is "functions that have no name".
Before you raise an eyebrow and ask yourselves why they are useful, let's look at a few examples.
Recall the definition of the addition function we created not long ago:
End of explanation
"""
add = lambda num1, num2: num1 + num2
print(add(5, 2))
"""
Explanation: And let's define exactly the same function in its anonymous form:
End of explanation
"""
def is_positive(number):
return number > 0
numbers = [-2, -1, 0, 1, 2]
positive_numbers = filter(is_positive, numbers)
print(tuple(positive_numbers))
"""
Explanation: Before we explain where the "function without a name" part comes in, let's focus on the right-hand side of the assignment.
How is the definition of an anonymous function phrased?
1. We declare that we want to create an anonymous function using the keyword lambda.
2. Immediately after it, we list the names of all the parameters the function will receive, separated by commas.
3. To separate the parameter list from the function's return value, we use a colon.
4. After the colon, we write the expression we want the function to return.
<figure>
<img src="images/lambda.png" style="max-width: 500px; margin-right: auto; margin-left: auto; text-align: center;" alt="The lambda definition from above, annotated: the keyword lambda is the declaration, num1 and num2 are the parameters, and the expression num1 + num2 after the colon is the return value."/>
<figcaption style="margin-top: 2rem; text-align: center;">The parts of an anonymous function definition using the <code>lambda</code> keyword<br><span style="color: white;">A girl has no name</span></figcaption>
</figure>
How is the definition of this function different from the definition of a regular function?
It isn't really different.
The goal is to provide syntax that makes our lives easier when we want to write a tiny, one-line function.
For example, let's see filter used to drop all the items that are not positive:
End of explanation
"""
numbers = [-2, -1, 0, 1, 2]
positive_numbers = filter(lambda n: n > 0, numbers)
print(tuple(positive_numbers))
"""
Explanation: Instead of defining a new function called is_positive, we can use an anonymous function:
End of explanation
"""
closet = [
{'name': 'Peter', 'year_of_birth': 1927, 'gender': 'Male'},
{'name': 'Edmund', 'year_of_birth': 1930, 'gender': 'Male'},
{'name': 'Lucy', 'year_of_birth': 1932, 'gender': 'Female'},
{'name': 'Susan', 'year_of_birth': 1928, 'gender': 'Female'},
{'name': 'Jadis', 'year_of_birth': 0, 'gender': 'Female'},
]
"""
Explanation: How does it work?
Instead of passing filter a function created in advance, we used lambda to create a function right there on that very line.
The function we defined receives a number (n) and returns True if it is positive, or False otherwise.
Note that in this form we really did not need to give the function we defined a name.
The use of anonymous functions is not limited to map and filter, of course.
It is common to use lambda with functions such as sorted as well, which accept a function as an argument.
Reminder: the function sorted lets us sort values, and even lets us define what to sort them by.
For a refresher on using it, see the notebook on built-in functions in week 4.
For example, let's sort the characters in the following list by their dates of birth:
End of explanation
"""
sorted(closet, key=lambda d: d['year_of_birth'])
"""
Explanation: We want the list to be sorted by the year_of_birth key.
That is, given a dictionary named d that represents a character, we need to obtain d['year_of_birth'] and sort the list by it.
Let's get to work:
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/a96f6d7ea0f7ccafcacc578a25e1f8c5/ica_comparison.ipynb | bsd-3-clause | # Authors: Pierre Ablin <[email protected]>
#
# License: BSD-3-Clause
from time import time
import mne
from mne.preprocessing import ICA
from mne.datasets import sample
print(__doc__)
"""
Explanation: Compare the different ICA algorithms in MNE
Different ICA algorithms are fit to raw MEG data, and the corresponding maps
are displayed.
End of explanation
"""
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname).crop(0, 60).pick('meg').load_data()
reject = dict(mag=5e-12, grad=4000e-13)
raw.filter(1, 30, fir_design='firwin')
"""
Explanation: Read and preprocess the data. Preprocessing consists of:
MEG channel selection
1-30 Hz band-pass filter
End of explanation
"""
def run_ica(method, fit_params=None):
ica = ICA(n_components=20, method=method, fit_params=fit_params,
max_iter='auto', random_state=0)
t0 = time()
ica.fit(raw, reject=reject)
fit_time = time() - t0
title = ('ICA decomposition using %s (took %.1fs)' % (method, fit_time))
ica.plot_components(title=title)
"""
Explanation: Define a function that runs ICA on the raw MEG data and plots the components
End of explanation
"""
run_ica('fastica')
"""
Explanation: FastICA
End of explanation
"""
run_ica('picard')
"""
Explanation: Picard
End of explanation
"""
run_ica('infomax')
"""
Explanation: Infomax
End of explanation
"""
run_ica('infomax', fit_params=dict(extended=True))
"""
Explanation: Extended Infomax
End of explanation
"""
|
theideasmith/theideasmith.github.io | _notebooks/.ipynb_checkpoints/ODE N-Dimensional Test 1-checkpoint.ipynb | mit | import numpy as np
import numpy.ma as ma
from scipy.integrate import odeint
mag = lambda r: np.sqrt(np.sum(np.power(r, 2)))
def g(y, t, q, m, n,d, k):
"""
n: the number of particles
d: the number of dimensions
(for fun's sake I want this
to work for k-dimensional systems)
y: an (n*2,d) dimensional matrix
where y[:n]_i is the position
of the ith particle and
y[n:]_i is the velocity of
the ith particle
qs: the particle charges
ms: the particle masses
k: the electric constant
t: the current timestamp
"""
y = y.reshape((n*2,d))
v = np.array(y[n:])
# rj across, ri down
rs_from = np.tile(y[:n], (n,1,1))
# ri across, rj down
rs_to = np.transpose(rs_from, axes=(1,0,2))
# directional distance between each r_i and r_j
# dr_ij is the force from j onto i, i.e. r_i - r_j
dr = rs_to - rs_from
# Used as a mask
nd_identity = np.eye(n).reshape((n,n,1))
# Force magnitudes
drmag = ma.array(
np.sqrt(
np.sum(
np.power(dr, 2), 2)),
mask=nd_identity)
    # Pairwise q_i*q_j for the force equation
    qsa = np.tile(q, (n,1))
    qsb = np.tile(q, (n,1)).T
    qs = qsa*qsb
    # Pairwise force vectors: k*q_i*q_j/|dr|^3 * dr, i.e. k*q_i*q_j/|dr|^2 along the unit vector
    Fs = (k*qs/np.power(drmag,3)).reshape((n,n,1))*dr
    # Net force on each particle: sum the pairwise force vectors over j
    Fnet = np.sum(Fs, 1)
    # Dividing by m to obtain acceleration vectors
    a = Fnet/np.reshape(m, (n,1))
# Sliding integrated acceleration
# (i.e. velocity from previous iteration)
# to the position derivative slot
y[:n] = np.array(y[n:])
# Entering the acceleration into the velocity slot
y[n:] = np.array(a)
# Flattening it out for scipy.odeint to work
return y.reshape(n*d*2)
"""
Explanation: Point Charge Dynamics
Akiva Lipshitz, February 2, 2017
Particles and their dynamics are incredibly fascinating, even wondrous. Give me some particles and some simple equations describing their interactions – some very interesting things can start happening.
Currently studying electrostatics in my physics class, I am interested in not only the static force and field distributions but also in the dynamics of particles in such fields. To study the dynamics of electric particles is not an easy endeavor – in fact the differential equations governing their dynamics are quite complex and not easily solved manually, especially by someone who lacks a background in differential equations.
Instead of relying on our analytical abilities, we may rely on our computational abilities and numerically solve the differential equations. Herein I will develop a scheme for computing the dynamics of $n$ electric particles en masse. It will not be computationally easy – the number of operations grows proportionally to $n^2$. For fewer than $10^4$ particles you should be able to simulate the particle dynamics for long enough time intervals to be useful. But for something like $10^6$ particles the problem is intractable. You'll need to do more than $10^{12}$ operations per iteration and a degree in numerical analysis.
Governing Equations
Given $n$ charges $q_1, q_2, ..., q_n$, with masses $m_1, m_2, ..., m_n$ located at positions $\vec{r}_1, \vec{r}_2, ..., \vec{r}_n$, the force induced on $q_i$ by $q_j$ is given by
$$\vec{F}_{j \to i} = k\frac{q_iq_j}{\left|\vec{r}_i-\vec{r}_j\right|^2}\hat{r}_{ij}$$
where $\hat{r}_{ij}$ is the unit vector pointing from $q_j$ towards $q_i$:
$$\hat{r}_{ij} = \frac{\vec{r}_i-\vec{r}_j}{\left|\vec{r}_i-\vec{r}_j\right|}$$
Now, the net marginal force on particle $q_i$ is given as the sum of the pairwise forces
$$\vec{F}_{N, i} = \sum_{j \ne i}{\vec{F}_{j \to i}}$$
And then the net acceleration of particle $q_i$ just normalizes the force by the mass of the particle:
$$\vec{a}i = \frac{\vec{F}{N, i}}{m_i}$$
To implement this at scale, we're going to need to figure out a scheme for vectorizing all these operations, demonstrated below.
We'll be using scipy.integrate.odeint for our numerical integration. Below, the function g(y, t, q, m, n, d, k) returns the derivatives of all our variables at each iteration. We pass it to odeint and then do the integration.
End of explanation
"""
t_f = 10
t = np.linspace(0, 20, num=t_f)
"""
Explanation: Let's define our time intervals, so that odeint knows which time stamps to iterate over.
End of explanation
"""
# Number of dimensions
d = 2
# Number of point charges
n = 3
# charge magnitudes, currently all equal to 1
q = np.ones(n)
# masses
m = np.ones(n)
# The electric constant
# k=1/(4*pi*epsilon_naught)
# Right now we will set it to 1
# because for our tests we are choosing all q_i =1.
# Therefore, k*q is too large a number and causes
# roundoff errors in the integrator.
# In truth:
# k = 8.99*10**9
# But for now:
k=1.
"""
Explanation: Some other constants
End of explanation
"""
r1i = np.array([-2., 0.5])
dr1dti = np.array([2.,0.])
r2i = np.array([30.,0.])
dr2dti = np.array([-2., 0.])
r3i = np.array([16.,16.])
dr3dti = np.array([0, -2.])
"""
Explanation: We get to choose the initial positions and velocities of our particles. For our initial tests, we'll set up 3 particles to collide with each other.
End of explanation
"""
y0 = np.array([r1i, r2i, r3i, dr1dti, dr2dti, dr3dti]).flatten()
"""
Explanation: And pack them into an initial state variable we can pass to odeint.
End of explanation
"""
# Doing the integration
yf = odeint(g, y0, t, args=(q,m,n,d,k)).reshape(t_f,n*2,d)
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
#ax = fig.add_subplot(111, projection='3d')
ax = fig.add_subplot(111)
ys1 = yf[:,0,1]
xs1 = yf[:,0,0]
xs2 = yf[:,1,0]
ys2 = yf[:,1,1]
xs3 = yf[:,2,0]
ys3 = yf[:,2,1]
ax.plot(xs1[:1], ys1[:1],'bv')
ax.plot(xs1[-1:], ys1[-1:], 'rv')
ax.plot(xs2[:1], ys2[:1], 'bv')
ax.plot(xs2[-1:], ys2[-1:], 'rv')
ax.plot(xs3[:1], ys3[:1], 'bv')
ax.plot(xs3[-1:], ys3[-1:], 'rv')
# minx = np.min(y[:,[0,2],0])
# maxx = np.max(y[:,[0,2],0])
# miny = np.min(y[:,[0,2],1])
# maxy = np.max(y[:,[0,2],1])
ax.plot(xs1, ys1)
ax.plot(xs2, ys2)
ax.plot(xs3, ys3)
# plt.xlim(xmin=minx, xmax=maxx)
# plt.ylim(ymin=miny, ymax=maxy)
plt.title("Paths of 3 Colliding Electric Particles")
plt.show()
"""
Explanation: The Fun Part – Doing the Integration
Now, we'll actually do the integration.
End of explanation
"""
|
pycam/python-basic | python_basic_2_2.ipynb | unlicense | codeList = ['NA06984', 'NA06985', 'NA06986', 'NA06989', 'NA06991']
for code in codeList:
print(code)
"""
Explanation: An introduction to solving biological problems with Python
Session 2.2: Loops
The <tt>for</tt> loop
Exercises 2.2.1
The <tt>while</tt> loop
Exercises 2.2.2
Skipping and breaking loops
More looping using range() and enumerate()
Filtering in loops
Exercises 2.2.3
Loops
When an operation needs to be repeated multiple times, for example on all of the items in a list, we
avoid having to type (or copy and paste) repetitive code by creating a loop. There are two ways of creating loops in Python, the <tt>for</tt> loop and the <tt>while</tt> loop.
The <tt>for</tt> loop
The for loop in Python iterates over each item in a sequence (such as a list or tuple) in the order that they appear in the sequence. What this means is that a variable (<tt>code</tt> in the below example) is set to each item from the sequence of values in turn, and each time this happens the indented block of code is executed again.
End of explanation
"""
dnaSequence = 'ATGGTGTTGCC'
for base in dnaSequence:
print(base)
"""
Explanation: A <tt>for</tt> loop can iterate over the individual characters in a string:
End of explanation
"""
rnaMassDict = {"G":345.21, "C":305.18, "A":329.21, "U":302.16}
for x in rnaMassDict:
print(x, rnaMassDict[x])
"""
Explanation: And also over the keys of a dictionary:
End of explanation
"""
total = 0
values = [1, 2, 4, 8, 16]
for v in values:
total = total + v
# total += v
print(total)
print(total)
"""
Explanation: Any variables that are defined before the loop can be accessed from inside the loop. So for example to calculate the summation of the items in a list of values we could define the total initially to be zero and add each value to the total in the loop:
End of explanation
"""
geneExpression = {
'Beta-Catenin': 2.5,
'Beta-Actin': 1.7,
'Pax6': 0,
'HoxA2': -3.2
}
for gene in geneExpression:
if geneExpression[gene] < 0:
print(gene, "is downregulated")
elif geneExpression[gene] > 0:
print(gene, "is upregulated")
else:
print("No change in expression of ", gene)
"""
Explanation: Naturally we can combine a <tt>for</tt> loop with an <tt>if</tt> statement, noting that we need two indentation levels, one for the outer loop and another for the conditional blocks:
End of explanation
"""
value = 0.25
while value < 8:
value = value * 2
print(value)
print("final value:", value)
"""
Explanation: Exercises 2.2.1
Create a sequence where each element is an individual base of DNA. Make the sequence 15 bases long.
Print the length of the sequence.
Create a for loop to output every base of the sequence on a new line.
The <tt>while</tt> loop
In addition to the <tt>for</tt> loop that operates on a collection of items, there is a <tt>while</tt> loop that simply repeats while some statement evaluates to True and stops when it is False. Note that if the tested expression never evaluates to False then you have an “infinite loop”, which is not good.
In this example we generate a series of numbers by doubling a value after each iteration, until a limit is reached:
End of explanation
"""
values = [10, -5, 3, -1, 7]
total = 0
for v in values:
if v < 0:
continue # Skip this iteration
total += v
print(total)
"""
Explanation: What's going on here is that the value is doubled on each iteration, and once it reaches 8 the while test fails (8 is not less than 8), so that last value is preserved. Note that if the test were instead value <= 8 then we would get one more doubling and the value would reach 16.
Exercises 2.2.2
Reuse the 15-base-long sequence created in the previous exercise, where each element is an individual base of DNA.
Create a <tt>while</tt> loop similar to the one above that starts at the third base in the sequence and outputs every third base until the 12th.
Skipping and breaking loops
Python has two ways of affecting the flow of the <tt>for</tt> or <tt>while</tt> loop inside the block. The <tt>continue</tt> statement means that the rest of the code in the block is skipped for this particular item in the collection, i.e. jump to the next iteration. In this example negative numbers are left out of a summation:
End of explanation
"""
geneticCode = {'TAT': 'Tyrosine', 'TAC': 'Tyrosine',
'CAA': 'Glutamine', 'CAG': 'Glutamine',
'TAG': 'STOP'}
sequence = ['CAG','TAC','CAA','TAG','TAC','CAG','CAA']
for codon in sequence:
if geneticCode[codon] == 'STOP':
break # Quit looping at this point
else:
print(geneticCode[codon])
"""
Explanation: The other way of affecting a loop is with the <tt>break</tt> statement. In contrast to the <tt>continue</tt> statement, this immediately causes all looping to finish, and execution is resumed at the next statement after the loop.
End of explanation
"""
print(list(range(10)))
print(list(range(5, 10)))
print(list(range(0, 10, 3)))
print(list(range(7, 2, -2)))
"""
Explanation: Looping gotchas
An internal counter is used to keep track of which item is used next, and this is incremented on each iteration. When this counter has reached the length of the sequence the loop terminates. This means that if you delete the current item from the sequence, the next item will be skipped (since it gets the index of the current item which has already been treated). Likewise, if you insert an item in a sequence before the current item, the current item will be treated again the next time through the loop. This can lead to nasty bugs that can be avoided by making a temporary copy using a slice of the whole sequence.
<div class="alert-warning">
**When looping, never modify the collection!** Always create a copy of it first.
</div>
More looping
Using range()
If you would like to iterate over a numeric sequence then this is possible by combining the range() function and a for loop.
End of explanation
"""
for x in range(8):
print(x*x)
squares = []
for x in range(8):
s = x*x
squares.append(s)
print(squares)
"""
Explanation: Looping through ranges
End of explanation
"""
letters = ['A','C','G','T']
for index, letter in enumerate(letters):
print(index, letter)
numbered_letters = list(enumerate(letters))
print(numbered_letters)
"""
Explanation: Using enumerate()
Given a sequence, enumerate() allows you to iterate over the sequence generating a tuple containing each value along with a corresponding index.
End of explanation
"""
city_pops = {
'London': 8200000,
'Cambridge': 130000,
'Edinburgh': 420000,
'Glasgow': 1200000
}
big_cities = []
for city in city_pops:
if city_pops[city] >= 1000000:
big_cities.append(city)
print(big_cities)
total = 0
for city in city_pops:
total += city_pops[city]
print("total population:", total)
pops = list(city_pops.values())
print("total population:", sum(pops))
"""
Explanation: Filtering in loops
End of explanation
"""
print('{:.2f}'.format(0.4567))
geneExpression = {
'Beta-Catenin': 2.5,
'Beta-Actin': 1.7,
'Pax6': 0,
'HoxA2': -3.2
}
for gene in geneExpression:
print('{:s}\t{:+.2f}'.format(gene, geneExpression[gene])) # s is optional
# could also be written using variable names
#print('{gene:s}\t{exp:+.2f}'.format(gene=gene, exp=geneExpression[gene]))
"""
Explanation: Formating string
Constructing more complex strings from a mix of variables of different types can be cumbersome, and sometimes you want more control over how values are interpolated into a string. Python provides a powerful mechanism for formatting strings using built-in .format() function using "replacement fields" surrounded by curly braces {} which starts with an optional field name followed by a colon : and finishes with a format specification.
There are lots of these specifiers, but here are 3 useful ones:
d: decimal integer
f: floating point number
s: string
You can specify the number of decimal places to use in a floating point number with, e.g., .2f for 2 decimal places, or +.2f for 2 decimal places with the sign always shown.
End of explanation
"""
|
VVard0g/ThreatHunter-Playbook | docs/notebooks/windows/07_discovery/WIN-190625024610.ipynb | mit | from openhunt.mordorutils import *
spark = get_spark()
"""
Explanation: SysKey Registry Keys Access
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/06/25 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be calculating the SysKey from registry key values to decrypt SAM entries
Technical Context
Every computer that runs Windows has its own local domain; that is, it has an account database for accounts that are specific to that computer.
Conceptually,this is an account database like any other with accounts, groups, SIDs, and so on. These are referred to as local accounts, local groups, and so on.
Because computers typically do not trust each other for account information, these identities stay local to the computer on which they were created.
Offensive Tradecraft
Adversaries might use tools like Mimikatz with lsadump::sam commands or scripts such as Invoke-PowerDump to get the SysKey to decrypt Security Account Manager (SAM) database entries (from the registry or a hive) and obtain NTLM, and sometimes LM, hashes of local account passwords.
Adversaries can calculate the Syskey by using RegOpenKeyEx/RegQueryInfoKey API calls to query the appropriate class info and values from the HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\JD, HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Skew1, HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\GBG, and HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Data keys.
Additional reading
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/security_account_manager_database.md
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/syskey.md
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/credential_access/SDWIN-190625103712.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/credential_access/host/empire_mimikatz_sam_access.zip |
Analytics
Initialize Analytics Engine
End of explanation
"""
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/credential_access/host/empire_mimikatz_sam_access.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
"""
Explanation: Download & Process Security Dataset
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, ProcessName, ObjectName, AccessMask, EventID
FROM sdTable
WHERE LOWER(Channel) = "security"
AND (EventID = 4656 OR EventID = 4663)
AND ObjectType = "Key"
AND (
lower(ObjectName) LIKE "%jd"
OR lower(ObjectName) LIKE "%gbg"
OR lower(ObjectName) LIKE "%data"
OR lower(ObjectName) LIKE "%skew1"
)
'''
)
df.show(10,False)
"""
Explanation: Analytic I
Look for handle requests and access operations to the specific registry keys used to calculate the SysKey. SACLs are needed for them.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Windows registry | Microsoft-Windows-Security-Auditing | Process accessed Windows registry key | 4663 |
| Windows registry | Microsoft-Windows-Security-Auditing | Process requested access Windows registry key | 4656 |
End of explanation
"""
|
PythonSanSebastian/ep-tools | notebooks/programme_grid.ipynb | mit | %%javascript
IPython.OutputArea.auto_scroll_threshold = 99999;
//increase max size of output area
import json
import datetime as dt
from random import choice, randrange, shuffle
from copy import deepcopy
from collections import OrderedDict, defaultdict
from itertools import product
from functools import partial
from operator import itemgetter
from eptools.dict_query import build_query, run_query
from IPython.display import display, HTML
show = lambda s: display(HTML(s))
"""
Explanation: EuroPython program grid
End of explanation
"""
talk_sessions = json.load(open('accepted_talks.json'))
list(talk_sessions.keys())
"""
Explanation: Load the data
End of explanation
"""
#all talks
all_talks = []
for s in talk_sessions.values():
all_talks.extend(list(s.values()))
#the talks worth for scheduling
grid_talks = []
sessions = talk_sessions.copy()
general_grid_sessions = ['talk', 'training']
for session_name in general_grid_sessions:
grid_talks.extend(sessions[session_name].values())
fields2pop = ['abstract_extra',
'abstract_long',
'abstract_short',
'twitters',
'emails',
'status',
'url',
'companies',
'have_tickets',
]
for talk in grid_talks:
for f in fields2pop:
talk.pop(f)
"""
Explanation: Clean up the data
Here I pick from talk_sessions only the talks of the types I need for scheduling.
I also remove from these talks the fields that I don't need for scheduling, to keep the printed output clean and short.
End of explanation
"""
tags_field = 'tag_categories'
weekday_names = {0: 'Monday, July 18th',
1: 'Tuesday, July 19th',
2: 'Wednesday, July 20th',
3: 'Thursday, July 21st',
4: 'Friday, July 22nd'
}
room_names = {0: 'A1',
1: 'A3',
2: 'A2',
3: 'Ba1',
4: 'Ba2',
5: 'E' ,
6: 'A4',
}
# this is not being used yet
durations = {'announcements': 15,
'keynote': 45,
'lts': 60,
'lunch': 60,
'am_coffee': 30,
'pm_coffee': 30,
}
# track schedule types, by talk conditions
track_schedule1 = [(('duration', 45), ),
(('duration', 45), ),
(('duration', (45, 60)), ),
(('duration', 45), ),
(('duration', 45), ),
(('duration', 45), ),
                   (('duration', 45), ),
#(('admin_type', 'Lightning talk'), ),
]
track_schedule2 = [(('duration', 45), ),
(('duration', 45), ),
(('duration', (45, 30)), ),
(('duration', (60, 45)), ),
(('duration', (60, 45)), ),
(('duration', 45), ),
]
track_schedule3 = [(('duration', 45), ),
(('duration', 45), ),
(('duration', (45, 30)), ),
(('duration', (45, 60)), ),
                   (('duration', (45, 30)), ),
(('duration', (60, 30)), ),
(('duration', (45, 30)), ),
]
#tutorials
track_schedule4 = [(('type', 'has Training'), ),
(('type', 'has Training'), ), ]
# these are for reference, but not being taken into account (yet)
frstday_schedule1 = [(('admin_type', 'Opening session'), ),
                     (('admin_type', 'Keynote'), ),
                     ] + track_schedule1
lastday_schedule1 = track_schedule1 + [(('admin_type', 'Closing session'), ), ]
# I removed time from here.
#daily_timegrid = lambda schedule: OrderedDict([(datetime.time(*slot[0]), slot[1]) for slot in schedule])
room1_schedule = track_schedule1 # A1, the google room
room2_schedule = track_schedule2 # A3, pythonanywhere room
room3_schedule = track_schedule3 # A2
room4_schedule = track_schedule3 # Barria1
room5_schedule = track_schedule3 # Barria2
room6_schedule = track_schedule4 # Room E
room7_schedule = track_schedule4 # Room A4
daily_schedule = OrderedDict([(0, room1_schedule),
(1, room2_schedule),
(2, room3_schedule),
(3, room4_schedule),
(4, room5_schedule),
(5, room6_schedule),
(6, room7_schedule)])
# week conditions
default_condition = (('language', 'English'), ('type', 'has Talk'),)
# [day][room] -> talk conditions
dayroom_conditions = {0: {},
1: {4: (('language', 'Spanish'), ), },
2: {3: (('language', 'Basque' ), ), },
3: {},
4: {},
}
"""
Explanation: Declare the week schedule
Declare the structures needed to define the conference schedule.
The information here will be used in the dict_query submodule to filter the talks.
End of explanation
"""
# the whole schedule conditions table
def join_conds(condset1, condset2):
d = dict(condset1)
if condset2:
d.update(dict(condset2))
return tuple(d.items())
week_conditions = defaultdict(dict)
for day, room in product(weekday_names, room_names):
track_schedule = daily_schedule[room]
dayroom_conds = dayroom_conditions[day].get(room, default_condition)
week_conditions[day][room] = []
for slot_conds in track_schedule:
week_conditions[day][room].append(join_conds(dayroom_conds, slot_conds))
week_conditions[0][5]
"""
Explanation: Build the schedule conditions table
End of explanation
"""
tags2pop = ['>>> Suggested Track', 'Python', '']
tags = defaultdict(int)
for talk in all_talks:
for t in talk[tags_field]:
if t in tags2pop:
continue
tags[t] += 1
tags_sorted = sorted(tags.items(), key=itemgetter(1), reverse=True)
tags_sorted
"""
Explanation: Group tags and count talks-per-tag
End of explanation
"""
def pick_talk(talks, conditions, trialno=1):
if not talks:
raise IndexError('The list of talks is empty!')
query = build_query(conditions)
for tidx, talk in enumerate(talks):
if run_query(talk, query):
return talks.pop(tidx)
# if no talk fills the query requirements
if trialno == 1:
nuconds = dict(conditions)
        nuconds.pop(tags_field, None)  # the tags condition may be absent (e.g. admin slots)
nuconds = tuple(nuconds.items())
print('2ND TRY: Looking only for {}.'.format(nuconds))
return pick_talk(talks, nuconds, trialno=2)
if trialno == 2:
oldconds = dict(conditions)
nuconds = {}
if 'duration' in oldconds:
nuconds['duration'] = oldconds['duration']
if 'type' in oldconds:
nuconds['type'] = oldconds['type']
nuconds = tuple(nuconds.items())
print('3RD TRY: Looking only for {}.'.format(nuconds))
return pick_talk(talks, nuconds, trialno=3)
else:
print('FAILED looking for {}.'.format(conditions))
return {}
# collections splitting utilities
import random
def chunks(l, n):
"""Yield successive `n`-sized chunks from `l`."""
for i in range(0, len(l), n):
yield l[i:i+n]
def split(xs, n):
""" Yield `n` chunks of the sequence `xs`."""
ys = list(xs)
random.shuffle(ys)
size = len(ys) // n
leftovers = ys[size*n:]
for c in range(n):
if leftovers:
extra = [ leftovers.pop() ]
else:
extra = []
yield ys[c*size:(c+1)*size] + extra
"""
Explanation: Filtering functions
Here I declare the functions used to filter talks using the dict_query-type queries defined above.
End of explanation
"""
from eptools.dict_query import or_condition
talks = grid_talks.copy()
shuffle(talks)
def condition_set(slot_conditions, default_conditions, topic_conditions):
conds = join_conds(default_conditions, slot_conditions)
if 'admin_type' not in dict(conds):
conds = join_conds(conds, topic_conditions)
return conds
# random pick talks
week_slots = defaultdict(dict)
for day in weekday_names:
shuffle(tags_sorted)
tags_chunks = list(split([t[0] for t in tags_sorted], len(room_names)))
rooms_topics = {room: or_condition(tags_field, 'has', tags)
for room, tags in zip(room_names.keys(), tags_chunks)}
for room in room_names:
slots_conds = week_conditions[day][room]
room_topics = rooms_topics[room]
week_slots[day][room] = []
#print(len(talks))
for slot_cond in slots_conds:
conds = condition_set(slot_cond, default_condition, room_topics)
try:
week_slots[day][room].append(pick_talk(talks, conds))
except IndexError:
            print('No talks left for {}.'.format(conds))
except:
raise
"""
Explanation: Distribute the talks along the schedule
End of explanation
"""
q = build_query((('type', 'has Training'),))
run_query(talks[0], q)
if talks:
show('<h1>Not scheduled talks</h1>')
for talk in talks:
print(talk)
"""
Explanation: Remaining talks
Print the remaining talks that have been left out of the schedule (by accident?).
End of explanation
"""
class ListTable(list):
""" Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders an HTML Table in
IPython Notebook. """
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
def tabulate(time_list, header=''):
table = ListTable()
table.append(header)
for slot in time_list:
table.append([slot] + time_list[slot])
return table
def get_room_schedule(weekly_schedule, room_name, field='title'):
    # room_names maps room indices to display names; invert it to get the index
    room = {name: idx for idx, name in room_names.items()}[room_name]
    n_days = len(weekly_schedule)
    n_slots = max(len(weekly_schedule[d][room]) for d in range(n_days))
    daily_slots = []
    for slot in range(n_slots):
        talks = [weekly_schedule[d][room][slot].get(field, '-')
                 if slot < len(weekly_schedule[d][room]) else '-'
                 for d in range(n_days)]
        daily_slots.append((slot, talks))
    room_schedule = OrderedDict(daily_slots)
    return room_schedule
from itertools import zip_longest
def get_day_schedule(weekly_schedule, day_num, field='title'):
day_schedule = weekly_schedule[day_num]
nslots = max([len(slots) for room, slots in day_schedule.items()])
room_slots = []
for room, talk_slots in day_schedule.items():
room_talks = [talk.get(field, '-') for slot, talk in enumerate(talk_slots)]
room_slots.append(room_talks)
schedule = OrderedDict(list(enumerate(list(map(list, zip_longest(*room_slots))))))
return schedule
"""
Explanation: Print the schedule
Declare the functions needed to access the talks in the filled schedule in an orderly way and to print the tables nicely in this notebook.
End of explanation
"""
sched_field = 'title'
for day, _ in enumerate(weekday_names):
show('<h3>{}</h3>'.format(weekday_names[day]))
show(tabulate(get_day_schedule(week_slots, day),
header=['Slot'] + list(room_names.values()))._repr_html_())
"""
Explanation: Schedule
End of explanation
"""
get_room_schedule(week_slots, 'A1')
## schedules by room
# tabulate(get_room_schedule(week_slots, 'A1'), header=['A1'] + list(weekday_names.values()))
# tabulate(get_room_schedule(week_slots, 'A2'), header=['A2'] + list(weekday_names.values()))
# tabulate(get_room_schedule(week_slots, 'A3'), header=['A3'] + list(weekday_names.values()))
# tabulate(get_room_schedule(week_slots, 'Ba1'), header=['Barria 1'] + list(weekday_names.values()))
# tabulate(get_room_schedule(week_slots, 'Ba2'), header=['Barria 2'] + list(weekday_names.values()))
# tabulate(get_room_schedule(week_slots, 'E'), header=[room_names[5]] + list(weekday_names.values()))
# tabulate(get_room_schedule(week_slots, 'A4'), header=[room_names[6]] + list(weekday_names.values()))
def find_talk(talk_title):
return [talk for talk in all_talks if talk_title in talk['title']]
find_talk("So, what's all the fuss about Docker?")
"""
Explanation: Snippets
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bcc/cmip6/models/sandbox-3/landice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-3', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adative grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency with the atmosphere, whether or not a separate SMB model is used, and, if so, details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
scottquiring/Udacity_Deeplearning | intro-to-tflearn/TFLearn_Digit_Recognition.ipynb | mit | # Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications, including recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
"""
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(32)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model(hidden_layers, learning_rate):
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
for n_units in hidden_layers:
net = tflearn.fully_connected(net, n_units, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=learning_rate,
loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
np.log10(784)
layers = list(reversed(list(map(int,np.round(np.logspace(1,2.89,5))[1:-1]))))
layers
# Build the model
model = build_model([262, 88, 30],0.1)
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=10)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation
"""
|
DS-100/sp17-materials | sp17/hw/hw4/hw4.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import sqlalchemy
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('hw4.ok')
"""
Explanation: Homework 4: SQL, FEC Data, and Small Donors
Due: 11:59pm Tuesday, March 14
Note: The due date has changed from March 7 to March 14. Happy studying!
In this homework, we're going to explore the Federal Election
Commission's data on the money exchanged during the 2016 election.
This homework has two main parts:
Answering questions and computing descriptive statistics on the data
Conducting a hypothesis test
This is very similar to what you've done before in this class. However, in this
homework almost all of our computations will be done using SQL.
Getting Started
For this assignment, you're going to use a popular cloud services provider: Heroku. This will give you some experience provisioning a database in the cloud and working on that database from your computer.
Since the free tier of Heroku's Postgres service limits users to 10,000 rows of data, we've provided a subset of the FEC dataset for you to work with.
If you're interested, you can download and load the entire dataset from
http://www.fec.gov/finance/disclosure/ftpdet.shtml. It is about 4GB and contains around 24 million rows. (With Heroku and other cloud services, it is relatively straightforward to rent clusters of machines to work on much larger datasets. In particular, it would be easy to rerun your analyses in this assignment on the full dataset.)
Provisioning the Postgres DB
Visit https://signup.heroku.com/postgres-home-button and sign up for an account
if you don't have one already.
Now, install the Heroku CLI: https://devcenter.heroku.com/articles/heroku-cli.
Then, run heroku login to log into Heroku from your CLI.
Now, visit https://dashboard.heroku.com/apps and click New -> App. Name the app
whatever you want.
You should be sent to the app details page. Click Resources in the navbar, then
in the Add-on search bar, type "Postgres". You should be able to select Heroku
Postgres. Make sure the free tier (Hobby Dev) is selected and click Provision. Now
you should see Heroku Postgres :: Database in your Add-ons list.
Loading the data into the Heroku DB
(1) Run the lines below in your terminal to install necessary libraries.
conda install -y psycopg2
conda install -y postgresql
pip install ipython-sql
(2) Click the Heroku Postgres :: Database link in your app's Add-ons list.
(3) In the Heroku Data page you got redirected to, you should see the name of your
database. Scroll down to Administration and click View Credentials. These are the
credentials that allow you to connect to the database. The last entry of the list
contains a line that looks like:
heroku pg:psql db_name --app app_name
In your terminal, take that command and add "< fec.sql" to the end
to get something like:
heroku pg:psql db_name --app app_name < fec.sql
Run that command. It will run the commands in fec.sql, which load the dataset into the database.
Now you should be able to run the command without the "< fec.sql" to
have a postgres prompt. Try typing "\d+" at the prompt. You should get
something like:
ds100-hw4-db::DATABASE=> \d+
List of relations
Schema | Name | Type | Owner | Size | Description
--------+--------------+-------+----------------+------------+-------------
public | cand | table | vibrgrsqevmzkj | 16 kB |
public | comm | table | vibrgrsqevmzkj | 168 kB |
public | indiv | table | vibrgrsqevmzkj | 904 kB |
public | indiv_sample | table | vibrgrsqevmzkj | 600 kB |
public | inter_comm | table | vibrgrsqevmzkj | 296 kB |
public | link | table | vibrgrsqevmzkj | 8192 bytes |
(6 rows)
Congrats! You now have a Postgres database running containing the data you need
for this project.
Part 1: Descriptive Statistics
End of explanation
"""
my_URI = <replace_me>
%load_ext sql
%sql $my_URI
engine = sqlalchemy.create_engine(my_URI)
connection = engine.connect()
"""
Explanation: Now, let's connect to your Postgres database. On your Heroku Postgres details,
look at the credentials for the database. Take the long URI in the credentials and
replace the portion of the code that reads <replace_me> with the URI.
It should start with postgres://.
End of explanation
"""
# We use `LIMIT 5` to avoid displaying a huge table.
# Although our tables shouldn't get too large to display,
# this is generally good practice when working in the
# notebook environment. Jupyter notebooks don't handle
# very large outputs well.
%sql SELECT * from cand LIMIT 5
"""
Explanation: Table Descriptions
Here is a list of the tables in the database. Each table links to the documentation on the FEC page for the dataset.
Note that the table names here are slightly different from the ones in lecture. Consult the FEC page
for the descriptions of the tables to find out what the correspondence is.
cand: Candidates table. Contains names and party affiliation.
comm: Committees table. Contains committee names and types.
link: Committee to candidate links.
indiv: Individual contributions. Contains recipient committee ID and transaction amount.
inter_comm: Committee-to-candidate and committee-to-committee contributions. Contains donor and recipient IDs and transaction amount.
indiv_sample: Sample of individual contributions to Hillary Clinton and Bernie Sanders. Used in Part 2 only.
Writing SQL queries
You can write SQL directly in the notebook by using the %sql magic, as demonstrated in the next cell.
Be careful when doing this.
If you try to run a SQL query that returns a lot of rows (100k or more is a good rule of thumb)
your browser will probably crash.
This is why in this homework, we will strongly prefer using SQL as much as
possible, only materializing the SQL queries when they are small.
Because of this, your queries should work even as the size of your
data goes into the terabyte range! This is the primary advantage of working
with SQL as opposed to only dataframes.
End of explanation
"""
query = '''
SELECT cand_id, cand_name
FROM cand
WHERE cand_pty_affiliation = 'REP'
LIMIT 5
'''
%sql $query
"""
Explanation: For longer queries, you can save your query into a string, then use it in the
%sql statement. The $query in the %sql statement pulls in the value in
the Python variable query.
End of explanation
"""
res = %sql select * from cand limit 5
res_df = res.DataFrame()
res_df['cand_id']
"""
Explanation: In addition, you can assign the SQL statement to a variable and then call .DataFrame() on it to get a Pandas DataFrame.
However, it will often be more efficient to express your computation directly in SQL. For this homework, we will be grading your SQL expressions so be sure to do all computation in SQL (unless otherwise requested).
End of explanation
"""
# complete the query string
query_q1a = """
SELECT ...
FROM ...
WHERE ...
"""
q1a = %sql $query_q1a
q1a
_ = ok.grade('q01a')
_ = ok.backup()
"""
Explanation: Question 1a
We are interested in finding the PACs that donated large sums to the candidates. To begin to answer this question, we will look at the inter_comm table. We'll find all the transactions that exceed \$5,000. However, if there are a lot of transactions like that, it might not be useful to list them all. So before actually finding the transactions, find out how many such transactions there are. Use only SQL to compute the answer.
(It should be a table with a single column called count and a single entry, the number of transactions.)
We will be grading the query string query_q1a. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
# complete the query string
query_q1b = """
SELECT
... AS donor_cmte_id
... AS recipient_name
... AS transaction_amt
FROM ...
WHERE ...
ORDER BY ...
"""
q1b = %sql $query_q1b
q1b
_ = ok.grade('q01b')
_ = ok.backup()
"""
Explanation: Question 1b
Having seen that there aren't too many transactions that exceed \$5,000, let's find them all. Using only SQL, construct a table containing the recipient committee's name, the ID of the donor committee, and the transaction amount, for transactions that exceed $5,000 dollars. Sort the transactions in decreasing order by amount.
We will be grading the query string query_q1b. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
# complete the query string
query_q1c = '''
SELECT
... AS donor_cmte_id
... AS recipient_name
... AS total_transaction_amt
FROM inter_comm
GROUP BY ...
ORDER BY ... DESC
LIMIT ...
'''
q1c = %sql $query_q1c
q1c
ok.grade('q01c')
_ = ok.backup()
"""
Explanation: Question 1c
Of course, individual transactions could be misleading. A more interesting question is: How much did each group give in total to each committee? Find the total transaction amounts after grouping by the recipient committee's name and the ID of the donor committee. This time, just use LIMIT 20 to limit your results to the top 20 total donations.
We will be grading the query string query_q1c. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
# complete the query string
query_q1d = """
SELECT
... AS donor_cmte_id,
... AS recipient_id,
... AS total_transaction_amt
FROM ...
GROUP BY ...
ORDER BY ... DESC
LIMIT 20
"""
q1d = %sql $query_q1d
q1d
_ = ok.grade('q01d')
_ = ok.backup()
"""
Explanation: If you peruse the results of your last query, you should notice that some names are listed twice with slightly different spellings. Perhaps this causes some contributions to be split extraneously.
Question 1d
Find a field that uniquely identifies recipient committees and repeat your analysis from the previous question using that new identifier.
We will be grading the query string query_q1d. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
# complete the query string
query_q1e = '''
SELECT
... AS donor_name,
... AS recipient_name,
... AS total_transaction_amt
FROM ...
WHERE ...
GROUP BY ...
ORDER BY ... DESC
LIMIT 20
'''
q1e = %sql $query_q1e
q1e
_ = ok.grade('q01e')
_ = ok.backup()
"""
Explanation: Question 1e
Of course, your results are probably not very informative. Let's join these results with the comm table (perhaps twice?) to get the names of the committees involved in these transactions. As before, limit your results to the top 20 by total donation.
We will be grading the query string query_q1e. You may modify our template but the result should contain the same information with the same names.
Remember that the name column of inter_comm is not consistent. We found this out in 1(c) where we found that the same committees were named slightly differently. Because of this, you cannot use the name column of inter_comm to get the names of the committees.
End of explanation
"""
# complete the query string
query_q2 = '''
SELECT
... AS state,
... AS count
FROM ...
...
'''
q2 = %sql $query_q2
q2
_ = ok.grade('q02')
_ = ok.backup()
"""
Explanation: Question 2
What is the distribution of committees by state? Write a SQL query that computes, for each state, the number of committees in the comm table registered in that state. Display the results in descending order by count.
We will be grading the query string query_q2. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
query_q3 = '''
WITH pac_donations(cmte_id, pac_donations) AS
(
...
)
SELECT
... AS cmte_name,
... AS pac_donations
FROM ...
ORDER BY pac_donations, cmte_nm
LIMIT 20
'''
q3 = %sql $query_q3
q3
_ = ok.grade('q03')
_ = ok.backup()
"""
Explanation: Question 3
Political Action Committees are
major sources of funding for campaigns. They typically represent business, labor,
or ideological interests and influence campaigns through their funding.
Because of this, we'd like to know how much money each committee received from
PACs.
For each committee, list the total amount of donations they got from Political Action Committees. If they got no such donations, the total should be listed as null. Order the result by pac_donations, then cmte_nm.
We will be grading you on the query string query_q3. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
query_q4 = '''
SELECT
... AS from_cmte_name,
... AS to_cmte_name
FROM ...
WHERE ...
GROUP BY ...
ORDER ... DESC
LIMIT 10
'''
q4 = %sql $query_q4
q4
_ = ok.grade('q04')
_ = ok.backup()
"""
Explanation: Question 4
Committees can also contribute to other committees. When does this happen?
Perhaps looking at the data can help us figure it out.
Find the names of the top 10 (directed) committee pairs that are affiliated with the Republican Party, who have the highest number of intercommittee transactions. By directed, we mean that a transaction where C1 donates to C2 is not the same as one where C2 donates to C1.
We will be grading you on the query string query_q4. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
query_q5 = '''
SELECT DISTINCT
... AS cand_1,
... AS cand_2
FROM ...
WHERE ...
...
'''
q5 = %sql $query_q5
q5
_ = ok.grade('q05')
_ = ok.backup()
"""
Explanation: Question 5
Some committees received donations from a common contributor.
Perhaps they were ideologically similar.
Find the names of distinct candidate pairs that share a common committee contributor from Florida.
If you list a pair ("Washington", "Lincoln") you should also list ("Lincoln", "Washington").
Save the result in q5.
Hint: In SQL, the "not equals" operator is <> (it's != in Python).
We will be grading you on the query string query_q5. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
# Fill in the query
query_q7 = """
SELECT
comm.cmte_nm AS cmte_nm,
sum(indiv_sample.transaction_amt) AS total_transaction_amt
FROM ...
WHERE ...
GROUP BY ...
HAVING ...
"""
# Do not change anything below this line
res = %sql $query_q7
q7 = res.DataFrame().set_index("cmte_nm")
q7 # q7 will be graded
_ = ok.grade('q07')
_ = ok.backup()
"""
Explanation: Part 2: Hypothesis Testing and Bootstrap in SQL
In this part, we're going to perform a hypothesis test using SQL!
This article
describes a statement by Hillary Clinton in which
she claims that the majority of her campaign was funded by small donors. The
article argues that her statement is false, so we ask a slightly different question:
Is there a difference in the proportion of money contributed by small donors
between Hillary Clinton's and Bernie Sanders' campaigns?
For these questions, we define small donors as individuals that donated $200 or less
to a campaign.
For review, we suggest looking over this chapter on Hypothesis Testing from the Data 8 textbook: https://www.inferentialthinking.com/chapters/10/testing-hypotheses.html
Question 6
Before we begin, please think about and answer the following questions.
For each question, state "Yes" or "No", followed by a one-sentence explanation.
(a) If we were working with the entire FEC dataset instead of a sample,
would we still conduct a hypothesis test? Why or why not?
(b) If we were working with the entire FEC dataset instead of a sample,
would we still conduct bootstrap resampling? Why or why not?
(c) Let's suppose we take our sample and compute the proportion of money contributed by
small donors to Hillary and Bernie's campaign. We find that the difference
is 0.0 — they received the exact same proportion of small donations. Would
we still need to conduct a hypothesis test? Why or why not?
(d) Let's suppose we take our sample and compute the proportion of money contributed by
small donors to Hillary and Bernie's campaign. We find that the difference
is 0.3. Would we still need to conduct a hypothesis test? Why or why not?
(a) Enter in your answer for (a) here, replacing this sentence.
(b) Enter in your answer for (b) here, replacing this sentence.
(c) Enter in your answer for (c) here, replacing this sentence.
(d) Enter in your answer for (d) here, replacing this sentence.
Question 7
We've taken a sample of around 2700 rows of the original FEC data for individual
contributions that only include contributions to Clinton and Sanders.
This sample is stored in the table indiv_sample.
The individual contributions of donors are linked to committees,
not candidates directly. Hillary's primary committee was called
HILLARY FOR AMERICA, and Bernie's was BERNIE 2016.
Fill in the SQL query below to compute the total contributions for each
candidate's committee.
We will be grading you on the query string query_q7. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
# Fill in the query
query_q8 = '''
SELECT
comm.cmte_id AS cmte_id,
comm.cmte_nm AS cmte_name,
SUM (...) / SUM(...) AS prop_funds
FROM ...
WHERE ...
GROUP BY ...
HAVING ...
'''
# Do not change anything below this line
res = %sql $query_q8
small_donor_funds_prop = res.DataFrame()
small_donor_funds_prop
_ = ok.grade('q08')
_ = ok.backup()
"""
Explanation: Question 8
We want to know what proportion of this money came from small donors — individuals
who donated \$200 or less. For example, if Hillary raised \$1000, and \$300 of
that came from small donors, her proportion of small donors would be 0.3.
Compute this proportion for each candidate by filling in the SQL query below.
The resulting table should have three columns:
cmte_id which contains Hillary's and Bernie's committee IDs
cmte_name which contains Hillary's and Bernie's committee names
prop_funds which contains the proportion of funds contributed by
small donors.
You may not create a dataframe for this problem. By keeping the calculations
in SQL, this query will also work on the original dataset of individual
contributions (~ 3GB).
Hint: Try using Postgres' CASE statement to filter out transactions under
$200.
Hint: Remember that you can append ::float to a column name to convert its
values to float. You'll have to do this to perform division correctly.
We will be grading you on the query string query_q8. You may modify our template but the result should contain the same information with the same names.
End of explanation
"""
# Finish the SQL query to render the histogram of individual contributions
# for 'HILLARY FOR AMERICA'
query_q9a = """
SELECT transaction_amt
FROM ...
WHERE ...
"""
# Do not change anything below this line
res = %sql $query_q9a
hillary_contributions = res.DataFrame()
print(hillary_contributions.head())
# Make the Plot
sns.distplot(hillary_contributions)
plt.title('Distribution of Contribution Amounts to Hillary')
plt.xlim((-50, 3000))
plt.ylim((0, 0.02))
# Finish the SQL query to render the histogram of individual contributions
# for 'BERNIE 2016'
query_q9b = """
SELECT transaction_amt
FROM ...
WHERE ...
"""
# Do not change anything below this line
res = %sql $query_q9b
bernie_contributions = res.DataFrame()
print(bernie_contributions.head())
sns.distplot(bernie_contributions)
plt.title('Distribution of Contribution Amounts to Bernie')
plt.xlim((-50, 3000))
plt.ylim((0, 0.02))
_ = ok.grade('q09')
_ = ok.backup()
"""
Explanation: Question 9
Let's now do a bit of EDA. Fill in the SQL statements below to make histograms
of the transaction amounts for both Hillary and Bernie.
Note that we do take your entire result and put it into a dataframe.
This is not scalable. If indiv_sample was large, your computer
would run out of memory trying to store it in a dataframe. The better way to
compute the histogram would be to use SQL to generate bins and count the number
of contributions in each bin using the built-in
width_bucket function.
End of explanation
"""
%%sql
DROP VIEW IF EXISTS hillary CASCADE;
DROP VIEW IF EXISTS bernie CASCADE;
CREATE VIEW hillary AS
SELECT row_number() over () AS row_id, indiv_sample.*
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
AND comm.cmte_nm = 'HILLARY FOR AMERICA';
CREATE VIEW bernie AS
SELECT row_number() over () AS row_id, indiv_sample.*
FROM indiv_sample, comm
WHERE indiv_sample.cmte_id = comm.cmte_id
AND comm.cmte_nm = 'BERNIE 2016';
SELECT * FROM hillary LIMIT 5
"""
Explanation: Question 10
Looks like there is a difference. Let's see if it's statistically significant.
State appropriate null and alternative hypotheses for this problem.
Fill in your answer here.
Constructing a Bootstrap CI
We want to create a bootstrap confidence interval of the proportion of
funds contributed to Hillary Clinton by small donors.
To do this in SQL, we need to number the rows we want to bootstrap.
The following cell creates a view called hillary. Views are like tables.
However, instead of storing the rows in the database, Postgres will recompute
the values in the view each time you query it.
It adds a row_id column to each row in indiv_sample
corresponding to a contribution to Hillary. Note that we identify Hillary's
contributions by joining indiv_sample with comm and matching on the committee name 'HILLARY FOR AMERICA' in the SQL.
We'll do the same for Bernie, creating a view called bernie.
End of explanation
"""
n_hillary_rows = 1524
n_trials = 500
seed = 0.42
query_q11 = """
CREATE VIEW hillary_design AS
SELECT
... AS trial_id,
... AS row_id
FROM ...
"""
# Do not change anything below this line
# Fill in the $ variables set in the above string
import string
query_q11 = string.Template(query_q11).substitute(locals())
%sql drop view if exists hillary_design cascade
%sql SET SEED TO $seed
%sql $query_q11
%sql select * from hillary_design limit 5
_ = ok.grade('q11')
_ = ok.backup()
"""
Explanation: Question 11
Let's construct a view containing the rows we want to sample for each
bootstrap trial. For example, if we want to create 100 bootstrap samples of
3 contributions to Hillary, we want something that looks like:
trial_id | row_id
======== | ======
1 | 1002
1 | 208
1 | 1
2 | 1524
2 | 1410
2 | 1023
3 | 423
3 | 68
3 | 925
... | ...
100 | 10
This will let us later construct a join on the hillary view that computes the
bootstrap sample for each trial by sampling with replacement.
Create a view called hillary_design that contains two columns: trial_id
and row_id. It should contain the IDs corresponding to
500 samples of the entire hillary view. The hillary view contains 1524
rows, so the hillary_design view should have a total of
500 * 1524 = 762000 rows.
Hint: Recall how we generated a matrix of random numbers in class. Start with
that, then start tweaking it until you get the view you want. Our solution uses
the Postgres functions generate_series, floor, and random.
End of explanation
"""
query_q12 = '''
CREATE VIEW hillary_trials as
SELECT
... AS trial_id,
... AS small_donor_sum,
... AS total
FROM ...
WHERE ...
GROUP BY ...
'''
# Do not change anything below this line
%sql drop view if exists hillary_trials cascade
%sql SET SEED TO $seed
%sql $query_q12
%sql select * from hillary_trials limit 5
_ = ok.grade('q12')
_ = ok.backup()
"""
Explanation: Question 12
Construct a view called hillary_trials that uses the hillary
and hillary_design views to compute the total amount contributed
by small donors for each trial as well as the overall amount.
It should have three columns:
trial_id: The number of the trial, from 1 to 500
small_donor_sum: The total contributions from small donors in the trial
total: The total contributions of all donations in the trial
Hint: Our solution uses the CASE WHEN statement inside of a SUM() function
call to compute the small_donor_sum.
End of explanation
"""
query_q13 = '''
CREATE VIEW hillary_props as
SELECT
trial_id,
... AS small_donor_prop
FROM hillary_trials
'''
# Do not change anything below this line
%sql drop view if exists hillary_props cascade
%sql SET SEED TO $seed
%sql $query_q13
%sql select * from hillary_props limit 5
_ = ok.grade('q13')
_ = ok.backup()
"""
Explanation: Question 13
Now, create a view called hillary_props that contains two columns:
trial_id: The number of the trial, from 1 to 500
small_donor_prop: The proportion contributed by small donors for each trial
Hint: Remember that you can append ::float to a column name to convert its
values to float. You'll have to do this to perform division correctly.
End of explanation
"""
n_bernie_rows = 1173
n_trials = 500
create_bernie_design = """
CREATE VIEW bernie_design AS
SELECT
... AS trial_id,
... AS row_id
FROM ...
"""
create_bernie_trials = '''
CREATE VIEW bernie_trials as
SELECT
... AS trial_id,
... AS small_donor_sum,
... AS total
FROM ...
WHERE ...
GROUP BY ...
'''
create_bernie_props = '''
CREATE VIEW bernie_props as
SELECT
trial_id,
... AS small_donor_prop
FROM bernie_trials
'''
# Do not change anything below this line
# Fill in the $ variables set in the above string
import string
create_bernie_design = (string.Template(create_bernie_design)
.substitute(locals()))
%sql drop view if exists bernie_design cascade
%sql $create_bernie_design
%sql drop view if exists bernie_trials cascade
%sql $create_bernie_trials
%sql drop view if exists bernie_props
%sql $create_bernie_props
%sql SET SEED TO $seed
%sql select * from bernie_props limit 5
_ = ok.grade('q14')
_ = ok.backup()
"""
Explanation: Question 14
Now, repeat the process to bootstrap Bernie's proportion of funds
raised by small donors.
You should be able to mostly copy-paste your code for Hillary's bootstrap CI.
End of explanation
"""
res = %sql select * from hillary_props
hillary_trials_df = res.DataFrame()
res = %sql select * from bernie_props
bernie_trials_df = res.DataFrame()
ax = plt.subplot(1,2,1)
sns.distplot(hillary_trials_df['small_donor_prop'], ax=ax)
plt.title('Hillary Bootstrap Prop')
plt.xlim(0.1, 0.9)
plt.ylim(0, 25)
ax = plt.subplot(1,2,2)
sns.distplot(bernie_trials_df['small_donor_prop'], ax=ax)
plt.title('Bernie Bootstrap Prop')
plt.xlim(0.1, 0.9)
plt.ylim(0, 25)
"""
Explanation: Plotting the sample distribution
Run the following cell to make a plot of the distribution of proportions
for both Hillary and Bernie.
Again, this would not be scalable if we took many bootstrap samples.
However, 500 floats is reasonable to fit in memory.
End of explanation
"""
_ = ok.grade_all()
"""
Explanation: Computing the Confidence Interval
Run the following cell to compute confidence intervals based on your
hillary_props and bernie_props views. Think about what the intervals mean.
Question 15
Based on your confidence intervals, should we reject the null?
Are there any other factors that should be taken into consideration when
making this conclusion?
Write your answer here, replacing this text.
Congrats! You finished the homework.
Submitting your assignment
First, run the next cell to run all the tests at once.
End of explanation
"""
# Now, we'll submit to okpy
_ = ok.submit()
"""
Explanation: Then, we'll submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. After you've done that, make sure you've pushed your changes to Github as well!
End of explanation
"""
|
conversationai/unintended-ml-bias-analysis | archive/unintended_ml_bias/fat-star-bias-measurement-tutorial.ipynb | apache-2.0 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import pandas as pd
import numpy as np
import pkg_resources
import matplotlib.pyplot as plt
import seaborn as sns
import time
import scipy.stats as stats
from sklearn import metrics
from keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding
from keras.layers import Input
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop
from keras.models import Model
%matplotlib inline
# autoreload makes it easier to interactively work on code in imported libraries
%load_ext autoreload
%autoreload 2
"""
Explanation: Hands-on Tutorial: Measuring Unintended Bias in Text Classification Models with Real Data
Usage Instructions
This notebook can be run as a Kaggle Kernel with no installation required.
To run this notebook locally, you will need to:
Install all Python dependencies from the requirements.txt file
Download all training, validation, and test files
End of explanation
"""
# These files will be provided to tutorial participants via Google Cloud Storage
train_v1_df = pd.read_csv('../input/fat-star-tutorial-data/public_train_v1.csv')
validate_df = pd.read_csv('../input/fat-star-tutorial-data/public_validate.csv')
test_df = pd.read_csv('../input/fat-star-tutorial-data/public_test.csv')
"""
Explanation: Load and pre-process data sets
End of explanation
"""
train_v1_df[['toxicity', 'male', 'comment_text']].query('male >= 0').head()
"""
Explanation: Let's examine some rows in these datasets. Note that columns like toxicity and male are percent scores.
We query for "male >= 0" to exclude rows where the male identity is not labeled.
End of explanation
"""
# List all identities
identity_columns = [
'male', 'female', 'transgender', 'other_gender', 'heterosexual', 'homosexual_gay_or_lesbian',
'bisexual', 'other_sexual_orientation', 'christian', 'jewish', 'muslim', 'hindu', 'buddhist',
'atheist', 'other_religion', 'black', 'white', 'asian', 'latino', 'other_race_or_ethnicity',
'physical_disability', 'intellectual_or_learning_disability', 'psychiatric_or_mental_illness', 'other_disability']
def convert_to_bool(df, col_name):
df[col_name] = np.where(df[col_name] >= 0.5, True, False)
for df in [train_v1_df, validate_df, test_df]:
for col in ['toxicity'] + identity_columns:
convert_to_bool(df, col)
train_v1_df[['toxicity', 'male', 'comment_text']].head()
"""
Explanation: We will need to convert toxicity and identity columns to booleans, in order to work with our neural net and metrics calculations. For this tutorial, we will consider any value >= 0.5 as True (i.e. a comment should be considered toxic if 50% or more crowd raters labeled it as toxic). Note that this code also converts missing identity fields to False.
End of explanation
"""
MAX_SEQUENCE_LENGTH = 250
MAX_NUM_WORDS = 10000
TOXICITY_COLUMN = 'toxicity'
TEXT_COLUMN = 'comment_text'
EMBEDDINGS_PATH = '../data/glove.6B/glove.6B.100d.txt'
EMBEDDINGS_DIMENSION = 100
DROPOUT_RATE = 0.3
LEARNING_RATE = 0.00005
NUM_EPOCHS = 1 # TODO: increase this
BATCH_SIZE = 128
def pad_text(texts, tokenizer):
return pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_SEQUENCE_LENGTH)
def train_model(train_df, validate_df, tokenizer):
# Prepare data
train_text = pad_text(train_df[TEXT_COLUMN], tokenizer)
train_labels = to_categorical(train_df[TOXICITY_COLUMN])
validate_text = pad_text(validate_df[TEXT_COLUMN], tokenizer)
validate_labels = to_categorical(validate_df[TOXICITY_COLUMN])
# Load embeddings
embeddings_index = {}
with open(EMBEDDINGS_PATH) as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1,
EMBEDDINGS_DIMENSION))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
num_words_in_embedding += 1
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
# Create model layers.
def get_convolutional_neural_net_layers():
"""Returns (input_layer, output_layer)"""
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedding_layer = Embedding(len(tokenizer.word_index) + 1,
EMBEDDINGS_DIMENSION,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
x = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu', padding='same')(x)
x = MaxPooling1D(5, padding='same')(x)
x = Conv1D(128, 5, activation='relu', padding='same')(x)
x = MaxPooling1D(5, padding='same')(x)
x = Conv1D(128, 5, activation='relu', padding='same')(x)
x = MaxPooling1D(40, padding='same')(x)
x = Flatten()(x)
x = Dropout(DROPOUT_RATE)(x)
x = Dense(128, activation='relu')(x)
preds = Dense(2, activation='softmax')(x)
return sequence_input, preds
# Compile model.
input_layer, output_layer = get_convolutional_neural_net_layers()
model = Model(input_layer, output_layer)
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(lr=LEARNING_RATE),
metrics=['acc'])
# Train model.
model.fit(train_text,
train_labels,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_data=(validate_text, validate_labels),
verbose=2)
return model
MODEL_NAME_V1 = 'fat_star_tutorial_v1'
tokenizer_v1 = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer_v1.fit_on_texts(train_v1_df[TEXT_COLUMN])
model_v1 = train_model(train_v1_df, validate_df, tokenizer_v1)
"""
Explanation: Create and Train Models
This code creates and trains a convolutional neural net using the Keras framework. This neural net accepts a text comment, encoded as a sequence of integers, and outputs a probability that the comment is toxic. Don't worry if you do not understand all of this code, as we will be treating this neural net as a black box later in the tutorial.
End of explanation
"""
test_comments_padded = pad_text(test_df[TEXT_COLUMN], tokenizer_v1)
test_df[MODEL_NAME_V1] = model_v1.predict(test_comments_padded)[:, 1]
# Print some records to compare our model resulsts with the correct labels
test_df[[TOXICITY_COLUMN, TEXT_COLUMN, MODEL_NAME_V1]].head(10)
"""
Explanation: Score test set with the new model
Using our new model, we can score the set of test comments for toxicity.
End of explanation
"""
# Get a list of identity columns that have >= 100 True records. This will remove groups such
# as "other_disability" which do not have enough records to calculate meaningful metrics.
identities_with_over_100_records = []
for identity in identity_columns:
num_records = len(test_df.query(identity + '==True'))
if num_records >= 100:
identities_with_over_100_records.append(identity)
def compute_normalized_pinned_auc(df, subgroup, model_name):
subgroup_non_toxic = df[df[subgroup] & ~df[TOXICITY_COLUMN]]
subgroup_toxic = df[df[subgroup] & df[TOXICITY_COLUMN]]
background_non_toxic = df[~df[subgroup] & ~df[TOXICITY_COLUMN]]
background_toxic = df[~df[subgroup] & df[TOXICITY_COLUMN]]
within_subgroup_mwu = normalized_mwu(subgroup_non_toxic, subgroup_toxic, model_name)
cross_negative_mwu = normalized_mwu(subgroup_non_toxic, background_toxic, model_name)
cross_positive_mwu = normalized_mwu(background_non_toxic, subgroup_toxic, model_name)
return np.mean([1 - within_subgroup_mwu, 1 - cross_negative_mwu, 1 - cross_positive_mwu])
def normalized_mwu(data1, data2, model_name):
"""Returns the number of pairs where the datapoint in data1 has a greater score than that from data2."""
scores_1 = data1[model_name]
scores_2 = data2[model_name]
n1 = len(scores_1)
n2 = len(scores_2)
u, _ = stats.mannwhitneyu(scores_1, scores_2, alternative = 'less')
return u/(n1*n2)
def compute_pinned_auc(df, identity, model_name):
# Create combined_df, containing an equal number of comments that refer to the identity, and
# that belong to the background distribution.
identity_df = df[df[identity]]
nonidentity_df = df[~df[identity]].sample(len(identity_df), random_state=25)
combined_df = pd.concat([identity_df, nonidentity_df])
# Calculate the Pinned AUC
true_labels = combined_df[TOXICITY_COLUMN]
predicted_labels = combined_df[model_name]
return metrics.roc_auc_score(true_labels, predicted_labels)
def get_bias_metrics(df, model_name):
bias_metrics_df = pd.DataFrame({
'subgroup': identities_with_over_100_records,
'pinned_auc': [compute_pinned_auc(df, identity, model_name)
for identity in identities_with_over_100_records],
'normalized_pinned_auc': [compute_normalized_pinned_auc(df, identity, model_name)
for identity in identities_with_over_100_records]
})
# Re-order columns and sort bias metrics
return bias_metrics_df[['subgroup', 'pinned_auc', 'normalized_pinned_auc']].sort_values('pinned_auc')
def calculate_overall_auc(df, model_name):
true_labels = df[TOXICITY_COLUMN]
predicted_labels = df[model_name]
return metrics.roc_auc_score(true_labels, predicted_labels)
bias_metrics_df = get_bias_metrics(test_df, MODEL_NAME_V1)
bias_metrics_df
calculate_overall_auc(test_df, MODEL_NAME_V1)
"""
Explanation: Measure bias
Using metrics based on Pinned AUC and the Mann Whitney U test, we can measure our model for biases against different identity groups. We only calculate bias metrics on identities that are referred to in 100 or more comments, to minimize noise.
End of explanation
"""
# Plot toxicity distributions of different identities to visualize bias.
def plot_histogram(identity):
toxic_scores = test_df.query(identity + ' == True & toxicity == True')[MODEL_NAME_V1]
non_toxic_scores = test_df.query(identity + ' == True & toxicity == False')[MODEL_NAME_V1]
sns.distplot(non_toxic_scores, color="skyblue", axlabel=identity)
sns.distplot(toxic_scores, color="red", axlabel=identity)
plt.figure()
for identity in bias_metrics_df['subgroup']:
plot_histogram(identity)
"""
Explanation: We can graph a histogram of comment scores in each identity. In the following graphs, the X axis represents the toxicity score given by our new model, and the Y axis represents the comment count. Blue values are comments whose true label is non-toxic, while red values are those whose true label is toxic.
We can see that for some identities such as Asian, the model scores most non-toxic comments as less than 0.2 and most toxic comments as greater than 0.2. This indicates that for the Asian identity, our model is able to distinguish between toxic and non-toxic comments. However, for the black identity, there are many non-toxic comments with scores over 0.5, along with many toxic comments with scores of less than 0.5. This shows that for the black identity, our model will be less accurate at separating toxic comments from non-toxic comments.
End of explanation
"""
# Load new training data and convert fields to booleans.
train_v2_df = pd.read_csv('../input/fat-star-tutorial-data/public_train_v2.csv')
for col in ['toxicity'] + identity_columns:
convert_to_bool(train_v2_df, col)
# Create a new model using the same structure as our model_v1.
MODEL_NAME_V2 = 'fat_star_tutorial_v2'
tokenizer_v2 = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer_v2.fit_on_texts(train_v2_df[TEXT_COLUMN])
model_v2 = train_model(train_v2_df, validate_df, tokenizer_v2)
test_comments_padded_v2 = pad_text(test_df[TEXT_COLUMN], tokenizer_v2)
test_df[MODEL_NAME_V2] = model_v2.predict(test_comments_padded_v2)[:, 1]
bias_metrics_v2_df = get_bias_metrics(test_df, MODEL_NAME_V2)
bias_metrics_v2_df
"""
Explanation: Retrain model to reduce bias
One possible reason for bias in the model may be that our training data is biased. In our case, our initial training data contained a higher percentage of toxic vs non-toxic comments for the "homosexual_gay_or_lesbian" identity. We have another dataset which contains additional non-toxic comments that refer to the "homosexual_gay_or_lesbian" group. If we train a new model using this data, we should make a small improvement in bias against this category (TODO: verify this).
End of explanation
"""
|
johnbachman/indra | models/indra_statements_demo.ipynb | bsd-2-clause | %pylab inline
import json
from indra.sources import trips
from indra.statements import draw_stmt_graph, stmts_to_json
"""
Explanation: Inspecting INDRA Statements and assembled models
In this example we look at how intermediate results of the assembly process from word models to executable models can be inspected. We first import the necessary modules of INDRA.
End of explanation
"""
text = 'Active ATM phosphorylates itself. Active ATM phosphorylates another ATM molecule.'
tp = trips.process_text(text)
"""
Explanation: Collecting Statements from reading
First, we use the TRIPS system via INDRA's trips module to read two sentences which describe distinct mechanistic hypotheses about ATM phosphorylation.
End of explanation
"""
tp.statements
"""
Explanation: Here tp is a TripsProcessor object whose extracted Statements can be accessed in a list.
Printing Statements as objects
It is possible to look at the string representation of the extracted INDRA Statements as below.
End of explanation
"""
pylab.rcParams['figure.figsize'] = (12, 8)
draw_stmt_graph(tp.statements[1:])
"""
Explanation: The first Statement, obtained by reading "Active ATM phosphorylates itself", represents the Autophosphorylation of ATM with ATM being in an active state. Here activity stands for generic molecular activity and True indicates an active as opposed to an inactive state.
The second Statement, obtained from "Active ATM phosphorylates another ATM molecule" is a Phosphorylation with the enzyme ATM being in an active state phosphorylating another ATM as a substrate.
Drawing Statements as graphs
Next, we can use the draw_stmt_graph function to display the Statements produced by reading and INDRA input processing as a graph. The root of each tree is the type of the Statement, in this case Autophosphorylation. The arguments of the Statement branch off from the root. In this case the enzyme argument of Autophosphorylation is an Agent with name ATM. Its database references can be inspected under the db_refs property.
End of explanation
"""
statements_json = stmts_to_json(tp.statements)
print(json.dumps(statements_json, indent=1))
"""
Explanation: Printing / exchanging Statements as JSON
INDRA Statements can be serialized into JSON format. This is a human-readable and editable form of INDRA Statements which is independent of Python and can therefore be used as a platform-independent data exchange format for Statements. The function stmts_to_json in the indra.statements module takes a list of Statements and returns a JSON as a dictionary. Below we pretty-print this JSON as a string with indentations.
End of explanation
"""
from indra.assemblers.pysb import PysbAssembler
pa = PysbAssembler()
pa.add_statements([tp.statements[0]])
model1 = pa.make_model()
"""
Explanation: Inspecting assembled rule-based models
We now assemble two PySB models, one for each Statement.
End of explanation
"""
model1
"""
Explanation: We can examine the properties of the PySB model object before exporting it. As seen below, the model has a single Monomer and Rule, and two Parameters.
End of explanation
"""
model1.monomers['ATM']
"""
Explanation: We can look at the ATM Monomer and its sites. ATM has an activity site which can be either active or inactive. It also has a phospho site with u and p states.
End of explanation
"""
model1.rules[0]
"""
Explanation: The rule representing ATM autophosphorylation can be inspected below. The rule is parameterized by the forward rate kf_a_autophos_1.
End of explanation
"""
pa = PysbAssembler()
pa.add_statements([tp.statements[1]])
model2 = pa.make_model()
model2
model2.monomers['ATM']
model2.rules[0]
"""
Explanation: We now assemble a model for the second Statement.
End of explanation
"""
model1.annotations
model2.annotations
"""
Explanation: As we see, the rule assembled for this model contains two distinct ATMs on each side, one acting as the kinase and the other as the substrate.
Inspecting assembled model annotations
Finally, models assembled by INDRA carry automatically propagated annotations. Below, the grounding of ATM in the UniProt, HGNC and NCIT databases is annotated; the semantic roles of monomers in each rule are also annotated, and finally, the unique ID of the INDRA Statement that a rule was derived from is annotated.
End of explanation
"""
|
stuser/temp | pneumoniamnist_CNN.ipynb | mit | # import package
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (Input, Dense, Dropout, Activation, GlobalAveragePooling2D,
BatchNormalization, Flatten, Conv2D, MaxPooling2D)
# Requires the gdown package for direct downloads of big files from Google Drive.
#pip install gdown
tf.__version__
# Check available CPU & GPU devices
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
"""
Explanation: <a href="https://colab.research.google.com/github/stuser/temp/blob/master/pneumoniamnist_CNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
# find the share link of the file/folder on Google Drive
file_share_link = "https://drive.google.com/file/d/1nebGwtoKTNegJ-fUYO-NEz0mzC1481hv/view?usp=sharing"
# extract the ID of the file
file_id = "1nebGwtoKTNegJ-fUYO-NEz0mzC1481hv"
# download file name
file_name = 'pneumoniamnist.npz'
!gdown --id "$file_id" --output "$file_name"
!ls -lh
"""
Explanation: MedMNIST
MedMNIST, a collection of 10 pre-processed medical open datasets. MedMNIST is standardized to perform classification tasks on lightweight 28 * 28 images, which requires no background knowledge. Covering the primary data modalities in medical image analysis, it is diverse on data scale (from 100 to 100,000) and tasks (binary/multi-class, ordinal regression and multi-label). MedMNIST could be used for educational purpose, rapid prototyping, multi-modal machine learning or AutoML in medical image analysis. Moreover, MedMNIST Classification Decathlon is designed to benchmark AutoML algorithms on all 10 datasets.
(Authors: Jiancheng Yang, Rui Shi, Bingbing Ni, Bilian Ke, Shanghai Jiao Tong University)
GitHub Pages link
<img src="https://medmnist.github.io/assets/overview.jpg" alt="MedMNIST figure" width="700">
PneumoniaMNIST dataset download (Google Drive): https://drive.google.com/file/d/1nebGwtoKTNegJ-fUYO-NEz0mzC1481hv/view?usp=sharing
PneumoniaMNIST:
A dataset based on a prior dataset of 5,856 pediatric chest X-ray images. The task is binary-class classification of pneumonia and normal. We split the source training set with a ratio of 9:1 into training and validation set, and use its source validation set as the test set. The source images are single-channel, and their sizes range from (384-2,916) x (127-2,713). We center-crop the images and resize them into 1 x 28 x 28.
task: Binary-Class (2)
label:
0: normal, 1: pneumonia
n_channels: 1
n_samples:
train: 4708, val: 524, test: 624
End of explanation
"""
import numpy as np
#load pneumoniamnist dataset
pneumoniamnist = np.load('pneumoniamnist.npz')
type(pneumoniamnist) #include files: train_images, val_images, test_images, train_labels, val_labels, test_labels
pneumoniamnist['train_images'].shape, pneumoniamnist['train_labels'].shape
(x_train, y_train), (x_test, y_test) = (pneumoniamnist['train_images'], pneumoniamnist['train_labels']), (pneumoniamnist['test_images'], pneumoniamnist['test_labels'])
(x_val, y_val) = (pneumoniamnist['val_images'], pneumoniamnist['val_labels'])
print(x_train.shape) # (4708, 28, 28)
print(y_train.shape) # (4708, 1)
print(y_train[40:50]) # class-label
print(x_test.shape) # (624, 28, 28)
print(y_test.shape) # (624, 1)
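# Quick check (an extra aside): how balanced are the two classes in the train/test splits?
print('train label counts:', dict(zip(*np.unique(y_train, return_counts=True))))
print('test label counts:', dict(zip(*np.unique(y_test, return_counts=True))))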
# Convert the datasets to 'float32'
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# rescale value to [0 - 1] from [0 - 255]
x_train /= 255 # rescaling
x_test /= 255 # rescaling
x_val = x_val.astype('float32')/255
# montage
# source: https://github.com/MedMNIST/MedMNIST/blob/main/getting_started.ipynb
from skimage.util import montage
def process(dataset, n_channels, length=20):
scale = length * length
image = np.zeros((scale, 28, 28, 3)) if n_channels == 3 else np.zeros((scale, 28, 28))
index = [i for i in range(scale)]
np.random.shuffle(index)
plt.figure(figsize=(6,6))
for idx in range(scale):
img = dataset[idx]
if n_channels == 3:
img = img.permute(1, 2, 0)
else:
img = img.reshape(28, 28)
image[index[idx]] = img
if n_channels == 1:
image = image.reshape(scale, 28, 28)
arr_out = montage(image)
plt.imshow(arr_out, cmap='gray')
else:
image = image.reshape(scale, 28, 28, 3)
arr_out = montage(image, multichannel=3)
plt.imshow(arr_out)
process( x_train, n_channels=1, length=5)
# visualization
import matplotlib.pylab as plt
sample_num = 99
img = x_train[sample_num].reshape(28, 28)
plt.imshow(img, cmap='gray')
template = "label:{label}"
_ = plt.title(template.format(label= str(y_train[sample_num])))
plt.grid(False)
"""
Explanation: Data exploration
End of explanation
"""
x_train.shape+(1,)
np.expand_dims(x_train, axis=3).shape
x_train = np.expand_dims(x_train, axis=3)
print('x_train shape:',x_train.shape)
x_test = np.expand_dims(x_test, axis=3)
print('x_test shape:',x_test.shape)
x_val = np.expand_dims(x_val, axis=3)
print('x_val shape:',x_val.shape)
# One-hot encode the training and test labels
num_classes = 2
from tensorflow.keras.utils import to_categorical
y_train_onehot = to_categorical(y_train)
y_test_onehot = to_categorical(y_test)
y_val_onehot = to_categorical(y_val)
print('y_train_onehot shape:', y_train_onehot.shape)
print('y_test_onehot shape:', y_test_onehot.shape)
print('y_val_onehot shape:', y_val_onehot.shape)
input = Input(shape=x_train.shape[1:])
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='last_conv_layer')(x)
x = GlobalAveragePooling2D(name='avg_pool')(x)
output = Dense(num_classes, activation='softmax', name='predictions')(x)
model = Model(inputs=[input], outputs=[output])
print(model.summary())
tf.keras.utils.plot_model(
model,
to_file='model_plot_CNN.png',
show_shapes=True,
show_layer_names=True,
rankdir='TB',
expand_nested=True,
dpi=96,
)
"""
Explanation: Build the model (data flow)
End of explanation
"""
# Compile the model
# Use Adam as the optimizer
from tensorflow.keras.optimizers import Adam
batch_size = 256
epochs = 20
init_lr = 0.001
opt = Adam(learning_rate=init_lr)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
cnn_history = model.fit(x_train, y_train_onehot,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_val, y_val_onehot),
verbose=2)
import plotly.graph_objects as go
plt.clf()
fig = go.Figure()
fig.add_trace(go.Scatter( y=cnn_history.history['accuracy'],
name='Train'))
fig.add_trace(go.Scatter( y=cnn_history.history['val_accuracy'],
name='Valid'))
fig.update_layout(height=500,width=700,
title='Training and validation accuracy',
xaxis_title='Epoch',
yaxis_title='Accuracy')
fig.show()
predictions = model.predict(x_test)
print(predictions.shape)
print(predictions[0:5])
print("**********************************************")
plt.hist(predictions)
plt.show()
y_pred = np.argmax(predictions, axis=1)
print(y_pred.shape)
print(y_pred[0:5])
print("**********************************************")
plt.hist(y_pred)
plt.show()
"""
Explanation: Model training
End of explanation
"""
cnn_pred = model.evaluate(x_test, y_test_onehot, verbose=2)
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import itertools
classes = ['normal','pneumonia']
print(classification_report(y_test, y_pred, target_names=classes))
print ("**************************************************************")
cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(5,5))
plt.title('confusion matrix')
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = 'd' #'.2f'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
y_pred[0:10], y_pred.shape
_y_test = y_test.reshape(y_pred.shape)
_y_test[0:10], _y_test.shape
# visualization
import matplotlib.pylab as plt
sample_num = 1
img = x_test[sample_num].reshape(28, 28)
plt.imshow(img, cmap='gray')
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(y_test[sample_num]),
predict= str(y_pred[sample_num])))
plt.grid(False)
"""
Explanation: Model evaluation
End of explanation
"""
# Install the required package first
!pip install tf-keras-vis
%%time
from matplotlib import cm
import matplotlib.pyplot as plt
from tf_keras_vis.gradcam import Gradcam,GradcamPlusPlus
from tensorflow.keras import backend as K
from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils import normalize
def Grad_CAM_savepictures(file_index,model,save_name):
def loss(output):
return (output[0][y_test[file_index][0]])
def model_modifier(m):
m.layers[-1].activation = tf.keras.activations.linear
return m
# Create Gradcam object
gradcam = Gradcam(model,model_modifier=model_modifier,clone=False)
originalimage=x_test[file_index]
originalimage=originalimage.reshape((1,originalimage.shape[0],originalimage.shape[1],1))
# Generate heatmap with GradCAM
cam = gradcam(loss,originalimage,penultimate_layer=-1)
cam = normalize(cam)
#overlap image
plt.figure(figsize=(12,8))
ax1=plt.subplot(1, 3, 1)
heatmap = np.uint8(cm.jet(cam)[..., :3] * 255)
ax1.imshow(x_test[file_index].reshape((x_test.shape[1],x_test.shape[2])),cmap="gray")
ax1.imshow(heatmap.reshape((x_test.shape[1],x_test.shape[2],3)), cmap='jet', alpha=0.4) # overlay
ax1.set_title("Grad-CAM")
gradcam = GradcamPlusPlus(model,model_modifier=model_modifier,clone=False)
cam = gradcam(loss,originalimage,penultimate_layer=-1)
cam = normalize(cam)
ax1=plt.subplot(1, 3, 2)
heatmap = np.uint8(cm.jet(cam)[..., :3] * 255)
ax1.imshow(x_test[file_index].reshape((x_test.shape[1],x_test.shape[2])),cmap="gray")
ax1.imshow(heatmap.reshape((x_test.shape[1],x_test.shape[2],3)), cmap='jet', alpha=0.4) # overlay
ax1.set_title("Grad-CAM++")
plt.savefig(save_name)
plt.show()
file_index = 0
Grad_CAM_savepictures( file_index, model, "Grad-CAM_{}.jpg".format(file_index))
print('saved file - Grad-CAM_{}.jpg'.format(file_index))
file_index = 1
Grad_CAM_savepictures( file_index, model, "Grad-CAM_{}.jpg".format(file_index))
print('saved file - Grad-CAM_{}.jpg'.format(file_index))
file_index = 2
Grad_CAM_savepictures( file_index, model, "Grad-CAM_{}.jpg".format(file_index))
print('saved file - Grad-CAM_{}.jpg'.format(file_index))
file_index = 10
Grad_CAM_savepictures( file_index, model, "Grad-CAM_{}.jpg".format(file_index))
print('saved file - Grad-CAM_{}.jpg'.format(file_index))
"""
Explanation: Model interpretation
tf-keras-vis
tf-keras-vis is a visualization toolkit for debugging tf.keras models in Tensorflow2.0+.
GitHub link
grad-CAM
<img src="https://github.com/keisen/tf-keras-vis/raw/master/examples/images/gradcam_plus_plus.png" alt="gradcam figure" width="700">
End of explanation
"""
|
c-north/hdbscan | notebooks/Benchmarking scalability of clustering implementations.ipynb | bsd-3-clause | import hdbscan
import debacl
import fastcluster
import sklearn.cluster
import scipy.cluster
import sklearn.datasets
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context('poster')
sns.set_palette('Paired', 10)
sns.set_color_codes()
"""
Explanation: Benchmarking Performance and Scaling of Python Clustering Algorithms
There are a host of different clustering algorithms and implementations thereof for Python. The performance and scaling can depend as much on the implementation as the underlying algorithm. Obviously a well written implementation in C or C++ will beat a naive implementation on pure Python, but there is more to it than just that. The internals and data structures used can have a large impact on performance, and can even significantly change asymptotic performance. All of this means that, given some amount of data that you want to cluster, your options as to algorithm and implementation may be significantly constrained. I'm both lazy, and prefer empirical results for this sort of thing, so rather than analyzing the implementations and deriving asymptotic performance numbers for various implementations I'm just going to run everything and see what happens.
To begin with we need to get together all the clustering implementations, along with some plotting libraries so we can see what is going on once we've got data. Obviously this is not an exhaustive collection of clustering implementations, so if I've left off your favourite I apologise, but one has to draw a line somewhere.
The implementations being tested are:
Sklearn (which implements several algorithms):
K-Means clustering
DBSCAN clustering
Agglomerative clustering
Spectral clustering
Affinity Propagation
Scipy (which provides basic algorithms):
K-Means clustering
Agglomerative clustering
Fastcluster (which provides very fast agglomerative clustering in C++)
DeBaCl (Density Based Clustering; similar to a mix of DBSCAN and Agglomerative)
HDBSCAN (A robust hierarchical version of DBSCAN)
Obviously a major factor in performance will be the algorithm itself. Some algorithms are simply slower -- often, but not always, because they are doing more work to provide a better clustering.
End of explanation
"""
def benchmark_algorithm(dataset_sizes, cluster_function, function_args, function_kwds,
dataset_dimension=10, dataset_n_clusters=10, max_time=45, sample_size=2):
# Initialize the result with NaNs so that any unfilled entries
# will be considered NULL when we convert to a pandas dataframe at the end
result = np.nan * np.ones((len(dataset_sizes), sample_size))
for index, size in enumerate(dataset_sizes):
for s in range(sample_size):
# Use sklearns make_blobs to generate a random dataset with specified size
# dimension and number of clusters
data, labels = sklearn.datasets.make_blobs(n_samples=size,
n_features=dataset_dimension,
centers=dataset_n_clusters)
# Start the clustering with a timer
start_time = time.time()
cluster_function(data, *function_args, **function_kwds)
time_taken = time.time() - start_time
# If we are taking more than max_time then abort -- we don't
# want to spend excessive time on slow algorithms
if time_taken > max_time:
result[index, s] = time_taken
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
else:
result[index, s] = time_taken
# Return the result as a dataframe for easier handling with seaborn afterwards
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
"""
Explanation: Now we need some benchmarking code at various dataset sizes. Because some clustering algorithms have performance that can vary quite a lot depending on the exact nature of the dataset we'll also need to run several times on randomly generated datasets of each size so as to get a better idea of the average case performance.
We also need to generalise over algorithms which don't necessarily all have the same API. We can resolve that by taking a clustering function, argument tuple and keywords dictionary to let us do semi-arbitrary calls (fortunately all the algorithms do at least take the dataset to cluster as the first parameter).
Finally some algorithms scale poorly, and I don't want to spend forever doing clustering of random datasets so we'll cap the maximum time an algorithm can use; once it has taken longer than max time we'll just abort there and leave the remaining entries in our datasize by samples matrix unfilled.
In the end this all amounts to a fairly straightforward set of nested loops (over datasizes and number of samples) with calls to sklearn to generate mock data and the clustering function inside a timer. Add in some early abort and we're done.
End of explanation
"""
dataset_sizes = np.hstack([np.arange(1, 6) * 500, np.arange(3,7) * 1000, np.arange(4,17) * 2000])
"""
Explanation: Comparison of all ten implementations
Now we need a range of dataset sizes to test out our algorithm. Since the scaling performance is wildly different over the ten implementations we're going to look at it will be beneficial to have a number of very small dataset sizes, and increasing spacing as we get larger, spanning out to 32000 datapoints to cluster (to begin with). Numpy provides convenient ways to get this done via arange and vector multiplication. We'll start with step sizes of 500, then shift to steps of 1000 past 3000 datapoints, and finally steps of 2000 past 6000 datapoints.
End of explanation
"""
k_means = sklearn.cluster.KMeans(10)
k_means_data = benchmark_algorithm(dataset_sizes, k_means.fit, (), {})
dbscan = sklearn.cluster.DBSCAN()
dbscan_data = benchmark_algorithm(dataset_sizes, dbscan.fit, (), {})
scipy_k_means_data = benchmark_algorithm(dataset_sizes, scipy.cluster.vq.kmeans, (10,), {})
scipy_single_data = benchmark_algorithm(dataset_sizes, scipy.cluster.hierarchy.single, (), {})
fastclust_data = benchmark_algorithm(dataset_sizes, fastcluster.single, (), {})
hdbscan_ = hdbscan.HDBSCAN()
hdbscan_data = benchmark_algorithm(dataset_sizes, hdbscan_.fit, (), {})
debacl_data = benchmark_algorithm(dataset_sizes, debacl.geom_tree.geomTree, (5, 5), {'verbose':False})
agglomerative = sklearn.cluster.AgglomerativeClustering(10)
agg_data = benchmark_algorithm(dataset_sizes, agglomerative.fit, (), {}, sample_size=4)
spectral = sklearn.cluster.SpectralClustering(10)
spectral_data = benchmark_algorithm(dataset_sizes, spectral.fit, (), {}, sample_size=6)
affinity_prop = sklearn.cluster.AffinityPropagation()
ap_data = benchmark_algorithm(dataset_sizes, affinity_prop.fit, (), {}, sample_size=3)
"""
Explanation: Now it is just a matter of running all the clustering algorithms via our benchmark function to collect up all the requisite data. This could be prettier, rolled up into functions appropriately, but sometimes brute force is good enough. More importantly (for me) since this can take a significant amount of compute time, I wanted to be able to easily comment out algorithms that were slow or that I was uninterested in. Which brings me to a warning for you the reader and potential user of the notebook: this next step is very expensive. We are running ten different clustering algorithms multiple times each on twenty two different dataset sizes -- and some of the clustering algorithms are slow (we are capping out at forty five seconds per run). That means that the next cell can take an hour or more to run. That doesn't mean "Don't try this at home" (I actually encourage you to try this out yourself and play with dataset parameters and clustering parameters) but it does mean you should be patient if you're going to!
End of explanation
"""
sns.regplot(x='x', y='y', data=k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_single_data, order=2, label='Scipy Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=hdbscan_data, order=2, label='HDBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=debacl_data, order=2, label='DeBaCl Geom Tree', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=spectral_data, order=2, label='Sklearn Spectral', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=agg_data, order=2, label='Sklearn Agglomerative', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=ap_data, order=2, label='Sklearn Affinity Propagation', x_estimator=np.mean)
plt.gca().axis([0, 34000, 0, 120])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Clustering Implementations')
plt.legend()
"""
Explanation: Now we need to plot the results so we can see what is going on. The catch is that we have several datapoints for each dataset size and ultimately we would like to try and fit a curve through all of it to get the general scaling trend. Fortunately seaborn comes to the rescue here by providing regplot which plots a regression through a dataset, supports higher order regression (we should probably use order two as most algorithms are effectively quadratic) and handles multiple datapoints for each x-value cleanly (using the x_estimator keyword to put a point at the mean and draw an error bar to cover the range of data).
End of explanation
"""
large_dataset_sizes = np.arange(1,16) * 4000
hdbscan_ = hdbscan.HDBSCAN()
large_hdbscan_data = benchmark_algorithm(large_dataset_sizes,
hdbscan_.fit, (), {}, max_time=90, sample_size=1)
k_means = sklearn.cluster.KMeans(10)
large_k_means_data = benchmark_algorithm(large_dataset_sizes,
k_means.fit, (), {}, max_time=90, sample_size=1)
dbscan = sklearn.cluster.DBSCAN()
large_dbscan_data = benchmark_algorithm(large_dataset_sizes,
dbscan.fit, (), {}, max_time=90, sample_size=1)
large_fastclust_data = benchmark_algorithm(large_dataset_sizes,
fastcluster.single, (), {}, max_time=90, sample_size=1)
large_scipy_k_means_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {}, max_time=90, sample_size=1)
large_scipy_single_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.hierarchy.single, (), {}, max_time=90, sample_size=1)
"""
Explanation: A few features stand out. First of all there appear to be essentially two classes of implementation, with DeBaCl being an odd case that falls in the middle. The fast implementations tend to be implementations of single linkage agglomerative clustering, K-means, and DBSCAN. The slow cases are largely from sklearn and include agglomerative clustering (in this case using Ward instead of single linkage).
For practical purposes this means that if you have much more than 10000 datapoints your clustering options are significantly constrained: sklearn spectral, agglomerative and affinity propagation are going to take far too long. DeBaCl may still be an option, but given that the hdbscan library provides "robust single linkage clustering" equivalent to what DeBaCl is doing (and with effectively the same runtime as hdbscan as it is a subset of that algorithm) it is probably not the best choice for large dataset sizes.
So let's drop out those slow algorithms so we can scale out a little further and get a closer look at the various algorithms that managed 32000 points in under thirty seconds. There is almost undoubtedly more to learn as we get ever larger dataset sizes.
Comparison of fast implementations
Let's compare the six fastest implementations now. We can scale out a little further as well; based on the curves above it looks like we should be able to comfortably get to 60000 data points without taking much more than a minute per run. We can also note that most of these implementations weren't that noisy so we can get away with a single run per dataset size.
End of explanation
"""
sns.regplot(x='x', y='y', data=large_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data, order=2, label='Scipy Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_data, order=2, label='HDBSCAN', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
"""
Explanation: Again we can use seaborn to do curve fitting and plotting, exactly as before.
End of explanation
"""
large_fastclust_data.tail(10)
large_scipy_single_data.tail(10)
"""
Explanation: Clearly something has gone woefully wrong with the curve fitting for the two single linkage algorithms, but what exactly? If we look at the raw data we can see.
End of explanation
"""
size_of_array = 44000 * (44000 - 1) / 2 # from pdist documentation
bytes_in_array = size_of_array * 8 # Since doubles use 8 bytes
gigabytes_used = bytes_in_array / (1024.0 ** 3) # divide out to get the number of GB
gigabytes_used
"""
Explanation: It seems that at around 44000 points we hit a wall and the runtimes spiked. A hint is that I'm running this on a laptop with 8GB of RAM. Both single linkage algorithms use scipy.spatial.pdist to compute pairwise distances between points, which returns an array of shape (n(n-1)/2, 1) of doubles. A quick computation shows that that array of distances is quite large once we have 44000 points:
End of explanation
"""
sns.regplot(x='x', y='y', data=large_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data[:10], order=2, label='Scipy Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data[:10], order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_data, order=2, label='HDBSCAN', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
"""
Explanation: If we assume that my laptop is keeping much other than that distance array in RAM then clearly we are going to spend time paging out the distance array to disk and back and hence we will see the runtimes increase dramatically as we become disk IO bound. If we just leave off the last element we can get a better idea of the curves, but keep in mind that both single linkage algorithms do not scale past a limit set by your available RAM.
End of explanation
"""
huge_dataset_sizes = np.arange(1,19) * 10000
k_means = sklearn.cluster.KMeans(10)
huge_k_means_data = benchmark_algorithm(huge_dataset_sizes,
k_means.fit, (), {}, max_time=120, sample_size=5)
dbscan = sklearn.cluster.DBSCAN()
huge_dbscan_data = benchmark_algorithm(huge_dataset_sizes,
dbscan.fit, (), {}, max_time=120, sample_size=5)
huge_scipy_k_means_data = benchmark_algorithm(huge_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {}, max_time=120, sample_size=5)
"""
Explanation: Now it becomes clear that there were really three classes, not two: the K-Means and DBSCAN implementations are all packed along the very bottom while HDBSCAN and the single linkage implementations begin to consume more and more time for larger datasets.
In practice this is going to mean that for larger datasets you are going to be very constrained in what algorithms you can apply: if you get enough datapoints only K-Means and DBSCAN will be left. This is somewhat disappointing, particularly as K-Means is not a particularly good clustering algorithm, especially for exploratory data analysis. If you're willing to go for coffee while waiting for your run, however, then HDBSCAN will be able to handle 100,000 datapoints or so; if you're willing to wait over lunch you can go higher again.
With this in mind it is worth looking at how the K-Means and DBSCAN implementations perform. If we restrict to just those algorithms we can scale out to even larger dataset sizes again, and start to see how those different algorithms and implementations separate.
Comparison of K-Means and DBSCAN implementations
At this point we can scale out to 100000 datapoints easily enough: K-Means and DBSCAN can use various data structures to avoid having to compute the full pairwise distance matrix and are thus not as memory constrained as some of the other algorithms we looked at.
End of explanation
"""
sns.regplot(x='x', y='y', data=huge_k_means_data, order=1, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_dbscan_data, order=1, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_scipy_k_means_data, order=1, label='Scipy K-Means', x_estimator=np.mean)
plt.gca().axis([0, 190000, 0, 20])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of K-Means and DBSCAN')
plt.legend()
"""
Explanation: This time around we'll use a linear rather than quadratic fit (feel free to try the quadratic fit for yourself of course). Why is that? Because of the 'secret sauce' that keeps these implementations runtimes so low: using appropriate data structures such as kd-trees these algorithms have $O(n\log n)$ asymptotics, so while they don't actually scale linearly, a linear fit is better than a quadratic one.
End of explanation
"""
import statsmodels.formula.api as sm
time_samples = [1000, 2000, 5000, 10000, 25000, 50000, 75000, 100000, 250000, 500000, 750000,
1000000, 5000000, 10000000, 50000000, 100000000, 500000000, 1000000000]
def get_timing_series(data, quadratic=True):
if quadratic:
data['x_squared'] = data.x**2
model = sm.ols('y ~ x + x_squared', data=data).fit()
predictions = [model.params.dot([1.0, i, i**2]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
else: # assume n log(n)
data['xlogx'] = data.x * np.log(data.x)
model = sm.ols('y ~ x + xlogx', data=data).fit()
predictions = [model.params.dot([1.0, i, i*np.log(i)]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
"""
Explanation: Now the differences become clear, and it demonstrates how much of a difference implementation can make: the sklearn implementation of K-Means is far better than the scipy implementation. The DBSCAN implementation falls somewhere in between. Since DBSCAN clustering is a lot better than K-Means (unless you have good reasons to assume that the clusters partition your data and are all drawn from Gaussian distributions) and the scaling is still very good I would suggest that unless you have a truly stupendous amount of data you wish to cluster then the sklearn DBSCAN implementation is a good choice.
But should I get a coffee?
So we know which implementations scale and which don't; a more useful thing to know in practice is, given a dataset, what can I run interactively? What can I run while I go and grab some coffee? How about a run over lunch? What if I'm willing to wait until I get in tomorrow morning? Each of these represents a significant break in productivity -- once you aren't working interactively anymore your productivity drops measurably, and so on.
We can build a table for this. To start we'll need to be able to approximate how long a given clustering implementation will take to run. Fortunately we already gathered a lot of that data; if we load up the statsmodels package we can fit the data (with a quadratic or $n\log n$ fit depending on the implementation) and use the resulting model to make our predictions. Obviously this has some caveats: if you fill your RAM with a distance matrix your runtime isn't going to fit the curve.
I've hand built a time_samples list to give a reasonable set of potential data sizes that are nice and human readable. After that we just need a function to fit and build the curves.
End of explanation
"""
ap_timings = get_timing_series(ap_data)
spectral_timings = get_timing_series(spectral_data)
agg_timings = get_timing_series(agg_data)
debacl_timings = get_timing_series(debacl_data)
fastclust_timings = get_timing_series(large_fastclust_data.ix[:10,:].copy())
scipy_single_timings = get_timing_series(large_scipy_single_data.ix[:10,:].copy())
hdbscan_timings = get_timing_series(large_hdbscan_data)
#scipy_k_means_timings = get_timing_series(huge_scipy_k_means_data, quadratic=False)
dbscan_timings = get_timing_series(huge_dbscan_data, quadratic=False)
k_means_timings = get_timing_series(huge_k_means_data, quadratic=False)
timing_data = pd.concat([ap_timings, spectral_timings, agg_timings, debacl_timings,
fastclust_timings, scipy_single_timings, hdbscan_timings,
dbscan_timings, k_means_timings
], axis=1)
timing_data.columns=['AffinityPropagation', 'Spectral', 'Agglomerative',
'DeBaCl', 'Fastcluster', 'ScipySingleLinkage',
'HDBSCAN', 'DBSCAN', 'KMeans'
]
def get_size(series, max_time):
return series.index[series < max_time].max()
datasize_table = pd.concat([
timing_data.apply(get_size, max_time=30),
timing_data.apply(get_size, max_time=300),
timing_data.apply(get_size, max_time=3600),
timing_data.apply(get_size, max_time=8*3600)
], axis=1)
datasize_table.columns=('Interactive', 'Get Coffee', 'Over Lunch', 'Overnight')
datasize_table
"""
Explanation: Now we run that for each of our pre-existing datasets to extrapolate out predicted performance on the relevant dataset sizes. A little pandas wrangling later and we've produced a table of roughly how large a dataset you can tackle in each time frame with each implementation.
End of explanation
"""
xgrg/alfa | notebooks/Ages distributions.ipynb | mit | %matplotlib inline
import pandas as pd
from scipy import stats
from matplotlib import pyplot as plt
data = pd.read_excel('/home/grg/spm/data/covariates.xls')
for i in xrange(5):
x = data[data['apo'] == i]['age'].values
plt.hist(x, bins=20)
print i, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x))
plt.legend(['apoe23', 'apoe24', 'apoe33', 'apoe34', 'apoe44'])
plt.show()
"""
Explanation: Plotting age distributions with respect to genotype groups
End of explanation
"""
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors
def get_matching_pairs(treated_df, non_treated_df, scaler=True):
treated_x = treated_df.values
non_treated_x = non_treated_df.values
if scaler:
scaler = StandardScaler()
scaler.fit(treated_x)
treated_x = scaler.transform(treated_x)
non_treated_x = scaler.transform(non_treated_x)
nbrs = NearestNeighbors(n_neighbors=1, algorithm='ball_tree').fit(non_treated_x)
distances, indices = nbrs.kneighbors(treated_x)
indices = indices.reshape(indices.shape[0])
    # select the matched rows by position; the original .ix/.irow pair is
    # ambiguous on a subject-indexed frame and deprecated in modern pandas
    matched = non_treated_df.iloc[indices]
return matched
"""
Explanation: For two of the 5 groups, the Shapiro test p-value is lower than 1e-3, which means that the distributions of these two groups can't be considered as normal. (But theoretically none of them is)
Matching pairs using nearest neighbours
The matching algorithm:
End of explanation
"""
df = pd.read_excel('/home/grg/spm/data/covariates.xls')
df = df[['subject','apo','age','gender','educyears']]
groups = [df[df['apo']==i] for i in xrange(5)]
for i in xrange(5):
groups[i] = groups[i].set_index(groups[i]['subject'])
del groups[i]['subject']
del groups[i]['apo']
"""
Explanation: Loading data
End of explanation
"""
treated_df = groups[4]
matched_df = [get_matching_pairs(treated_df, groups[i], scaler=False) for i in xrange(4)]
"""
Explanation: Matching the groups
End of explanation
"""
fig, ax = plt.subplots(figsize=(6,6))
for i in xrange(4):
x = matched_df[i]['age']
plt.hist(x, bins=20)
print i, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x))
x = treated_df['age']
plt.hist(x, bins=20)
print 4, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x))
plt.legend(['apoe23', 'apoe24', 'apoe33', 'apoe34', 'apoe44'])
"""
Explanation: Plotting data and see that the groups are now matching
End of explanation
"""
import pandas as pd
df = pd.read_excel('/home/grg/spm/data/covariates.xls')
df = df[['subject','apo','age','gender','educyears']]
groups = [df[df['apo']==i] for i in xrange(5)]
for i in xrange(5):
groups[i] = groups[i].set_index(groups[i]['subject'])
del groups[i]['subject']
del groups[i]['apo']
groups = [df[df['apo']==i] for i in xrange(5)]
for i in xrange(5):
groups[i] = groups[i].set_index(groups[i]['subject'])
del groups[i]['apo']
del groups[i]['subject']
treated_df = groups[4]
non_treated_df = groups[0]
from scipy.spatial.distance import cdist
from scipy import optimize
def get_matching_pairs(treated_df, non_treated_df):
cost_matrix = cdist(treated_df.values, non_treated_df.values)
row_ind, col_ind = optimize.linear_sum_assignment(cost_matrix)
return non_treated_df.iloc[col_ind]
treated_df = groups[4]
matched_df = [get_matching_pairs(treated_df, groups[i]) for i in xrange(4)]
"""
Explanation: Matching groups using linear assignment method
End of explanation
"""
fig, ax = plt.subplots(figsize=(6,6))
for i in xrange(4):
x = matched_df[i]['age']
plt.hist(x, bins=20)
print i, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x))
x = treated_df['age']
plt.hist(x, bins=20)
print 4, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x))
plt.legend(['apoe23', 'apoe24', 'apoe33', 'apoe34', 'apoe44'])
import json
groups_index = [each.index.tolist() for each in matched_df]
groups_index.append(groups[4].index.tolist())
json.dump(groups_index, open('/tmp/groups.json','w'))
"""
Explanation: Plotting data and see that the groups are now matching
End of explanation
"""
from scipy.stats import ttest_ind
for i in xrange(4):
print '=== Group %s ==='%i
tval_bef, pval_bef = ttest_ind(groups[i].values, treated_df.values)
tval_aft, pval_aft = ttest_ind(matched_df[i].values, treated_df.values)
print 'p-values before matching: %s - p-values after matching: %s'%(pval_bef, pval_aft)
df = pd.read_excel('/home/grg/spm/data/covariates.xls')
list(df[df['apo']!=1]['subject'].values)
"""
Explanation: Assessing the effect from the matching
We perform a two-sample t-test between each group and the target group, before and after applying the matching.
As the dataset is composed of 3 variables (age, gender, education), this returns 3 t values and 3 p-values for each comparison.
End of explanation
"""
newlawrence/poliastro | docs/source/examples/Analyzing the Parker Solar Probe flybys.ipynb | mit | from astropy import units as u
T_ref = 150 * u.day
T_ref
from poliastro.bodies import Earth, Sun, Venus
k = Sun.k
k
import numpy as np
"""
Explanation: Analyzing the Parker Solar Probe flybys
1. Modulus of the exit velocity, some features of Orbit #2
First, using the data available in the reports, we try to compute some of the properties of orbit #2. This is not enough to completely define the trajectory, but will give us information later on in the process.
End of explanation
"""
a_ref = np.cbrt(k * T_ref**2 / (4 * np.pi**2)).to(u.km)
a_ref.to(u.au)
"""
Explanation: $$ T = 2 \pi \sqrt{\frac{a^3}{\mu}} \Rightarrow a = \sqrt[3]{\frac{\mu T^2}{4 \pi^2}}$$
End of explanation
"""
energy_ref = (-k / (2 * a_ref)).to(u.J / u.kg)
energy_ref
from poliastro.twobody import Orbit
from poliastro.util import norm
from astropy.time import Time
flyby_1_time = Time("2018-09-28", scale="tdb")
flyby_1_time
r_mag_ref = norm(Orbit.from_body_ephem(Venus, epoch=flyby_1_time).r)
r_mag_ref.to(u.au)
v_mag_ref = np.sqrt(2 * k / r_mag_ref - k / a_ref)
v_mag_ref.to(u.km / u.s)
"""
Explanation: $$ \varepsilon = -\frac{\mu}{r} + \frac{v^2}{2} = -\frac{\mu}{2a} \Rightarrow v = +\sqrt{\frac{2\mu}{r} - \frac{\mu}{a}}$$
End of explanation
"""
d_launch = Time("2018-08-11", scale="tdb")
d_launch
ss0 = Orbit.from_body_ephem(Earth, d_launch)
ss1 = Orbit.from_body_ephem(Venus, epoch=flyby_1_time)
tof = flyby_1_time - d_launch
from poliastro import iod
(v0, v1_pre), = iod.lambert(Sun.k, ss0.r, ss1.r, tof.to(u.s))
v0
v1_pre
norm(v1_pre)
"""
Explanation: 2. Lambert arc between #0 and #1
To compute the arrival velocity to Venus at flyby #1, we have the necessary data to solve the boundary value problem.
End of explanation
"""
from poliastro.threebody.flybys import compute_flyby
V = Orbit.from_body_ephem(Venus, epoch=flyby_1_time).v
V
h = 2548 * u.km
d_flyby_1 = Venus.R + h
d_flyby_1.to(u.km)
V_2_v_, delta_ = compute_flyby(v1_pre, V, Venus.k, d_flyby_1)
norm(V_2_v_)
"""
Explanation: 3. Flyby #1 around Venus
We compute a flyby using poliastro with the default value of the entry angle, just to discover that the results do not match what we expected.
End of explanation
"""
def func(theta):
V_2_v, _ = compute_flyby(v1_pre, V, Venus.k, d_flyby_1, theta * u.rad)
ss_1 = Orbit.from_vectors(Sun, ss1.r, V_2_v, epoch=flyby_1_time)
return (ss_1.period - T_ref).to(u.day).value
"""
Explanation: 4. Optimization
Now we will try to find the value of $\theta$ that satisfies our requirements.
End of explanation
"""
import matplotlib.pyplot as plt
theta_range = np.linspace(0, 2 * np.pi)
plt.plot(theta_range, [func(theta) for theta in theta_range])
plt.axhline(0, color='k', linestyle="dashed")
func(0)
func(1)
from scipy.optimize import brentq
theta_opt_a = brentq(func, 0, 1) * u.rad
theta_opt_a.to(u.deg)
theta_opt_b = brentq(func, 4, 5) * u.rad
theta_opt_b.to(u.deg)
V_2_v_a, delta_a = compute_flyby(v1_pre, V, Venus.k, d_flyby_1, theta_opt_a)
V_2_v_b, delta_b = compute_flyby(v1_pre, V, Venus.k, d_flyby_1, theta_opt_b)
norm(V_2_v_a)
norm(V_2_v_b)
"""
Explanation: There are two solutions:
End of explanation
"""
ss01 = Orbit.from_vectors(Sun, ss1.r, v1_pre, epoch=flyby_1_time)
ss01
"""
Explanation: 5. Exit orbit
And finally, we compute orbit #2 and check that the period is the expected one.
End of explanation
"""
ss_1_a = Orbit.from_vectors(Sun, ss1.r, V_2_v_a, epoch=flyby_1_time)
ss_1_a
ss_1_b = Orbit.from_vectors(Sun, ss1.r, V_2_v_b, epoch=flyby_1_time)
ss_1_b
"""
Explanation: The two solutions have different inclinations, so we still have to find out which is the correct one. We can do this by computing the inclination over the ecliptic - however, as the original data was in the International Celestial Reference Frame (ICRF), whose fundamental plane is parallel to the Earth equator of a reference epoch, we have to change the plane to the Earth ecliptic, which is what the original reports use.
End of explanation
"""
from astropy.coordinates import CartesianRepresentation
from poliastro.frames import Planes, get_frame
def change_plane(ss_orig, plane):
"""Changes the plane of the Orbit.
"""
ss_orig_rv = ss_orig.frame.realize_frame(
ss_orig.represent_as(CartesianRepresentation)
)
dest_frame = get_frame(ss_orig.attractor, plane, obstime=ss_orig.epoch)
ss_dest_rv = ss_orig_rv.transform_to(dest_frame)
ss_dest_rv.representation_type = CartesianRepresentation
ss_dest = Orbit.from_vectors(
ss_orig.attractor,
r=ss_dest_rv.data.xyz,
v=ss_dest_rv.data.differentials['s'].d_xyz,
epoch=ss_orig.epoch,
plane=plane,
)
return ss_dest
change_plane(ss_1_a, Planes.EARTH_ECLIPTIC)
change_plane(ss_1_b, Planes.EARTH_ECLIPTIC)
"""
Explanation: Let's define a function to do that quickly for us, using the get_frame function from poliastro.frames:
End of explanation
"""
ss_1_a.period.to(u.day)
ss_1_a.a
"""
Explanation: Therefore, the correct option is the first one.
End of explanation
"""
from poliastro.plotting import OrbitPlotter
frame = OrbitPlotter()
frame.plot(ss0, label=Earth)
frame.plot(ss1, label=Venus)
frame.plot(ss01, label="#0 to #1")
frame.plot(ss_1_a, label="#1 to #2");
"""
Explanation: And, finally, we plot the solution:
End of explanation
"""
fastai/course-v3 | nbs/dl1/lesson4-collab.ipynb | apache-2.0 | user,item,title = 'userId','movieId','title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
"""
Explanation: Collaborative filtering example
collab models use data in a DataFrame of user, items, and ratings.
End of explanation
"""
data = CollabDataBunch.from_df(ratings, seed=42)
y_range = [0,5.5]
learn = collab_learner(data, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
"""
Explanation: That's all we need to create and train a model:
End of explanation
"""
path=Config.data_path()/'ml-100k'
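# The cells below assume the full MovieLens 100k files already sit under
# Config.data_path()/'ml-100k'. A minimal sketch (standard library only) for
# fetching them from the GroupLens download URL if they are not there yet:
import urllib.request, zipfile
if not path.exists():
    Config.data_path().mkdir(parents=True, exist_ok=True)
    zip_path = Config.data_path()/'ml-100k.zip'
    urllib.request.urlretrieve('http://files.grouplens.org/datasets/movielens/ml-100k.zip', str(zip_path))
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(Config.data_path())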
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
names=[user,item,'rating','timestamp'])
ratings.head()
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
data = CollabDataBunch.from_df(rating_movie, seed=42, valid_pct=0.1, item_name=title)
data.show_batch()
y_range = [0,5.5]
learn = collab_learner(data, n_factors=40, y_range=y_range, wd=1e-1)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(5, 5e-3)
learn.save('dotprod')
"""
Explanation: Movielens 100k
Let's try with the full MovieLens 100k dataset, available from http://files.grouplens.org/datasets/movielens/ml-100k.zip
End of explanation
"""
learn.load('dotprod');
learn.model
g = rating_movie.groupby(title)['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
"""
Explanation: Here are some benchmarks on the same dataset for the popular Librec system for collaborative filtering. They show best results based on an RMSE of 0.91, which corresponds to an MSE of 0.91**2 = 0.83.
Interpretation
Setup
End of explanation
"""
movie_bias = learn.bias(top_movies, is_item=True)
movie_bias.shape
mean_ratings = rating_movie.groupby(title)['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
"""
Explanation: Movie bias
End of explanation
"""
movie_w = learn.weight(top_movies, is_item=True)
movie_w.shape
movie_pca = movie_w.pca(3)
movie_pca.shape
fac0,fac1,fac2 = movie_pca.t()
movie_comp = [(f, i) for f,i in zip(fac0, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
movie_comp = [(f, i) for f,i in zip(fac1, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
idxs = np.random.choice(len(top_movies), 50, replace=False)
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
"""
Explanation: Movie weights
End of explanation
"""
julienchastang/unidata-python-workshop | notebooks/Primer/Numpy and Matplotlib Basics.ipynb | mit | # Convention for import to get shortened namespace
import numpy as np
# Create a simple array from a list of integers
a = np.array([1, 2, 3])
a
# See how many dimensions the array has
a.ndim
# Print out the shape attribute
a.shape
# Print out the data type attribute
a.dtype
# This time use a nested list of floats
a = np.array([[1., 2., 3., 4., 5.]])
a
# See how many dimensions the array has
a.ndim
# Print out the shape attribute
a.shape
# Print out the data type attribute
a.dtype
"""
Explanation: <div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Primer</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="http://www.contribute.geeksforgeeks.org/wp-content/uploads/numpy-logo1.jpg" alt="NumPy Logo" style="height: 250px;"></div>
Overview:
Teaching: 20 minutes
Exercises: 10 minutes
Questions
What are arrays?
How can arrays be manipulated effectively in Python?
Objectives
Create an array of ‘data’.
Perform basic calculations on this data using python math functions.
Slice and index the array
NumPy is the fundamental package for scientific computing with Python. It contains among other things:
- a powerful N-dimensional array object
- sophisticated (broadcasting) functions
- useful linear algebra, Fourier transform, and random number capabilities
The NumPy array object is the common interface for working with typed arrays of data across a wide-variety of scientific Python packages. NumPy also features a C-API, which enables interfacing existing Fortran/C/C++ libraries with Python and NumPy.
Create an array of 'data'
The NumPy array represents a contiguous block of memory, holding entries of a given type (and hence fixed size). The entries are laid out in memory according to the shape, or list of dimension sizes.
End of explanation
"""
a = np.arange(5)
print(a)
a = np.arange(3, 11)
print(a)
a = np.arange(1, 10, 2)
print(a)
"""
Explanation: NumPy also provides helper functions for generating arrays of data to save you typing for regularly spaced data.
arange(start, stop, step) creates a range of values in the half-open interval [start,stop), advancing by the given step.
linspace(start, stop, num) creates a range of num evenly spaced values over the range [start,stop].
arange
End of explanation
"""
b = np.linspace(5, 15, 5)
print(b)
b = np.linspace(2.5, 10.25, 11)
print(b)
"""
Explanation: linspace
End of explanation
"""
a = range(5, 10)
b = [3 + i * 1.5/4 for i in range(5)]
result = []
for x, y in zip(a, b):
result.append(x + y)
print(result)
"""
Explanation: Perform basic calculations with Python
Basic math
In core Python, that is without NumPy, creating sequences of values and adding them together requires writing a lot of manual loops, just like one would do in C/C++:
End of explanation
"""
a = np.arange(5, 10)
b = np.linspace(3, 4.5, 5)
a + b
"""
Explanation: That is very verbose and not very intuitive. Using NumPy this becomes:
End of explanation
"""
a * b
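# The same element-by-element behavior holds for the other arithmetic
# operators (and the operands must still have matching shapes)
a - b
a / b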
"""
Explanation: The four major mathematical operations operate in the same way. They perform an element-by-element calculation of the two arrays. The two must be the same shape though!
End of explanation
"""
np.pi
np.e
# This makes working with radians effortless!
t = np.arange(0, 2 * np.pi + np.pi / 4, np.pi / 4)
t
"""
Explanation: Constants
NumPy provides us access to some useful constants as well - remember you should never be typing these in manually! Other libraries such as SciPy and MetPy have their own set of constants that are more domain specific.
End of explanation
"""
# Calculate the sine function
sin_t = np.sin(t)
print(sin_t)
# Round to three decimal places
print(np.round(sin_t, 3))
# Calculate the cosine function
cos_t = np.cos(t)
print(cos_t)
# Convert radians to degrees
degrees = np.rad2deg(t)
print(degrees)
# Integrate the sine function with the trapezoidal rule
sine_integral = np.trapz(sin_t, t)
print(np.round(sine_integral, 3))
# Sum the values of the cosine
cos_sum = np.sum(cos_t)
print(cos_sum)
# Calculate the cumulative sum of the cosine
cos_csum = np.cumsum(cos_t)
print(cos_csum)
"""
Explanation: Array math functions
NumPy also has math functions that operate on whole arrays. Like the arithmetic operators above, they work element-by-element and greatly simplify and speed up calculations. Be sure to check out the listing of mathematical functions in the NumPy documentation.
End of explanation
"""
# Convention for import to get shortened namespace
import numpy as np
# Create an array for testing
a = np.arange(12).reshape(3, 4)
a
"""
Explanation: Index and slice arrays
Indexing is how we pull individual data items out of an array. Slicing extends this process to pulling out a regular set of the items.
End of explanation
"""
a[1, 2]
"""
Explanation: Indexing in Python is 0-based, so the command below looks for the 2nd item along the first dimension (row) and the 3rd along the second dimension (column).
End of explanation
"""
a[2]
"""
Explanation: Can also just index on one dimension
End of explanation
"""
a[0, -1]
"""
Explanation: Negative indices are also allowed, which permit indexing relative to the end of the array.
End of explanation
"""
# Get the 2nd and 3rd rows
a[1:3]
# All rows and 3rd column
a[:, 2]
# ... can be used to replace one or more full slices
a[..., 2]
# Slice every other row
a[::2]
# Slice out every other column
a[:, ::2]
# Slice every other item along each dimension -- how would we do this
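# One possible answer to the question above: use a step of 2 along both dimensions
a[::2, ::2]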
"""
Explanation: Slicing syntax is written as start:stop[:step], where all numbers are optional.
- defaults:
- start = 0
- end = len(dim)
- step = 1
- The second colon is also optional if no step is used.
It should be noted that end represents one past the last item; one can also think of it as a half open interval: [start, end)
End of explanation
"""
%matplotlib inline
"""
Explanation: Plotting with Matplotlib
Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
The first step is to set up our notebook environment so that matplotlib plots appear inline as images:
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: Next we import the matplotlib library's pyplot interface. This is a MATLAB-like interface that makes generating plots relatively simple. To shorten this long name, we import it as plt to keep things short but clear.
End of explanation
"""
times = np.array([ 93., 96., 99., 102., 105., 108., 111., 114., 117.,
120., 123., 126., 129., 132., 135., 138., 141., 144.,
147., 150., 153., 156., 159., 162.])
temps = np.array([310.7, 308.0, 296.4, 289.5, 288.5, 287.1, 301.1, 308.3,
311.5, 305.1, 295.6, 292.4, 290.4, 289.1, 299.4, 307.9,
316.6, 293.9, 291.2, 289.8, 287.1, 285.8, 303.3, 310.])
"""
Explanation: Now we generate some data to use while experimenting with plotting:
End of explanation
"""
# Create a figure and an axes
fig, ax = plt.subplots(figsize=(10, 6))
# Plot times as x-variable and temperatures as y-variable
ax.plot(times, temps)
"""
Explanation: Now we come to two quick lines to create a plot. Matplotlib has two core objects: the Figure and the Axes. The Axes is an individual plot with an x-axis, a y-axis, labels, etc; it has all of the various plotting methods we use. A Figure holds one or more Axes on which we draw.
Below the first line asks for a Figure 10 inches by 6 inches; matplotlib takes care of creating an Axes on it for us. After that, we call plot, with times as the data along the x-axis (the independent values) and temps as the data along the y-axis (the dependent values).
End of explanation
"""
# Add some labels to the plot
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
# Prompt the notebook to re-display the figure after we modify it
fig
"""
Explanation: From there, we can do things like ask the axis to add labels for x and y:
End of explanation
"""
ax.set_title('GFS Temperature Forecast', fontdict={'size':16})
fig
"""
Explanation: We can also add a title to the plot:
End of explanation
"""
# Set up more temperature data
temps_1000 = np.array([316.0, 316.3, 308.9, 304.0, 302.0, 300.8, 306.2, 309.8,
313.5, 313.3, 308.3, 304.9, 301.0, 299.2, 302.6, 309.0,
311.8, 304.7, 304.6, 301.8, 300.6, 299.9, 306.3, 311.3])
"""
Explanation: Of course, we can do so much more...
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 6))
# Plot two series of data
# The label argument is used when generating a legend.
ax.plot(times, temps, label='Temperature (surface)')
ax.plot(times, temps_1000, label='Temperature (1000 mb)')
# Add labels and title
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
ax.set_title('Temperature Forecast')
# Add gridlines
ax.grid(True)
# Add a legend to the upper left corner of the plot
ax.legend(loc='upper left')
"""
Explanation: Here we call plot more than once to plot multiple series of temperature on the same plot; when plotting we pass label to plot to facilitate automatic legend creation. The legend itself is added with the legend() call. We also add gridlines to the plot using the grid() call.
End of explanation
"""
fig, ax = plt.subplots(figsize=(10, 6))
# Specify how our lines should look
ax.plot(times, temps, color='tab:red', label='Temperature (surface)')
ax.plot(times, temps_1000, color='tab:red', linestyle='--',
label='Temperature (isobaric level)')
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
ax.set_title('Temperature Forecast')
ax.grid(True)
ax.legend(loc='upper left')
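# A quick sketch of the other style options discussed here: an HTML hex color
# code and a dotted linestyle on the same data
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(times, temps, color='#1f77b4', linestyle=':', label='Temperature (surface)')
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
ax.legend(loc='upper left')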
"""
Explanation: We're not restricted to the default look of the plots; we can override style attributes such as linestyle and color. The color argument accepts a wide range of values, such as named colors like red or blue, or HTML hex color codes. Here we use a red taken from the Tableau color set in matplotlib, by using tab:red for color.
End of explanation
"""
radhikapc/foundation-homework | homework11/Homework11-Radhika.ipynb | mit | import dateutils
import dateutil.parser
import pandas as pd
parking_df = pd.read_csv("small-violations.csv")
parking_df
parking_df.dtypes
import datetime
parking_df.head()['Issue Date'].astype(datetime.datetime)
import pandas as pd
parking_df = pd.read_csv("small-violations.csv")
parking_df
"""
Explanation: I want to make sure my Plate ID is a string. Can't lose the leading zeroes!
I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.
"Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
"Violation time" is... not a time. Make it a time.
There sure are a lot of colors of cars, too bad so many of them are the same. Make "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice.
Join the data with the Parking Violations Code dataset from the NYC Open Data site.
How much money did NYC make off of parking violations?
What's the most lucrative kind of parking violation? The most frequent?
New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles?
Make a chart of the top few.
What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am.
What's the average ticket cost in NYC?
Make a graph of the number of tickets per day.
Make a graph of the amount of revenue collected per day.
Manually construct a dataframe out of https://dmv.ny.gov/statistic/2015licinforce-web.pdf (only NYC boroughs - bronx, queens, manhattan, staten island, brooklyn), having columns for borough name, abbreviation, and number of licensed drivers.
What's the parking-ticket-$-per-licensed-driver in each borough of NYC? Do this with pandas and the dataframe you just made, not with your head!
End of explanation
"""
col_plateid = { 'Plate ID': 'str', }
violations_df = pd.read_csv("small-violations.csv", dtype=col_plateid)
violations_df.head(20)
print("The data type is",(type(violations_df['Plate ID'][0])))
"""
Explanation: 1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes!
End of explanation
"""
type(parking_df['Vehicle Year'][0])
# DISCOVERY - pass value as [0] rather than 0
col_types = { 'Vehicle Year': [0] }
test_df = pd.read_csv("violations.csv", na_values=col_types, nrows=10)
test_df.head(10)
violations_df['Vehicle Year'] = violations_df['Vehicle Year'].replace(0, float('nan'))  # the column is numeric, so replace the value 0 with a real NaN
violations_df.head(10)
"""
Explanation: 2. I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
End of explanation
"""
type(violations_df['Issue Date'][0])
violate_df = pd.read_csv("small-violations.csv", parse_dates=True, infer_datetime_format=True, keep_date_col=True, date_parser=True, dayfirst=True, nrows=10)
#violate_df['Vehicle Year'] = test1_df['Vehicle Year'].replace("0","NaN")
violate_df.head()
yourdate = dateutil.parser.parse(violate_df['Issue Date'][0])
yourdate
violate_df.head()['Issue Date'].astype(datetime.datetime)
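# A more direct route (a sketch): name the column in parse_dates so read_csv
# returns real datetime64 values, with no manual dateutil calls needed
parsed_df = pd.read_csv("small-violations.csv", parse_dates=['Issue Date'], nrows=10)
parsed_df['Issue Date'].head()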
"""
Explanation: 3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.
End of explanation
"""
violate_df.columns
# changing it to string because it later needs to be converted into Python time.
col_observ = { 'Date First Observed': 'str', }
test2_df = pd.read_csv("violations.csv", dtype=col_observ, nrows=10)
test2_df.head()
# defining conversion into python time
def to_date(num):
    if num == "0":
        # represent the missing observations as a real NaN rather than the string "NaN"
        return float('nan')
else:
yourdate = dateutil.parser.parse(num)
date_in_py = yourdate.strftime("%Y %B %d")
return date_in_py
to_date("20140324")
# confirming its string.
type(test2_df['Date First Observed'][0])
test2_df['Date First Observed'].apply(to_date)
#replacing Date First Observed with Date First Observed column as already there are so many columns.
test2_df['Date First Observed'] = test2_df['Date First Observed'].apply(to_date)
"""
Explanation: 4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
End of explanation
"""
violate_df['Violation Time'].head(5)
type(violate_df['Violation Time'][0])
# replace the trailing A/P with AM/PM so the string can be parsed as a time
def str_to_time(time_str):
s = time_str.replace("P"," PM").replace("A"," AM")
    x = s[:2] + ":" + s[2:]
return x
str_to_time("1239P")
test2_df['Violation Time'] = test2_df['Violation Time'].apply(str_to_time)
def vio_date(time_str):
parsed_date = dateutil.parser.parse(time_str)
date_vio = parsed_date.strftime("%H:%M %p")
return date_vio
#return parsed_date.hour
print(vio_date("12:32 PM"))
test2_df['Violation Time'].apply(vio_date)
#replacing Violation Time with Date Violation Time column as already there are so many columns.
test2_df['Violation Time'] = test2_df['Violation Time'].apply(vio_date)
test2_df['Violation Time']
"""
Explanation: 5. "Violation time" is... not a time. Make it a time
End of explanation
"""
#violate_df['Vehicle Color'].count_values()
violate_df.groupby('Vehicle Color').describe()
def to_color(color_str):
if color_str == "WH":
return str(color_str.replace("WH","White"))
if color_str == "WHT":
return str(color_str.replace("WHT","White"))
if color_str == "RD":
return str(color_str.replace("RD","Red"))
if color_str == "BLK":
return str(color_str.replace("BLK","BLACK"))
if color_str == "BK":
return str(color_str.replace("BK","BLACK"))
if color_str == "BR":
return str(color_str.replace("BR","Brown"))
if color_str == "BRW":
return str(color_str.replace("BRW","Brown"))
if color_str == "GN":
return str(color_str.replace("GN","Green"))
if color_str == "GRY":
return str(color_str.replace("GRY","Gray"))
if color_str == "GY":
return str(color_str.replace("GY","Gray"))
if color_str == "BL":
return str(color_str.replace("BL","Blue"))
if color_str == "SILVR":
return str(color_str.replace("SILVR","Silver"))
if color_str == "SILVE":
return str(color_str.replace("SILVE","Silver"))
    if color_str == "MAROO":
        return str(color_str.replace("MAROO","Maroon"))
    # fall back to the original value for anything not in the mapping,
    # otherwise .apply() would turn unmapped colors into None
    return color_str
to_color("WHT")
test2_df['Vehicle Color'].apply(to_color)
#replacing Vehicle Color with Vehicle Color column as already there are so many columns.
test2_df['Vehicle Color'] = test2_df['Vehicle Color'].apply(to_color)
test2_df['Vehicle Color'].head()
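# A more compact alternative (a sketch): keep the abbreviations in a dict and
# let replace() fall back to the original value for anything not listed
color_map = {"WH": "White", "WHT": "White", "RD": "Red", "BLK": "BLACK",
             "BK": "BLACK", "BR": "Brown", "BRW": "Brown", "GN": "Green",
             "GRY": "Gray", "GY": "Gray", "BL": "Blue", "SILVR": "Silver",
             "SILVE": "Silver", "MAROO": "Maroon"}
test2_df['Vehicle Color'] = test2_df['Vehicle Color'].replace(color_map)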
"""
Explanation: 6. There sure are a lot of colors of cars, too bad so many of them are the same. Make "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice.
End of explanation
"""
df_code = pd.read_csv("DOF_Parking_Violation_Codes.csv")
df_code.head(10)
violate_df['Violation Legal Code'].head()
test2_df.join(df_code, on='Violation Code', how='left')
"""
Explanation: 7. Join the data with the Parking Violations Code dataset from the NYC Open Data site.
End of explanation
"""
def money_to_int(money_str):
    return int(money_str.replace("$","").replace(",",""))
print(money_to_int("$115"))
# check which values in the column cannot be converted directly
for ammount in df_code["All Other Areas"]:
    try:
        money_to_int(ammount)
    except:
        print(ammount)
        print(type(ammount))
import re
ammount_list = []
other_area = df_code["All Other Areas"]
for ammount in other_area:
try:
x = money_to_int(ammount)
ammount_list.append(x)
#print(amount_list)
except:
print("made it to except")
if isinstance(ammount,str):
print("is a string!")
clean = re.findall(r"\d{3}", ammount)
z = [int(i) for i in clean]
#print(type(z[0]))
#print(clean)
if len(z) > 1:
print("z is greater than 1")
avg = int(sum(z) / len(z))
print(type(avg))
#print(avg)
ammount_list.append(avg)
elif len(z) == 1:
print("only one item in list!")
print("Let's append", str(z[0]))
ammount_list.append(z[0])
#print(amount_list)
else:
ammount_list.append(None)
else:
ammount_list.append(None)
len(ammount_list)
df_code['new_areas'] = ammount_list
df_code
#df_code['new_areas'].sum()
#since I am unable to read the entire data set using the subset to calculate the sum.
test3_df = pd.read_csv("small-violations.csv", dtype=col_observ)
test3_df
test3_df.join(df_code, on='Violation Code', how='left')
# joining with the violation dataset
new_data = test3_df.join(df_code, on='Violation Code', how='left')
new_data['new_areas'].sum()
"""
Explanation: 8. How much money did NYC make off of parking violations?
End of explanation
"""
new_data.columns
new_data['Violation Code'].value_counts()
new_data[new_data['Violation Code'] == 21]
most_frequent = new_data[new_data['Violation Code'] == 21].head(1)
print("The most frequent violation is", most_frequent['DEFINITION'])
columns_to_show = ['Violation Code','new_areas']
new_data[columns_to_show]
lucrative_df = new_data[columns_to_show]
freq_df = new_data
#df.sort_values('length', ascending=False).head(3)
lucrative_df.groupby('Violation Code')['new_areas'].sum().sort_values(ascending=False)
new_data[new_data['Violation Code'] == 14].head(1)
most_lucrative = new_data[new_data['Violation Code'] == 14].head(1)
print("The most lucrative is Violation Code 14 which corresponds to", most_lucrative['DEFINITION'])
"""
Explanation: 9. What's the most lucrative kind of parking violation? The most frequent?
End of explanation
"""
columns_to_show = ['Registration State','new_areas']
new_data[columns_to_show]
df_reg = new_data[columns_to_show]
df_reg[df_reg['Registration State'] != "NY"]
df_nonNY = df_reg[df_reg['Registration State'] != "NY"]
print("The total money that NYC make off of all non-New York vehicles is", df_nonNY['new_areas'].sum())
"""
Explanation: 10. New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles?
End of explanation
"""
df_nonNY.groupby('Registration State')['new_areas'].sum().sort_values(ascending=False).head(10)
import matplotlib.pyplot as plt
%matplotlib inline
df_nonNY.groupby('Registration State')['new_areas'].sum().sort_values(ascending=False).head(10).plot(kind='bar',x='Registration State', color='green')
"""
Explanation: 11. Make a chart of the top few.
End of explanation
"""
new_data.columns
test2_df['Violation Time'].head()
#new_data['Violation Time'] = test2_df['Violation Time']
violate_df['Violation Time']
type(new_data['Violation Time'][0])
v_time = new_data['Violation Time']
v_time.head(10)
def vio_date(time_str):
parsed_date = dateutil.parser.parse(time_str)
date_vio = parsed_date.strftime("%H:%M %p")
return date_vio
#return parsed_date.hour
print(vio_date("12:32 PM"))
# 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am
# Bucket each raw time string (e.g. "0752A", "1239P") by its hour;
# badly formed values are simply skipped.
count1 = []
count2 = []
count3 = []
count4 = []
for item in v_time:
    try:
        s = str(item).replace("P", " PM").replace("A", " AM")
        hour = dateutil.parser.parse(s[:2] + ":" + s[2:]).hour
    except (ValueError, TypeError):
        continue
    if hour < 6:
        count1.append(item)
    elif hour < 12:
        count2.append(item)
    elif hour < 18:
        count3.append(item)
    else:
        count4.append(item)
print("12am-6am:", len(count1))
print("6am-12pm:", len(count2))
print("12pm-6pm:", len(count3))
print("6pm-12am:", len(count4))
"""
Explanation: 12. What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm, 6pm-12am.
End of explanation
"""
#gives the Registration State wise ticket cost (new_areas)
df_reg
df_reg.describe()
"""
Explanation: 13. What's the average ticket cost in NYC?
End of explanation
"""
new_data.columns
# parsing to the datetime format was shown earlier with dateutil.parser.parse(violate_df['Issue Date'][0])
new_data['Issue Date'] = test3_df['Issue Date']
new_data['Issue Date'].value_counts().head(10)
new_data['Issue Date'].value_counts().head(10).plot(kind='bar',x='Issue Date', color='orange')
"""
Explanation: 14. Make a graph of the number of tickets per day.
End of explanation
"""
# since new data issue date is showing so many nan. i am going back to old data
new_data['Issue Date'] = test3_df['Issue Date']
new_data['Issue Date']
columns_to_show = ['Issue Date','new_areas']
new_data[columns_to_show].head()
new_data.groupby('Issue Date')['new_areas'].sum().sort_values(ascending=False).head(10).plot(kind='bar',x='Issue Date', color='green')
"""
Explanation: 15. Make a graph of the amount of revenue collected per day.
End of explanation
"""
df = pd.read_csv("borough.csv")
df
# bronx, queens, manhattan, staten island, brooklyn o
df[58: 63]
NYC = df[58: 63]
NYC['code'] = ["BX", "K", "NYC", "Q", "R"]
NYC
"""
Explanation: 16. Manually construct a dataframe out of https://dmv.ny.gov/statistic/2015licinforce-web.pdf (only NYC boroughts - bronx, queens, manhattan, staten island, brooklyn), having columns for borough name, abbreviation, and number of licensed drivers.
End of explanation
"""
columns_to_show = ['Violation County','new_areas']
new_data[columns_to_show]
columns_to_show = ['Violation County','new_areas']
new_data[columns_to_show]
county = new_data[columns_to_show]
county.groupby('Violation County')['new_areas'].sum().sort_values(ascending=False).head(10)
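# A possible final step (a sketch, not run against the real files): join the
# per-county ticket dollars onto the hand-built NYC dataframe and divide by the
# licensed-driver counts. 'code' is the column added above; 'drivers' stands in
# for whatever the licensed-driver column is actually called in borough.csv,
# and the county codes may need re-mapping (e.g. 'NY' vs 'NYC') to line up.
per_county = county.groupby('Violation County')['new_areas'].sum()
per_driver = NYC.set_index('code').join(per_county.rename('ticket_dollars'))
# per_driver['ticket_dollars'] / per_driver['drivers']
per_driver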
"""
Explanation: 17. What's the parking-ticket-$-per-licensed-driver in each borough of NYC? Do this with pandas and the dataframe you just made, not with your head!
End of explanation
"""
ziweiwu/ziweiwu.github.io | notebook/Titanic_Investigation.ipynb | mit | #load the libraries that I might need to use
%matplotlib inline
import pandas as pd
import numpy as np
import csv
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
#read the csv file into a pandas dataframe
titanic_original = pd.DataFrame.from_csv('titanic-data.csv', index_col=None)
titanic_original
"""
Explanation: The Titanic Project
For this project, I want to investigate the unfortunate tragedy of the sinking of the Titanic. The movie "Titanic", which I watched when I was still a child, left a strong impression on me. The event occurred in the early morning of 15 April 1912, when the ship collided with an iceberg, and out of 2,224 passengers, more than 1500 died.
The dataset I am working with contains demographic information, along with other details including ticket class, cabin number, and fare price, for 891 passengers. The main question I am curious about: what factors correlate with the survival outcome of passengers?
Load the dataset
First of all, I want to get an overview of the data and identify whether there is additional data cleaning/wrangling to be done before diving deeper. I start off by reading the CSV file into a Pandas Dataframe.
End of explanation
"""
#check if there is duplicated data by checking passenger ID.
len(titanic_original['PassengerId'].unique())
"""
Explanation: Data Dictionary
Variables Definitions
survival (Survival 0 = No, 1 = Yes)
pclass (Ticket class 1 = 1st, 2 = 2nd, 3 = 3rd)
sex (Sex)
Age (Age in years)
sibsp (# of siblings / spouses aboard the Titanic)
parch (# of parents / children aboard the Titanic)
ticket (Ticket number)
fare (Passenger fare price)
cabin (Cabin number)
embarked(Port of Embarkation C = Cherbourg, Q = Queenstown, S = Southampton)
Note:
pclass: A proxy for socio-economic status (SES)
-1st = Upper
-2nd = Middle
-3rd = Lower
age: Age is fractional if less than 1.
sibsp: number of siblings and spouse
-Sibling = brother, sister, stepbrother, stepsister
-Spouse = husband, wife (mistresses and fiancés were ignored)
parch: number of parents and children
-Parent = mother, father
-Child = daughter, son, stepdaughter, stepson
-Some children travelled only with a nanny, therefore parch=0 for them.
Data Cleaning
I want to check if there is duplicated data. By using unique(), I checked the passenger IDs to see whether there are any duplicated entries.
End of explanation
"""
#make a copy of dataset
titanic_cleaned=titanic_original.copy()
#remove ticket and cabin feature from dataset
titanic_cleaned=titanic_cleaned.drop(['Ticket','Cabin'], axis=1)
#Remove missing values.
titanic_cleaned=titanic_cleaned.dropna()
#Check to see if the cleaning is successful
titanic_cleaned.head()
"""
Explanation: Looks like there are no duplicated entries based on passenger ID. We have in total 891 passengers in the dataset. However, I have noticed there are a lot of missing values in the 'Cabin' feature, and the 'Ticket' feature does not provide useful information for my analysis. I decided to remove both from the dataset using the drop() function.
There are also some missing values in 'Age'; I can either remove them or replace them with the mean. Considering there is still a good sample size (>700 entries) after removal, I decide to drop the missing values with dropna().
End of explanation
"""
# Create Survival Label Column
titanic_cleaned['Survival'] = titanic_cleaned.Survived.map({0 : 'Died', 1 : 'Survived'})
titanic_cleaned.head()
# Create Class Label Column
titanic_cleaned['Class'] = titanic_cleaned.Pclass.map({1 : 'Upper Class', 2 : 'Middle Class', 3 : 'Lower Class'})
titanic_cleaned.head()
"""
Explanation: Take a look at Survived and Pclass columns. They are not very descriptive, so I decided to add two additional columns called Survival and Class with more descriptive values.
End of explanation
"""
#describe() provides a statistical overview of the dataset
titanic_cleaned.describe()
#calculate the median for each column
titanic_cleaned.median()
"""
Explanation: Data overview
Now with a clean dataset, I am ready to formulate my hypothesis. I want to get a general overview of statistics for the dataset first. I use the describe() function on the data set. The useful statistic to look at is the mean, which gives us a general idea what the average value is for each feature. The standard deviation provides information on the spread of the data. The min and max give me information regarding whether there are outliers in the dataset. We should be careful and take these outliers into account when analyzing our data. I also calculate the median for each column in case there are outliers.
End of explanation
"""
#I am using seaborn.countplot() to count and show the distribution of a single variable
sns.set(style="darkgrid")
ax = sns.countplot(x="Survival", data=titanic_cleaned)
plt.title("Distribution of Survival")
"""
Explanation: Looking at the means and medians, we see that the biggest difference is between the mean and median of fare price. The mean is 34.57 while the median is only 15.65. It is likely due to the presence of outliers, the wealthy individuals who could afford the best suites. For example, the highest fare price is well over 500 dollars. I also see that the lowest fare price is 0; I suspect that those are crew members.
Now let's study the distribution of variables of interest. The countplot() from seaborn library plots a barplot that shows the counts of the variables. Let's take a look at our dependent variable - "Survived"
End of explanation
"""
#plt.figure() allows me to specify the size of the graph.
#using fig.add_subplot allows me to display two subplots side by side
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
ax1=sns.countplot(x="Sex", data=titanic_cleaned)
plt.title("Distribution of Gender")
fig.add_subplot(122)
ax2 = sns.countplot(x="Class", data=titanic_cleaned)
plt.title('Distribution of Class')
"""
Explanation: We see that 342 passengers survived the disaster, or around 38% of the sample.
Now, we also want to look at the distribution of some of the other data, including gender, socioeconomic class, age, and fare price. Gender and socioeconomic class are categorical and age is discrete, so a bar plot is best suited to show their count distributions. Fare price is a continuous variable, and a frequency distribution plot is used to study it.
End of explanation
"""
#By using the hue argument, we can study another variable combined with our original variable
sns.countplot(x='Sex', hue='Class', data=titanic_cleaned)
plt.title('Gender and Socioeconomic class')
"""
Explanation: It is now a good idea to combine the two graphs to see how gender and socioeconomic class are intertwined. We see that among men, there is a much higher number of lower socioeconomic class individuals compared to women. For the middle and upper classes, the numbers of men and women are very similar. It is likely that families made up the majority of middle- and upper-class passengers, while the lower-class passengers were mostly single men.
End of explanation
"""
#Use fig to store plot dimension
#use add_subplot to display two plots side by side
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
sns.distplot(titanic_cleaned.Fare)
plt.title('Distribution of fare price')
plt.ylabel('Density')
#for this plot, kde must be explicitly turned off so the y-axis shows counts instead of frequency density
axe2=fig.add_subplot(122)
sns.distplot(titanic_cleaned.Age,bins=40,hist=True, kde=False)
plt.title('Distribution of age')
plt.ylabel('Count')
"""
Explanation: Fare price is a continuous variable, and for this type of variable, we use seaborn.distplot() to study its frequency distribution.
In comparison, age is a discrete variable; here it is plotted with seaborn.distplot() with the kde turned off, so the bars show raw counts.
We align the two plots horizontally using add_subplot to better demonstrate this difference.
End of explanation
"""
#multiple plots can be overlaid. Boxplot() and stripplot() turned out to be a good combination
sns.boxplot(x="Class", y="Fare", data=titanic_cleaned)
sns.stripplot(x="Class", y="Fare", data=titanic_cleaned, color=".25")
plt.title('Class and fare price')
"""
Explanation: We can see that the shape of two plots is quite different.
* The fare price distribution plot shows a positively skewed curve, as most of the prices are concentrated below 30 dollars, and highest prices are well over 500 dollars
* The age distribution plot demonstrates more of a bell-shaped curve (Gaussian distribution) with a slight mode for infants and young children. I suspect the slight spike for infants and young children is due to the presence of young families.
Observations on the dataset
342 passengers or roughly 38% of total survived.
There were significantly more men than women on board.
There are significantly higher numbers of lower class passengers compared to the mid and upper class.
The majority of fares sold are below 30 dollars, however, the upper price range of fare is very high, the most expensive ones are over 500 dollars, which should be considered outliers.
Hypothesis
Based on the overview of the data, I formulated 3 potential features that may have influenced the survival.
1. Fare price: What is the effect of fare price on survival rate? Are passengers who could afford more expensive tickets more likely to survive?
2. Gender: Does gender play a role in survival? Are women more likely to survive than men?
3. Age: What age groups of the passengers are more likely to survive?
Fare Price and survival
Let's investigate fare price a bit deeper. First I am interested in looking at its relationship with socioeconomic class. Considering the large range of fare price, we use boxplot to better demonstrate the spread and confidence intervals of the data. The strip plot is used to show the density of data points, and more importantly the outliers.
End of explanation
"""
#make a copy of the dataset and named it titanic_fare
#copy() is used instead of plain assignment to preserve the original dataset in case anything goes wrong
#add a new column stating whether the fare >35 (value=1) or <=35 dollars (value=0)
titanic_fare = titanic_cleaned.copy()
titanic_fare['Fare>35'] = np.where(titanic_cleaned['Fare']>35,'Yes','No')
#check to see if the column creation is succesful
titanic_fare.head()
#Calculate the survival rate for passenger who holds fare > $35.
#float() was used to force a decimal result because of Python 2's integer division
high_fare_survival=titanic_fare.loc[(titanic_fare['Survived'] == 1)&(titanic_fare['Fare>35']=='Yes')]
high_fare_holder=titanic_fare.loc[(titanic_fare['Fare>35']=='Yes')]
high_fare_survival_rate=len(high_fare_survival)/float(len(high_fare_holder))
print high_fare_survival_rate
#Calculate the survival rate for passenger who holds fare <= $35.
low_fare_survival=titanic_fare.loc[(titanic_fare['Survived'] == 1)&(titanic_fare['Fare>35']=='No')]
low_fare_holder=titanic_fare.loc[(titanic_fare['Fare>35']=='No')]
low_fare_survival_rate=len(low_fare_survival)/float(len(low_fare_holder))
print low_fare_survival_rate
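# The same two rates in one line, as a cross-check of the numbers computed above
titanic_fare.groupby('Fare>35')['Survived'].mean()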
#plot a barplot for survival rate for fare price > $35 and <= $35
fare_survival_table=pd.DataFrame({'Fare Price':pd.Categorical(['No','Yes']),
'Survival Rate':pd.Series([0.32,0.62], dtype='float64')
})
bar=fare_survival_table.plot(kind='bar', x='Fare Price', rot=0)
plt.ylabel('Survival Rate')
plt.xlabel('Fare>35')
plt.title('Fare price and survival rate')
"""
Explanation: It is not surprising that the outliers exist exclusively in the upper-class group, as only wealthy individuals could afford the higher fare prices.
It is clear that the upper class was able to afford more expensive fares, with the highest fares above 500 dollars.
To look at the survival rate, I break down the fare data into two groups:
1. Passengers with fare <=35 dollars
2. passengers with fare >35 dollars
End of explanation
"""
#seaborn.barplot() can directly calculate/display the survival rate and confidence interval from the dataset
sns.barplot(x='Fare>35',y='Survived',data=titanic_fare, palette="Blues_d")
plt.title('Fare price and survival rate')
plt.ylabel('Survival Rate')
"""
Explanation: The bar plot using matplotlib.pyplot does a reasonable job of showing the difference in survival rate between the two groups.
However, with seaborn.barplot(), confidence intervals are directly calculated and displayed. This is an advantage of the seaborn library.
End of explanation
"""
#use seaborn.lmplot to graph the logistic regression function
sns.lmplot(x="Fare", y="Survived", data=titanic_fare,
logistic=True, y_jitter=.03)
plt.title('Logistic regression using fare price as estimator for survival outcome')
plt.yticks([0, 1], ['Died', 'Survived'])
fare_bins = np.arange(0,500,10)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==0) & (titanic_cleaned['Fare']),'Fare'], bins=fare_bins)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==1) & (titanic_cleaned['Fare']),'Fare'], bins=fare_bins)
plt.title('fare distribution among survival classes')
plt.ylabel('frequency')
plt.legend(['did not survive', 'survived']);
"""
Explanation: As seen from the graph, and taking the confidence intervals into account, the higher fare group is associated with a significantly higher survival rate (~0.62) compared to the lower fare group (~0.31).
What if we just look at fare price as a continuous variable in relation to survival outcome?
When the Y variable is binary, like the survival outcome in this case, the suitable statistical analysis is logistic regression, where the X variable is used as an estimator for the binary outcome of the Y variable.
Fortunately, seaborn.lmplot() allows us to graph the logistic regression function using fare price as an estimator for survival. The function displays a sigmoid shape, and a higher fare price is indeed associated with a better chance of survival.
Note: the area around the line shows the confidence interval of the function.
End of explanation
"""
#Calculate the survival rate for female
female_survived=titanic_fare.loc[(titanic_cleaned['Survived'] == 1)&(titanic_cleaned['Sex']=='female')]
female_total=titanic_fare.loc[(titanic_cleaned['Sex']=='female')]
female_survival_rate=len(female_survived)/(len(female_total)*1.00)
print female_survival_rate
#Calculate the survival rate for male
male_survived=titanic_fare.loc[(titanic_cleaned['Survived'] == 1)&(titanic_cleaned['Sex']=='male')]
male_total=titanic_fare.loc[(titanic_cleaned['Sex']=='male')]
male_survival_rate=len(male_survived)/(len(male_total)*1.00)
print male_survival_rate
#plot a barplot for survival rate for female and male
#seaborn.barplot directly calculates and displays the survival rate with confidence intervals
sns.barplot(x='Sex',y='Survived',data=titanic_fare)
plt.title('Gender and survival rate')
plt.ylabel('Survival Rate')
##plot a barplot for survival rate for female and male, combine with fare price group
sns.barplot(x='Sex',y='Survived', hue='Fare>35',data=titanic_fare)
plt.title('Gender and survival rate')
plt.ylabel('Survival Rate')
#plot a barplot for survival rate for female and male, combine with socioeconomic class
sns.barplot(x='Sex',y='Survived', hue='Class',data=titanic_fare)
plt.title('Socioeconomic class and survival rate')
plt.ylabel('Survival Rate')
"""
Explanation: The fare distribution between survivors and non-survivors shows that there is a peak in mortality at low fare prices.
Gender and Survival
For this section, I am interested in investigating gender and survival rate. I will first calculate the survival rate for both females and males, then plot a few graphs to visualize the relationship between gender and survival, combined with other factors such as fare price and socioeconomic class.
End of explanation
"""
#create a age_group function
def age_group(age):
age_group=0
if age<10:
age_group=1
elif age <20:
age_group=2
elif age <30:
age_group=3
elif age <40:
age_group=4
elif age <50:
age_group=5
else:
age_group=6
return age_group
#create a series of age group number by applying the age_group function to age column
ageGroup_column = titanic_fare['Age'].apply(age_group)
#make a copy of titanic_fare and name it titanic_age
titanic_age=titanic_fare.copy()
#add age group column
titanic_age['Age Group'] = ageGroup_column
#check to see if age group column was added properly
titanic_age.head()
"""
Explanation: Therefore, being female is associated with a significantly higher survival rate compared to being male.
In addition, being in the higher socioeconomic group and the higher fare group is associated with a higher survival rate for both males and females.
The difference is that for males the survival rates are similar for classes 2 and 3, with class 1 being much higher, while for females the survival rates are similar for classes 1 and 2, with class 3 being much lower.
Age and Survival
To study the relationship between age and survival rate, I first separate age into 6 groups numbered from 1 to 6:
1. newborn to 10 years old
2. 10 to 20 years old
3. 20 to 30 years old
4. 30 to 40 years old
5. 40 to 50 years old
6. over 50 years old
Then, I added the age group number as a new column to the dataset.
End of explanation
"""
#Seaborn.barplot is used to plot a bargraph and confidence intervals for survival rate
sns.barplot(x='Age Group', y='Survived',data=titanic_age)
plt.title('Age group and survival rate')
plt.ylabel('Survival Rate')
"""
Explanation: Now, we want to plot a bar graph showing the relationship between age group and survival rate. Age group is used here instead of age because survival rates across discrete groups are visually easier to compare than across a continuous age variable.
End of explanation
"""
#draw bar graphs bringing in additional factors including gender and class
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
sns.barplot(x='Age Group', y='Survived', hue='Sex',data=titanic_age)
plt.title('Age group, gender and survival rate')
plt.ylabel('Survival Rate')
ax1 = fig.add_subplot(122)
sns.barplot(x='Age Group', y='Survived',hue='Class',data=titanic_age)
plt.title('Age group, class and survival rate')
plt.ylabel('Survival Rate')
"""
Explanation: Age Group 1: < 10
Age Group 2: >= 10 and < 20
Age Group 3: >= 20 and < 30
Age Group 4: >= 30 and < 40
Age Group 5: >= 40 and < 50
Age Group 6: >= 50
End of explanation
"""
#use seaborn.lmplot to graph the logistic regression function
sns.lmplot(x="Age", y="Survived", data=titanic_age,
logistic=True, y_jitter=.03)
plt.title('Logistic regression using age as the estimator for survival outcome')
plt.yticks([0, 1], ['Died', 'Survived'])
"""
Explanation: Age Group 1: < 10
Age Group 2: >= 10 and < 20
Age Group 3: >= 20 and < 30
Age Group 4: >= 30 and < 40
Age Group 5: >= 40 and < 50
Age Group 6: >= 50
The bar graphs demonstrate that only age group 1 (infants/young children) is associated with a significantly higher survival rate. There are no clear distinctions in survival rate between the rest of the age groups.
What about using age instead of age group? Is there a relationship between age and survival outcome? By using seaborn.lmplot(), we can perform a logistic regression on survival outcome using age as an estimator. Let's take a look.
End of explanation
"""
fare_bins = np.arange(0,100,2)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==0) & (titanic_cleaned['Age']),'Age'], bins=fare_bins)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==1) & (titanic_cleaned['Age']),'Age'], bins=fare_bins)
plt.title('age distribution among survival classes')
plt.ylabel('frequency')
plt.legend(['did not survive', 'survived']);
"""
Explanation: From the graph, we can see there is a negative relationship between age and survival outcome: the older a passenger is, the lower the estimated chance of survival.
End of explanation
"""
# using the apply function and lambda to count missing values for each column
print titanic_original.apply(lambda x: sum(x.isnull().values), axis = 0)
"""
Explanation: The age distribution comparison between survivors and non-survivors confirmed the survival spike in young children.
Limitations
There are limitations on our analysis:
1. Missing values: due to the large number of missing values (688) for the cabin, I decided to remove this column from my analysis. However, the 178 missing values for the age data posed a problem. In my analysis, I decided to drop the missing values because I felt we still had a reasonable sample size of >700, but selection bias definitely increased as the sample size decreased. Another option would be using the mean of the existing age data to fill in the missing values; this approach could be a good option if we had many missing values and still wanted to incorporate the age variable into our analysis. In that case, bias also increases because we are making assumptions about the passengers with missing ages.
2. Survival bias: the data was partially collected from survivors of the disaster, and a lot of data could be missing for people who did not survive. This makes the dataset more representative of the survivors. This limitation is difficult to overcome, as the data we have today is the best that could be gathered for a disaster that happened over 100 years ago.
3. Outliers: for the fare price analysis, we saw a large difference between the mean (34.57) and median (15.65) fare prices. The highest fares were well over 500 dollars. As a result, the distribution of fare prices is very positively skewed. This can affect the validity and accuracy of our analysis. However, because I really wanted to see the survival outcome for the wealthier individuals, I decided to incorporate those outliers into my analysis. An alternative approach is to drop the outliers (e.g. fare prices >500) from the analysis, especially if we are only interested in studying the majority of the sample.
End of explanation
"""
|
jgarciab/wwd2017 | class8/class8_impute.ipynb | gpl-3.0 | ##Some code to run at the beginning of the file, to be able to show images in the notebook
##Don't worry about this cell
#Print the plots in this screen
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
import seaborn as sns
import pylab as plt
import pandas as pd
import numpy as np
import scipy.stats
import statsmodels.formula.api as smf
"""
Explanation: Working with data 2017. Class 8
Contact
Javier Garcia-Bernardo
[email protected]
1. Clustering
2. Data imputation
3. Dimensionality reduction
End of explanation
"""
#Some libraries
from sklearn import preprocessing
from sklearn.cluster import DBSCAN, KMeans
#Read the data, dropna, get sample
df = pd.read_csv("data/big3_position.csv",sep="\t").dropna()
df["Revenue"] = np.log10(df["Revenue"])
df["Assets"] = np.log10(df["Assets"])
df["Employees"] = np.log10(df["Employees"])
df["MarketCap"] = np.log10(df["MarketCap"])
df = df.replace([np.inf,-np.inf],np.nan).dropna().sample(300)
df.head(2)
#Scale variables to give all of them the same weight
X = df.loc[:,["Revenue","Assets","Employees","MarketCap"]]
X = preprocessing.scale(X)
print(X.sum(0))
print(X.std(0))
X
"""
Explanation: 1. Clustering
End of explanation
"""
#Get labels of each row and add a new column with the labels
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
labels = kmeans.labels_
df["kmeans_labels"] = labels
sns.lmplot(x="MarketCap",y="Assets",hue="kmeans_labels",fit_reg=False,data=df)
"""
Explanation: 1a. Clustering with K-means
k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.
Other methods: http://scikit-learn.org/stable/modules/clustering.html
End of explanation
"""
#Get labels of each row and add a new column with the labels
db = DBSCAN(eps=1, min_samples=10).fit(X)
labels = db.labels_
df["dbscan_labels"] = labels
sns.lmplot(x="MarketCap",y="Assets",hue="dbscan_labels",fit_reg=False,data=df)
Image(url="http://scikit-learn.org/stable/_images/sphx_glr_plot_cluster_comparison_0011.png")
"""
Explanation: 1b. Clustering with DBSCAN
The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means, which assumes that clusters are convex shaped.
End of explanation
"""
import scipy
import pylab
import scipy.cluster.hierarchy as sch
# Generate a distance matrix based on the differences between columns (features)
D = np.zeros([4,4])
for i in range(4):
for j in range(4):
D[i,j] = np.sum(np.abs(X[:,i]-X[:,j])) #Euclidean distance or mutual information are also common
print(D)
#Create the linkage and plot
Y = sch.linkage(D, method='centroid') #many methods, single, complete...
Z1 = sch.dendrogram(Y, orientation='right',labels=["Revenue","Assets","Employees","MarketCap"])
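# A hedged addition (not in the original class notes): scipy can also cut the
# dendrogram into flat clusters; here we ask for 2 clusters of the 4 features.
flat_clusters = sch.fcluster(Y, t=2, criterion='maxclust')
print(flat_clusters)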
"""
Explanation: 1c. Hierarchical clustering
It keeps aggregating the closest points and clusters, step by step, until everything is merged into a single hierarchy.
End of explanation
"""
#Required libraries
!conda install tensorflow -y
!pip install fancyimpute
!pip install pydot_ng
import sklearn.preprocessing
import sklearn
#Read the data again, but this time do not drop the missing values
df = pd.read_csv("data/big3_position.csv",sep="\t")
df["Revenue"] = np.log10(df["Revenue"])
df["Assets"] = np.log10(df["Assets"])
df["Employees"] = np.log10(df["Employees"])
df["MarketCap"] = np.log10(df["MarketCap"])
le = sklearn.preprocessing.LabelEncoder()
labels = le.fit_transform(df["TypeEnt"])
df["TypeEnt_int"] = labels
print(le.classes_)
df = df.replace([np.inf,-np.inf],np.nan).sample(300)
df.head(2)
X = df.loc[:,["Revenue","Assets","Employees","MarketCap","TypeEnt_int"]].values
X
df.describe()
from fancyimpute import KNN
# X is the data matrix, with missing values encoded as NaN
# Use 10 nearest rows which have a feature to fill in each row's missing features
X_filled_knn = KNN(k=10).complete(X)
df.loc[:,cols] = X_filled_knn
df.describe()
"""
Explanation: 2. Imputation of missing data (fancy)
End of explanation
"""
|
transcranial/keras-js | notebooks/layers/pooling/GlobalAveragePooling3D.ipynb | mit | data_in_shape = (6, 6, 3, 4)
L = GlobalAveragePooling3D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: GlobalAveragePooling3D
[pooling.GlobalAveragePooling3D.0] input 6x6x3x4, data_format='channels_last'
End of explanation
"""
data_in_shape = (3, 6, 6, 3)
L = GlobalAveragePooling3D(data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.GlobalAveragePooling3D.1] input 3x6x6x3, data_format='channels_first'
End of explanation
"""
data_in_shape = (5, 3, 2, 1)
L = GlobalAveragePooling3D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.GlobalAveragePooling3D.2] input 5x3x2x1, data_format='channels_last'
End of explanation
"""
import os
filename = '../../../test/data/layers/pooling/GlobalAveragePooling3D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
"""
Explanation: export for Keras.js tests
End of explanation
"""
|
CAChemE/stochastic-optimization | PSO/1D/1D-Python-PSO-algorithm-viz.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
# import scipy as sp
# import time
%matplotlib inline
plt.style.use('bmh')
"""
Explanation: Particle Swarm Optimization Algorithm (in Python!)
[SPOILER] We will be using the Particle Swarm Optimization algorithm to obtain the minimum of a custom objective function
First of all, let's import the libraries we'll need (remember we are using Python 3)
End of explanation
"""
x_lo = 0
x_up = 14
n_points = 1000
x = np.linspace(x_lo, x_up, n_points)
def f(x):
return x*np.sin(x) + x*np.cos(x)
y = f(x)
plt.plot(x,y)
plt.ylabel('$f(x) = x\sin(x)+x\cos(x)$')
plt.xlabel('$x$')
plt.title('Function to be optimized')
"""
Explanation: We can define and plot the function we want to optimize:
End of explanation
"""
n_iterations = 50
def run_PSO(n_particles=5, omega=0.5, phi_p=0.5, phi_g=0.7):
""" PSO algorithm to a funcion already defined.
Params:
omega = 0.5 # Particle weight (intertial)
phi_p = 0.1 # particle best weight
phi_g = 0.1 # global global weight
"""
global x_best_p_global, x_particles, y_particles, u_particles, v_particles
# Note: we are using global variables to ease the use of interactive widgets
# This code will work fine without the global (and actually it will be safer)
## Initialization:
x_particles = np.zeros((n_particles, n_iterations))
x_particles[:, 0] = np.random.uniform(x_lo, x_up, size=n_particles)
x_best_particles = np.copy(x_particles[:, 0])
y_particles = f(x_particles[:, 0])
y_best_global = np.min(y_particles[:])
index_best_global = np.argmin(y_particles[:])
x_best_p_global = np.copy(x_particles[index_best_global,0])
# velocity units are [Length/iteration]
velocity_lo = x_lo-x_up
velocity_up = x_up-x_lo
u_particles = np.zeros((n_particles, n_iterations))
u_particles[:, 0] = 0.1*np.random.uniform(velocity_lo, velocity_up, size=n_particles)
v_particles = np.zeros((n_particles, n_iterations)) # Needed for plotting the velocity as vectors
# PSO STARTS
iteration = 1
while iteration <= n_iterations-1:
for i in range(n_particles):
x_p = x_particles[i, iteration-1]
u_p = u_particles[i, iteration-1]
x_best_p = x_best_particles[i]
r_p = np.random.uniform(0, 1)
r_g = np.random.uniform(0, 1)
u_p_new = omega*u_p+ \
phi_p*r_p*(x_best_p-x_p) + \
phi_g*r_g*(x_best_p_global-x_p)
x_p_new = x_p + u_p_new
if not x_lo <= x_p_new <= x_up:
x_p_new = x_p # ignore new position, it's out of the domain
u_p_new = 0
x_particles[i, iteration] = np.copy(x_p_new)
u_particles[i, iteration] = np.copy(u_p_new)
y_p_best = f(x_best_p)
y_p_new = f(x_p_new)
if y_p_new < y_p_best:
x_best_particles[i] = np.copy(x_p_new)
y_p_best_global = f(x_best_p_global)
if y_p_new < y_p_best_global:
x_best_p_global = x_p_new
iteration = iteration + 1
# Plotting convergence
y_particles = f(x_particles)
y_particles_best_hist = np.min(y_particles, axis=0)
y_particles_worst_hist = np.max(y_particles, axis=0)
y_best_global = np.min(y_particles[:])
index_best_global = np.argmin(y_particles[:])
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10, 2))
# Limits of the function being plotted
ax1.plot((0,n_iterations),(np.min(y),np.min(y)), '--g', label="min$f(x)$")
ax1.plot((0,n_iterations),(np.max(y),np.max(y)),'--r', label="max$f(x)$")
# Convergence of the best particle and worst particle value
ax1.plot(np.arange(n_iterations),y_particles_best_hist,'b', label="$p_{best}$")
ax1.plot(np.arange(n_iterations),y_particles_worst_hist,'k', label="$p_{worst}$")
ax1.set_xlim((0,n_iterations))
ax1.set_ylabel('$f(x)$')
ax1.set_xlabel('$i$ (iteration)')
ax1.set_title('Convergence')
ax1.legend()
return
run_PSO()
"""
Explanation: PSO Algorithm
End of explanation
"""
from __future__ import print_function
import ipywidgets as widgets
from IPython.display import display, HTML
def plotPSO(i=0): #iteration
"""Visualization of particles and obj. function"""
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10, 3))
ax1.plot(x,y)
ax1.set_xlim((x_lo,x_up))
ax1.set_ylabel('$f(x)$')
ax1.set_xlabel('$x$')
ax1.set_title('Function to be optimized')
#from IPython.core.debugger import Tracer
#Tracer()() #this one triggers the debugger
y_particles = f(x_particles)
ax1.plot(x_particles[:,i],y_particles[:,i], "ro")
ax1.quiver(x_particles[:,i],y_particles[:,i],u_particles[:,i],v_particles[:,i],
angles='xy', scale_units='xy', scale=1)
n_particles, iterations = x_particles.shape
tag_particles = range(n_particles)
for j, txt in enumerate(tag_particles):
ax1.annotate(txt, (x_particles[j,i],y_particles[j,i]))
w_arg_PSO = widgets.interact_manual(run_PSO,
n_particles=(2,50),
omega=(0,1,0.001),
phi_p=(0,1,0.001),
phi_g=(0,1,0.001))
w_viz_PSO = widgets.interact(plotPSO, i=(0,n_iterations-1))
"""
Explanation: Animation
End of explanation
"""
# More examples in https://github.com/ipython/ipywidgets/tree/master/docs/source/examples
"""
Explanation: Note:
<div class=\"alert alert-success\">
As of ipywidgets 5.0, only static images of the widgets in this notebook will show on http://nbviewer.ipython.org. To view the live widgets and interact with them, you will need to download this notebook and run it with a Jupyter Notebook server.
</div>
End of explanation
"""
|
goerlitz/ds-notebooks | jupyter/kaggle_sf-crime/SF Crime - Convert To DataFrame.ipynb | apache-2.0 | import csv
import pyspark
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from StringIO import StringIO
from datetime import *
from dateutil.parser import parse
"""
Explanation: San Francisco Crime Dataset Conversion
Challenge
Spark does not support out-of-the-box data frame creation from CSV files.
The CSV reader from Databricks provides such functionality but requires an extra library.
python
df = sqlContext.read \
.format('com.databricks.spark.csv') \
.options(header='true', inferschema='true') \
.load('train.csv')
Solution
Read the CSV file and create the data frame manually.
End of explanation
"""
sc = pyspark.SparkContext('local[*]')
sqlContext = SQLContext(sc)
textRDD = sc.textFile("../../data/sf-crime/train.csv.bz2")
textRDD.count()
"""
Explanation: Initialize contexts and input file:
End of explanation
"""
header = textRDD.first()
textRDD = textRDD.filter(lambda line: not line == header)
"""
Explanation: Remove header row from input file:
End of explanation
"""
fields = [StructField(field_name, StringType(), True) for field_name in header.split(',')]
fields[0].dataType = TimestampType()
fields[7].dataType = FloatType()
fields[8].dataType = FloatType()
schema = StructType(fields)
"""
Explanation: Define data schema:
End of explanation
"""
# parse each csv line (fields may contain ',' enclosed in quotes) and split into tuples
tupleRDD = textRDD \
.map(lambda line: next(csv.reader(StringIO(line)))) \
.map(lambda x: (parse(x[0]), x[1], x[2], x[3], x[4], x[5], x[6], float(x[7]), float(x[8])))
df = sqlContext.createDataFrame(tupleRDD, schema)
"""
Explanation: Parse CSV lines and transform values into tuples:
End of explanation
"""
df.write.save("../../data/sf-crime/train.parquet")
"""
Explanation: Write DataFrame as parquet file:
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/hub/tutorials/cropnet_cassava.ipynb | apache-2.0 | import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
#@title Helper function for displaying examples
def plot(examples, predictions=None):
# Get the images, labels, and optionally predictions
images = examples['image']
labels = examples['label']
batch_size = len(images)
if predictions is None:
predictions = batch_size * [None]
# Configure the layout of the grid
x = np.ceil(np.sqrt(batch_size))
y = np.ceil(batch_size / x)
fig = plt.figure(figsize=(x * 6, y * 7))
for i, (image, label, prediction) in enumerate(zip(images, labels, predictions)):
# Render the image
ax = fig.add_subplot(x, y, i+1)
ax.imshow(image, aspect='auto')
ax.grid(False)
ax.set_xticks([])
ax.set_yticks([])
# Display the label and optionally prediction
x_label = 'Label: ' + name_map[class_names[label]]
if prediction is not None:
x_label = 'Prediction: ' + name_map[class_names[prediction]] + '\n' + x_label
ax.xaxis.label.set_color('green' if label == prediction else 'red')
ax.set_xlabel(x_label)
plt.show()
"""
Explanation: CropNet: Cassava Disease Detection
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/cropnet_cassava"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cropnet_cassava.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cropnet_cassava.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">查看上GitHub</a> </td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/cropnet_cassava.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
<td><a href="https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a></td>
</table>
This notebook shows how to use the CropNet cassava disease classifier model from TensorFlow Hub. The model classifies images of cassava leaves into one of 6 classes: bacterial blight, brown streak disease, green mite, mosaic disease, healthy, or unknown.
This Colab demonstrates how to:
Load the https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2 model from TensorFlow Hub
Load the cassava dataset from TensorFlow Datasets (TFDS)
Classify images of cassava leaves into 4 distinct cassava disease categories, or as healthy or unknown.
Evaluate the accuracy of the classifier and inspect how robust the model is when applied to out-of-domain images.
Imports and setup
End of explanation
"""
dataset, info = tfds.load('cassava', with_info=True)
"""
Explanation: Dataset
Let's load the cassava dataset from TFDS
End of explanation
"""
info
"""
Explanation: Let's take a look at the dataset info to learn more about it, like the description and citation, and information about how many examples are available
End of explanation
"""
# Extend the cassava dataset classes with 'unknown'
class_names = info.features['label'].names + ['unknown']
# Map the class names to human readable names
name_map = dict(
cmd='Mosaic Disease',
cbb='Bacterial Blight',
cgm='Green Mite',
cbsd='Brown Streak Disease',
healthy='Healthy',
unknown='Unknown')
print(len(class_names), 'classes:')
print(class_names)
print([name_map[name] for name in class_names])
"""
Explanation: The cassava dataset has images of cassava leaves with 4 distinct diseases, as well as healthy cassava leaves. The model can predict all of these five classes, and when it is not confident about its prediction it assigns the image to a sixth class, 'unknown'.
End of explanation
"""
def preprocess_fn(data):
image = data['image']
# Normalize [0, 255] to [0, 1]
image = tf.cast(image, tf.float32)
image = image / 255.
# Resize the images to 224 x 224
image = tf.image.resize(image, (224, 224))
data['image'] = image
return data
"""
Explanation: Before feeding the data to the model, we need to do a bit of preprocessing. The model expects 224 x 224 images with RGB channel values in [0, 1]. Let's normalize and resize the images.
End of explanation
"""
batch = dataset['validation'].map(preprocess_fn).batch(25).as_numpy_iterator()
examples = next(batch)
plot(examples)
"""
Explanation: Let's take a look at some examples from the dataset
End of explanation
"""
classifier = hub.KerasLayer('https://tfhub.dev/google/cropnet/classifier/cassava_disease_V1/2')
probabilities = classifier(examples['image'])
predictions = tf.argmax(probabilities, axis=-1)
plot(examples, predictions)
"""
Explanation: Model
Let's load the classifier from TF Hub, get some predictions, and see what the model predicts for a few examples
End of explanation
"""
#@title Parameters {run: "auto"}
DATASET = 'cassava' #@param {type:"string"} ['cassava', 'beans', 'i_naturalist2017']
DATASET_SPLIT = 'test' #@param {type:"string"} ['train', 'test', 'validation']
BATCH_SIZE = 32 #@param {type:"integer"}
MAX_EXAMPLES = 1000 #@param {type:"integer"}
def label_to_unknown_fn(data):
data['label'] = 5 # Override label to unknown.
return data
# Preprocess the examples and map the image label to unknown for non-cassava datasets.
ds = tfds.load(DATASET, split=DATASET_SPLIT).map(preprocess_fn).take(MAX_EXAMPLES)
dataset_description = DATASET
if DATASET != 'cassava':
ds = ds.map(label_to_unknown_fn)
dataset_description += ' (labels mapped to unknown)'
ds = ds.batch(BATCH_SIZE)
# Calculate the accuracy of the model
metric = tf.keras.metrics.Accuracy()
for examples in ds:
probabilities = classifier(examples['image'])
predictions = tf.math.argmax(probabilities, axis=-1)
labels = examples['label']
metric.update_state(labels, predictions)
print('Accuracy on %s: %.2f' % (dataset_description, metric.result().numpy()))
"""
Explanation: Evaluation and robustness
Let's measure the accuracy of the classifier on a split of the dataset. We can also evaluate the robustness of the model by checking its performance on a non-cassava dataset. For images from other plant datasets like iNaturalist or beans, the model should almost always return unknown.
End of explanation
"""
|
dolittle007/dolittle007.github.io | notebooks/GLM-robust-with-outlier-detection.ipynb | gpl-3.0 | %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import optimize
import pymc3 as pm
import theano as thno
import theano.tensor as T
# configure some basic options
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.notebook_repr_html', True)
plt.rcParams['figure.figsize'] = 12, 8
np.random.seed(0)
"""
Explanation: GLM: Robust Regression with Outlier Detection
A minimal reproducible example of robust regression with outlier detection using the Hogg 2010 Signal vs Noise method.
This is a complementary approach to the Student-T robust regression as illustrated in Thomas Wiecki's notebook in the PyMC3 documentation; that approach is also compared here.
This model returns a robust estimate of linear coefficients and an indication of which datapoints (if any) are outliers.
The likelihood evaluation is essentially a copy of eqn 17 in "Data analysis recipes: Fitting a model to data" - Hogg 2010.
The model is adapted specifically from Jake Vanderplas' implementation (3rd model tested).
The dataset is tiny and hardcoded into this Notebook. It contains errors in both the x and y, but we will deal here with only errors in y.
Note:
Python 3.4 project using latest available PyMC3
Developed using ContinuumIO Anaconda distribution on a Macbook Pro 3GHz i7, 16GB RAM, OSX 10.10.5.
During development I've found that 3 data points are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is slightly unstable between runs: the posterior surface appears to have a small number of solutions with similar probability.
Finally, if runs become unstable or Theano throws weird errors, try clearing the cache $> theano-cache clear and rerunning the notebook.
Package Requirements (shown as a conda-env YAML):
```
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.4
- ipython
- ipython-notebook
- ipython-qtconsole
- numpy
- scipy
- matplotlib
- pandas
- seaborn
- patsy
- pip
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
```
Setup
End of explanation
"""
#### cut & pasted directly from the fetch_hogg2010test() function
## identical to the original dataset as hardcoded in the Hogg 2010 paper
dfhogg = pd.DataFrame(np.array([[1, 201, 592, 61, 9, -0.84],
[2, 244, 401, 25, 4, 0.31],
[3, 47, 583, 38, 11, 0.64],
[4, 287, 402, 15, 7, -0.27],
[5, 203, 495, 21, 5, -0.33],
[6, 58, 173, 15, 9, 0.67],
[7, 210, 479, 27, 4, -0.02],
[8, 202, 504, 14, 4, -0.05],
[9, 198, 510, 30, 11, -0.84],
[10, 158, 416, 16, 7, -0.69],
[11, 165, 393, 14, 5, 0.30],
[12, 201, 442, 25, 5, -0.46],
[13, 157, 317, 52, 5, -0.03],
[14, 131, 311, 16, 6, 0.50],
[15, 166, 400, 34, 6, 0.73],
[16, 160, 337, 31, 5, -0.52],
[17, 186, 423, 42, 9, 0.90],
[18, 125, 334, 26, 8, 0.40],
[19, 218, 533, 16, 6, -0.78],
[20, 146, 344, 22, 5, -0.56]]),
columns=['id','x','y','sigma_y','sigma_x','rho_xy'])
## for convenience zero-base the 'id' and use as index
dfhogg['id'] = dfhogg['id'] - 1
dfhogg.set_index('id', inplace=True)
## standardize (mean center and divide by 1 sd)
dfhoggs = (dfhogg[['x','y']] - dfhogg[['x','y']].mean(0)) / dfhogg[['x','y']].std(0)
dfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)
dfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)
## create xlims ylims for plotting
xlims = (dfhoggs['x'].min() - np.ptp(dfhoggs['x'])/5
,dfhoggs['x'].max() + np.ptp(dfhoggs['x'])/5)
ylims = (dfhoggs['y'].min() - np.ptp(dfhoggs['y'])/5
,dfhoggs['y'].max() + np.ptp(dfhoggs['y'])/5)
## scatterplot the standardized data
g = sns.FacetGrid(dfhoggs, size=8)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
plt.subplots_adjust(top=0.92)
_ = g.fig.suptitle('Scatterplot of Hogg 2010 dataset after standardization', fontsize=16)
"""
Explanation: Load and Prepare Data
We'll use the Hogg 2010 data available at https://github.com/astroML/astroML/blob/master/astroML/datasets/hogg2010test.py
It's a very small dataset so for convenience, it's hardcoded below
End of explanation
"""
with pm.Model() as mdl_ols:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## Define Normal likelihood
likelihood = pm.Normal('likelihood', mu=yest, sd=sigma_y, observed=dfhoggs['y'])
"""
Explanation: Observe:
Even judging just by eye, you can see these datapoints mostly fall on / around a straight line with positive gradient
It looks like a few of the datapoints may be outliers from such a line
Create Conventional OLS Model
The linear model is really simple and conventional:
$$\bf{y} = \beta^{T} \bf{X} + \bf{\sigma}$$
where:
$\beta$ = coefs = ${1, \beta_{j \in X_{j}}}$
$\sigma$ = the measured error in $y$ in the dataset sigma_y
Define model
NOTE:
+ We're using a simple linear OLS model with Normally distributed priors so that it behaves like a ridge regression
End of explanation
"""
with mdl_ols:
## take samples
traces_ols = pm.sample(2000, tune=1000)
"""
Explanation: Sample
End of explanation
"""
_ = pm.traceplot(traces_ols[-1000:], figsize=(12,len(traces_ols.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_ols[-1000:]).iterrows()})
"""
Explanation: View Traces
NOTE: I'll 'burn' the traces to only retain the final 1000 samples
End of explanation
"""
with pm.Model() as mdl_studentt:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## define prior for Student T degrees of freedom
nu = pm.Uniform('nu', lower=1, upper=100)
## Define Student T likelihood
likelihood = pm.StudentT('likelihood', mu=yest, sd=sigma_y, nu=nu,
observed=dfhoggs['y'])
"""
Explanation: NOTE: We'll illustrate this OLS fit and compare to the datapoints in the final plot
Create Robust Model: Student-T Method
I've added this brief section in order to directly compare the Student-T based method exampled in Thomas Wiecki's notebook in the PyMC3 documentation
Instead of using a Normal distribution for the likelihood, we use a Student-T, which has fatter tails. In theory this allows outliers to have a smaller mean square error in the likelihood, and thus have less influence on the regression estimation. This method does not produce inlier / outlier flags but is simpler and faster to run than the Signal Vs Noise model below, so a comparison seems worthwhile.
Note: we'll constrain the Student-T 'degrees of freedom' parameter nu to be an integer, but otherwise leave it as just another stochastic to be inferred: no need for prior knowledge.
Define Model
End of explanation
"""
with mdl_studentt:
## take samples
traces_studentt = pm.sample(2000, tune=1000)
"""
Explanation: Sample
End of explanation
"""
_ = pm.traceplot(traces_studentt[-1000:],
figsize=(12,len(traces_studentt.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_studentt[-1000:]).iterrows()})
"""
Explanation: View Traces
End of explanation
"""
def logp_signoise(yobs, is_outlier, yest_in, sigma_y_in, yest_out, sigma_y_out):
'''
Define custom loglikelihood for inliers vs outliers.
NOTE: in this particular case we don't need to use theano's @as_op
decorator because (as stated by Twiecki in conversation) that's only
required if the likelihood cannot be expressed as a theano expression.
We also now get the gradient computation for free.
'''
# likelihood for inliers
pdfs_in = T.exp(-(yobs - yest_in + 1e-4)**2 / (2 * sigma_y_in**2))
pdfs_in /= T.sqrt(2 * np.pi * sigma_y_in**2)
logL_in = T.sum(T.log(pdfs_in) * (1 - is_outlier))
# likelihood for outliers
pdfs_out = T.exp(-(yobs - yest_out + 1e-4)**2 / (2 * (sigma_y_in**2 + sigma_y_out**2)))
pdfs_out /= T.sqrt(2 * np.pi * (sigma_y_in**2 + sigma_y_out**2))
logL_out = T.sum(T.log(pdfs_out) * is_outlier)
return logL_in + logL_out
with pm.Model() as mdl_signoise:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=10, testval=pm.floatX(0.1))
b1 = pm.Normal('b1_slope', mu=0, sd=10, testval=pm.floatX(1.))
## Define linear model
yest_in = b0 + b1 * dfhoggs['x']
## Define weakly informative priors for the mean and variance of outliers
yest_out = pm.Normal('yest_out', mu=0, sd=100, testval=pm.floatX(1.))
sigma_y_out = pm.HalfNormal('sigma_y_out', sd=100, testval=pm.floatX(1.))
## Define Bernoulli inlier / outlier flags according to a hyperprior
## fraction of outliers, itself constrained to [0,.5] for symmetry
frac_outliers = pm.Uniform('frac_outliers', lower=0., upper=.5)
is_outlier = pm.Bernoulli('is_outlier', p=frac_outliers, shape=dfhoggs.shape[0],
testval=np.random.rand(dfhoggs.shape[0]) < 0.2)
## Extract observed y and sigma_y from dataset, encode as theano objects
yobs = thno.shared(np.asarray(dfhoggs['y'], dtype=thno.config.floatX), name='yobs')
sigma_y_in = thno.shared(np.asarray(dfhoggs['sigma_y'], dtype=thno.config.floatX),
name='sigma_y_in')
## Use custom likelihood using DensityDist
likelihood = pm.DensityDist('likelihood', logp_signoise,
observed={'yobs': yobs, 'is_outlier': is_outlier,
'yest_in': yest_in, 'sigma_y_in': sigma_y_in,
'yest_out': yest_out, 'sigma_y_out': sigma_y_out})
"""
Explanation: Observe:
Both parameters b0 and b1 show quite a skew to the right; this is possibly the effect of a few samples regressing closer to the OLS estimate, which is towards the left
The nu parameter seems very happy to stick at nu = 1, indicating that a fat-tailed Student-T likelihood has a better fit than a thin-tailed (Normal-like) Student-T likelihood.
The inference sampling also ran very quickly, almost as quickly as the conventional OLS
NOTE: We'll illustrate this Student-T fit and compare to the datapoints in the final plot
Create Robust Model with Outliers: Hogg Method
Please read the paper (Hogg 2010) and Jake Vanderplas' code for more complete information about the modelling technique.
The general idea is to create a 'mixture' model whereby datapoints can be described by either the linear model (inliers) or a modified linear model with different mean and larger variance (outliers).
The likelihood is evaluated over a mixture of two likelihoods, one for 'inliers', one for 'outliers'. A Bernoulli distribution is used to randomly assign datapoints in N to either the inlier or outlier groups, and we sample the model as usual to infer robust model parameters and inlier / outlier flags:
$$
\mathcal{logL} = \sum_{i}^{i=N} log \left[ \frac{(1 - B_{i})}{\sqrt{2 \pi \sigma_{in}^{2}}} exp \left( - \frac{(x_{i} - \mu_{in})^{2}}{2\sigma_{in}^{2}} \right) \right] + \sum_{i}^{i=N} log \left[ \frac{B_{i}}{\sqrt{2 \pi (\sigma_{in}^{2} + \sigma_{out}^{2})}} exp \left( - \frac{(x_{i}- \mu_{out})^{2}}{2(\sigma_{in}^{2} + \sigma_{out}^{2})} \right) \right]
$$
where:
$\bf{B}$ is Bernoulli-distributed $B_{i} \in [0_{(inlier)},1_{(outlier)}]$
Define model
End of explanation
"""
with mdl_signoise:
## two-step sampling to create Bernoulli inlier/outlier flags
step1 = pm.Metropolis([frac_outliers, yest_out, sigma_y_out, b0, b1])
step2 = pm.step_methods.BinaryGibbsMetropolis([is_outlier])
## take samples
traces_signoise = pm.sample(20000, step=[step1, step2], tune=10000, progressbar=True)
"""
Explanation: Sample
End of explanation
"""
traces_signoise[-10000:]['b0_intercept']
_ = pm.traceplot(traces_signoise[-10000:], figsize=(12,len(traces_signoise.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_signoise[-1000:]).iterrows()})
"""
Explanation: View Traces
End of explanation
"""
outlier_melt = pd.melt(pd.DataFrame(traces_signoise['is_outlier', -1000:],
columns=['[{}]'.format(int(d)) for d in dfhoggs.index]),
var_name='datapoint_id', value_name='is_outlier')
ax0 = sns.pointplot(y='datapoint_id', x='is_outlier', data=outlier_melt,
kind='point', join=False, ci=None, size=4, aspect=2)
_ = ax0.vlines([0,1], 0, 19, ['b','r'], '--')
_ = ax0.set_xlim((-0.1,1.1))
_ = ax0.set_xticks(np.arange(0, 1.1, 0.1))
_ = ax0.set_xticklabels(['{:.0%}'.format(t) for t in np.arange(0,1.1,0.1)])
_ = ax0.yaxis.grid(True, linestyle='-', which='major', color='w', alpha=0.4)
_ = ax0.set_title('Prop. of the trace where datapoint is an outlier')
_ = ax0.set_xlabel('Prop. of the trace where is_outlier == 1')
"""
Explanation: NOTE:
During development I've found that 3 datapoints id=[1,2,3] are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is unstable between runs: the posterior surface appears to have a small number of solutions with very similar probability.
The NUTS sampler seems to work okay, and indeed it's a nice opportunity to demonstrate a custom likelihood which is possible to express as a theano function (thus allowing a gradient-based sampler like NUTS). However, with a more complicated dataset, I would spend time understanding this instability and potentially prefer using more samples under Metropolis-Hastings.
Declare Outliers and Compare Plots
View ranges for inliers / outlier predictions
At each step of the traces, each datapoint may be either an inlier or outlier. We hope that the datapoints spend an unequal time being one state or the other, so let's take a look at the simple count of states for each of the 20 datapoints.
End of explanation
"""
cutoff = 5
dfhoggs['outlier'] = np.percentile(traces_signoise[-1000:]['is_outlier'],cutoff, axis=0)
dfhoggs['outlier'].value_counts()
"""
Explanation: Observe:
The plot above shows the number of samples in the traces in which each datapoint is marked as an outlier, expressed as a percentage.
In particular, 3 points [1, 2, 3] spend >=95% of their time as outliers
Contrastingly, points at the other end of the plot close to 0% are our strongest inliers.
For comparison, the mean posterior value of frac_outliers is ~0.35, corresponding to roughly 7 of the 20 datapoints. You can see these 7 datapoints in the plot above, all those with a value >50% or thereabouts.
However, only 3 of these points are outliers >=95% of the time.
See note above regarding instability between runs.
The 95% cutoff we choose is subjective and arbitrary, but I prefer it for now, so let's declare these 3 to be outliers and see how it looks compared to Jake Vanderplas' outliers, which were declared in a slightly different way as points with means above 0.68.
Declare outliers
Note:
+ I will declare outliers to be datapoints that have value == 1 at the 5-percentile cutoff, i.e. in the percentiles from 5 up to 100, their values are 1.
+ Try for yourself altering cutoff to larger values, which leads to an objective ranking of outlier-hood.
End of explanation
"""
g = sns.FacetGrid(dfhoggs, size=8, hue='outlier', hue_order=[True,False],
palette='Set1', legend_out=False)
lm = lambda x, samp: samp['b0_intercept'] + samp['b1_slope'] * x
pm.plot_posterior_predictive_glm(traces_ols[-1000:],
eval=np.linspace(-3, 3, 10), lm=lm, samples=200, color='#22CC00', alpha=.2)
pm.plot_posterior_predictive_glm(traces_studentt[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#FFA500', alpha=.5)
pm.plot_posterior_predictive_glm(traces_signoise[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#357EC7', alpha=.3)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='').add_legend()
_ = g.axes[0][0].annotate('OLS Fit: Green\nStudent-T Fit: Orange\nSignal Vs Noise Fit: Blue',
size='x-large', xy=(1,0), xycoords='axes fraction',
xytext=(-160,10), textcoords='offset points')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
"""
Explanation: Posterior Prediction Plots for OLS vs StudentT vs SignalNoise
End of explanation
"""
|
renecnielsen/twitter-diy | ipynb/02 Parse Twitter Data.ipynb | mit | from IPython.core.display import HTML
styles = open("../css/custom.css", "r").read()
HTML(styles)
"""
Explanation: Parse Twitter Data
Import retrieved tweets (from JSON file, pickle or similar)
Read in individual tweets
Create TSV file (and drop unwanted data)
Jupyter Notebook Style
Let's make this thing look nice.
End of explanation
"""
import sys,re,json,os,csv
import numpy as np
import cPickle as pickle
import uuid
from IPython.display import display_javascript, display_html, display
"""
Explanation: Get Data and Enrich It
End of explanation
"""
picklepath = '/Users/rcn/Desktop/twitter-analysis/data/raw/tweets.p'
tweets = pickle.load( open(picklepath, "rb" ) )
"""
Explanation: Read JSON or Pickle File with Tweets
Example pickle, Mac: /Users/[username]/Documents/twitter-analysis/data/raw/tweets.p
End of explanation
"""
print('We have %d tweets in total' % len(tweets))
"""
Explanation: Number of Tweets
End of explanation
"""
class RenderJSON(object):
def __init__(self, json_data):
if isinstance(json_data, dict):
self.json_str = json.dumps(json_data)
else:
self.json_str = json
self.uuid = str(uuid.uuid4())
def _ipython_display_(self):
display_html('<div id="{}" style="height: 600px; width:100%;"></div>'.format(self.uuid),
raw=True
)
display_javascript("""
require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() {
document.getElementById('%s').appendChild(renderjson(%s))
});
""" % (self.uuid, self.json_str), raw=True)
RenderJSON(tweets[0])
"""
Explanation: What Does a Tweet Look Like?
Let's make JSON look nice (with thanks to Renderjson)
End of explanation
"""
tweetLinebreakError=0
for tweet in tweets:
try:
tweet['text'] = tweet['text'].replace('\n', ' ').replace('\r', '')
except:
tweetLinebreakError+=1
tweet['text'] = 'NaN'
print('Failed removing line breaks in %d tweets' % tweetLinebreakError)
"""
Explanation: Get Rid of Line Breaks in Tweets
End of explanation
"""
jsonpath = '' # Path to JSON file
picklepath = '' # Path to pickle file
tsvpath = '/Users/rcn/Desktop/twitter-analysis/data/tweets.tsv' # Path to tsv file
"""
Explanation: Save Data to Disk
Setup Local Paths
Paths on your machine to the files you'd like to write to.
Example tsv, Mac: /Users/[username]/Documents/twitter-analysis/data/tweets.tsv
Example pickle, Mac: /Users/[username]/Documents/twitter-analysis/data/tweets.p
End of explanation
"""
with open(jsonpath, 'wb') as tweetsfile: # Get ready to write to output file
json.dump(tweets, tweetsfile) # Write tweets to json file
"""
Explanation: Save as JSON
End of explanation
"""
with open(picklepath, "wb") as tweetsfile:
pickle.dump(tweets, tweetsfile) # Write tweets to pickle file
"""
Explanation: Save as Pickle file
End of explanation
"""
header=['Tweet ID','Time','User','Username','Text','Language','User Location','Geo','Place','Likes','Retweets',
'Followers','Friends','Listed','Favourites','Hashtags','Mentions','Links','User Description']
outFile=csv.writer(open(tsvpath,'wb'),delimiter='\t')
outFile.writerow(header)
nIdError = 0
nDateError = 0
nNameError = 0
nScreenNameError = 0
nTextError = 0
nLanguageError = 0
nLocationError = 0
nGeoError = 0
nPlaceError = 0
nLikesError = 0
nRetweetsError = 0
nFollowersError = 0
nFriendsError = 0
nListedError = 0
nFavouritesError = 0
nTagsError = 0
nMentionsError = 0
nLinksError = 0
nDescriptionError = 0
documents=[]
for tweet in tweets:
outList=[]
try:
outList.append(tweet['id'])
documents.append(tweet['id'])
except:
outList.append('NaN')
documents.append('NaN')
nIdError+=1
try:
outList.append(tweet['created_at'])
documents.append(tweet['created_at'])
except:
outList.append('NaN')
documents.append('NaN')
nDateError+=1
try:
outList.append(tweet['user']['name'].encode('utf-8'))
documents.append(tweet['user']['name'].encode('utf-8'))
except:
nNameError+=1
outList.append('NaN')
documents.append('NaN')
try:
outList.append(tweet['user']['screen_name'])
documents.append(tweet['user']['screen_name'])
except:
nScreenNameError+=1
outList.append('NaN')
documents.append('NaN')
try:
outList.append(tweet['text'].encode('utf-8'))
documents.append(tweet['text'].encode('utf-8'))
except:
outList.append('NaN')
documents.append('NaN')
nTextError+=1
try:
outList.append(tweet['lang'])
documents.append(tweet['lang'])
except:
outList.append('NaN')
documents.append('NaN')
nLanguageError+=1
try:
outList.append(tweet['user']['location'].encode('utf-8'))
documents.append(tweet['user']['location'].encode('utf-8'))
except:
outList.append('NaN')
documents.append('NaN')
nLocationError+=1
try:
outList.append(tweet['geo'].encode('utf-8'))
documents.append(tweet['geo'].encode('utf-8'))
except:
outList.append('NaN')
documents.append('NaN')
nGeoError+=1
try:
outList.append(tweet['place'].encode('utf-8'))
documents.append(tweet['place'].encode('utf-8'))
except:
outList.append('NaN')
documents.append('NaN')
nPlaceError+=1
try:
outList.append(tweet['favorite_count'])
documents.append(tweet['favorite_count'])
except:
outList.append('NaN')
documents.append('NaN')
nLikesError+=1
try:
outList.append(tweet['retweet_count'])
documents.append(tweet['retweet_count'])
except:
outList.append('NaN')
documents.append('NaN')
nRetweetsError+=1
try:
outList.append(tweet['user']['followers_count'])
documents.append(tweet['user']['followers_count'])
except:
outList.append('NaN')
documents.append('NaN')
nFollowersError+=1
try:
outList.append(tweet['user']['friends_count'])
documents.append(tweet['user']['friends_count'])
except:
outList.append('NaN')
documents.append('NaN')
nFriendsError+=1
try:
outList.append(tweet['user']['listed_count'])
documents.append(tweet['user']['listed_count'])
except:
outList.append('NaN')
documents.append('NaN')
nListedError+=1
try:
outList.append(tweet['user']['favourites_count'])
documents.append(tweet['user']['favourites_count'])
except:
outList.append('NaN')
documents.append('NaN')
nFavouritesError+=1
try:
        tweetTags=','.join([h['text'].lower() for h in tweet['entities']['hashtags']])
        outList.append(tweetTags.encode('utf-8'))
        documents.append(tweetTags.encode('utf-8'))
except:
nTagsError+=1
outList.append('NaN')
documents.append('NaN')
try:
        tweetMentions=','.join([m['screen_name'].lower() for m in tweet['entities']['user_mentions']])
        outList.append(tweetMentions.encode('utf-8'))
        documents.append(tweetMentions.encode('utf-8'))
except:
nMentionsError+=1
outList.append('NaN')
documents.append('NaN')
try:
        tweetLinks=','.join([u['expanded_url'].lower() for u in tweet['entities']['urls']])
        outList.append(tweetLinks.encode('utf-8'))
        documents.append(tweetLinks.encode('utf-8'))
except:
nLinksError+=1
outList.append('NaN')
documents.append('NaN')
try:
outList.append(tweet['user']['description'].encode('utf-8'))
documents.append(tweet['user']['description'].encode('utf-8'))
except:
nDescriptionError+=1
outList.append('NaN')
documents.append('NaN')
outFile.writerow(outList)
print "%d ID errors." % nIdError
print "%d date errors." % nDateError
print "%d name errors." % nNameError
print "%d screen name errors." % nScreenNameError
print "%d text errors." % nTextError
print "%d language errors." % nLanguageError
print "%d user location errors." % nLocationError
print "%d tweet geo errors." % nGeoError
print "%d tweet place errors." % nPlaceError
print "%d likes errors." % nLikesError
print "%d retweets errors." % nRetweetsError
print "%d followers errors." % nFollowersError
print "%d friends errors." % nFriendsError
print "%d listed errors." % nListedError
print "%d favourites errors." % nFavouritesError
print "%d hashtag errors." % nTagsError
print "%d mention errors." % nMentionsError
print "%d link errors." % nLinksError
print "%d Description errors." % nDescriptionError
"""
Explanation: Save as TSV
End of explanation
"""
|
uwkejia/Clean-Energy-Outlook | examples/Demo.ipynb | mit | from ceo import data_cleaning
from ceo import missing_data
from ceo import svr_prediction
from ceo import ridge_prediction
"""
Explanation: Examples
Importing libraries
End of explanation
"""
data_cleaning.clean_all_data()
"""
Explanation: data_cleaning
The data_cleaning module is used to clean and organize the data into 51 CSV files corresponding to the 50 states of the US and the District of Columbia.
The wrapping function clean_all_data takes all the data sets as input and sorts the data into CSV files for the states.
The CSVs are stored in the Cleaned Data directory which is under the Data directory.
End of explanation
"""
missing_data.predict_all()
"""
Explanation: missing_data
The missing_data module is used to estimate the missing data of the GDP (from 1960 - 1962) and determine the values of the predictors (from 2016-2020).
The wrapping function predict_all takes the CSV files of the states as input and stores the predicted missing values in the same CSV files.
The CSVs generated replace the previous CSV files in the Cleaned Data directory which is under the Data directory.
End of explanation
"""
ridge_prediction.ridge_predict_all()
"""
Explanation: ridge_prediction
The ridge_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using ridge regression.
The wrapping function ridge_predict_all takes the CSV files of the states as input and stores the future values of the energies in another CSV file under Ridge Regression folder under the Predicted Data directory.
End of explanation
"""
svr_prediction.SVR_predict_all()
"""
Explanation: svr_prediction
The svr_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using Support Vector Regression
The wrapping function SVR_predict_all takes the CSV files of the states as input and stores the future values of the energies in another CSV file under SVR folder under the Predicted Data directory.
End of explanation
"""
%%HTML
<div class='tableauPlaceholder' id='viz1489609724011' style='position: relative'><noscript><a href='#'><img alt='Clean Energy Production in the contiguous United States(in million kWh) ' src='https://public.tableau.com/static/images/PB/PB87S38NW/1_rss.png' style='border: none' /></a></noscript><object class='tableauViz' style='display:none;'><param name='host_url' value='https%3A%2F%2Fpublic.tableau.com%2F' /> <param name='path' value='shared/PB87S38NW' /> <param name='toolbar' value='yes' /><param name='static_image' value='https://public.tableau.com/static/images/PB/PB87S38NW/1.png' /> <param name='animate_transition' value='yes' /><param name='display_static_image' value='yes' /><param name='display_spinner' value='yes' /><param name='display_overlay' value='yes' /><param name='display_count' value='yes' /></object></div> <script type='text/javascript'> var divElement = document.getElementById('viz1489609724011'); var vizElement = divElement.getElementsByTagName('object')[0]; vizElement.style.width='1004px';vizElement.style.height='869px'; var scriptElement = document.createElement('script'); scriptElement.src = 'https://public.tableau.com/javascripts/api/viz_v1.js'; vizElement.parentNode.insertBefore(scriptElement, vizElement); </script>
"""
Explanation: plots
Visualization is done using the Tableau software. The Tableau workbook for the predicted data is included in the repository. The Tableau dashboard created for this data is illustrated below:
End of explanation
"""
|
vierth/chinese_stylometry | Stanford DH Asia Stylometry.ipynb | gpl-3.0 | %pylab inline
pylab.rcParams['figure.figsize']=(12,8)
"""
Explanation: Digital Humanities Asia Workshop
Stylometrics and Genre Research in Imperial Chinese Studies
Coding for Stylometric Analysis
Paul Vierthaler, Boston College
@pvierth, [email protected]
Text encodings
It is important to know the encodings of the files you are working with. This is probably one of the largest difficulties faced by people working with Chinese language texts.
Common encodings
Chinese-language digital texts come in a variety of encodings.
UTF-8
This is the easiest to work with. UTF, which stands for Unicode Transformation Format, is an international character set that encodes texts in all languages. It extends the ASCII character set and is the most common encoding on the internet.
GB 2312
This was the official character set of the People's Republic of China. It is a simplified character set.
GBK
GBK extends GB 2312, adding missing characters.
GB 18030
GB 18030 was designed to replace GB 2312 and GBK in 2005. It covers the full Unicode character set while maintaining compatibility with GB 2312, so it can also represent traditional characters.
Big5
Big5 is a traditional Chinese format that is common in Taiwan and Hong Kong.
GB 2312 is still very common.
Many websites and text files containing Chinese text still use GB 2312.
We will generally try to convert texts to UTF-8 if they are not already UTF-8.
File Organization
If we want to perform Stylometric analysis and compare a variety of texts, it is easiest to ensure that they are all in the same folder. This will allow us to write code that cleans the text and performs the analysis quickly and easily.
Make a Folder for your files.
This will need to be in the same folder as this Jupyter notebook. I have provided a collection of files to analyze as part of the workshop. Name the folder something sensible. I have chosen to call the included folder "corpus."
Decide on a way to store metadata.
If we want to keep track of information about our texts, we need to decide on a way to do this. I prefer to include a metadata file, describing each text, in the same folder as my Python script. Each text is given an ID as its file name. We will use that ID to look up information about the text.
Setting up plotting for notebook
The following code sets up the plotting used in this notebook. Feel free to ignore this.
End of explanation
"""
my_file = open("test.txt", "r")
file_contents = my_file.read()
print(file_contents)
"""
Explanation: Let's code!
Opening a file and reading it to string
This is how we will be getting our data into Python.
End of explanation
"""
my_file = open("test.txt", "r", encoding="utf-8")
file_contents = my_file.read()
print(file_contents)
"""
Explanation: Opening parameters
open() takes several parameters: first the file path, then the open mode.
"r" means read
"w" means write
"a" means append
Be careful with "w"!
If you open a file in write mode, if it exists, the program will wipe the files contents without warning you. If it doesn't exist, it will automatically create the file.
Setting the encoding
By default, strings in Python 3 are Unicode. If the file you are opening is not UTF-8, you have to tell Python which encoding to use. If it is UTF-8, you generally don't have to tell it anything.
End of explanation
"""
my_file = open("test.txt", "r", encoding="utf-8", errors="replace")
file_contents = my_file.read()
print(file_contents)
"""
Explanation: Dealing with Errors
When you open many files at once, you will sometimes run into encoding errors no matter what you do. You have several options: you can delete the bad character, or replace it with a question mark. The corpus I've provided doesn't have any of these issues, but as you adapt this code to run in the wild, you may run into some.
End of explanation
"""
import os
for root, dirs, files in os.walk("corpus"):
for filename in files:
# I do not want to open hidden files
if filename[0] != ".":
# open the file
f = open(root + "/" + filename, "r", encoding = "utf8")
# read the contents to a variable
c = f.read()
# make sure to close the file when you are done
f.close()
# check to see if your code is working
# here I am just printing the length of the string
# printing the string would take up a lot of room.
print(len(c))
"""
Explanation: Opening multiple files.
You don't want to open each file you are interested in one at a time. Here we will import a library that will help us with this and save the contents.
On Windows machines you will need to specify the encoding of a Chinese text file, even when it is UTF-8.
End of explanation
"""
import re
def clean(instring):
# Remove mid-file markers
instring = re.sub(r'~~~START\|.+?\|START~~~', "", instring)
# This regex will remove all letters and numbers
instring = re.sub(r'[a-zA-Z0-9]', "", instring)
# A variety of characters to remove
unwanted_chars = ['』','。', '!', ',', ':', '、', '(',
')', ';', '?', '〉', '〈', '」', '「',
'『', '“', '”', '!', '"', '#', '$', '%',
'&', "'", '(', ')', '*', '+', ',', '-',
'.', '/', "《", "》", "·", "a", "b"]
for char in unwanted_chars:
# replace each character with nothing
instring = instring.replace(char, "")
# return the resulting string.
return instring
"""
Explanation: Save the information to use later.
info_list = []
for root, dirs, files in os.walk("corpus"):
    for filename in files:
        if filename[0] != ".":
            f = open(root + "/" + filename, "r")
            c = f.read()
            f.close()
            info_list.append(c)
for c in info_list:
    print(len(c))
Cleaning the texts
The texts must first be cleaned before we can do anything with them. The best way to do this is to write a function that will perform the cleaning. We will then call it whenever we need our texts to be cleaned. We will remove most unwanted characters with regular expressions. As there will be multiple characters not matched by the regex, we will use a loop for the rest.
End of explanation
"""
info_list = []
# just for demonstration purposes
not_cleaned = []
for root, dirs, files in os.walk("corpus"):
for filename in files:
if filename[0] != ".":
f = open(root + "/" + filename, "r", encoding="utf8")
c = f.read()
f.close()
not_cleaned.append(c)
info_list.append(clean(c))
print("This is before:" + not_cleaned[0][:30])
print("This is after: " + info_list[0][:30])
"""
Explanation: Clean the text before saving it
End of explanation
"""
info_list = []
# just for demonstration purposes
not_cleaned = []
for root, dirs, files in os.walk("corpus"):
for filename in files:
if filename[0] != ".":
f = open(root + "/" + filename, "r", encoding="utf8")
c = f.read()
f.close()
not_cleaned.append(c)
# remove white space
c = re.sub("\s+", "", c)
info_list.append(clean(c))
print("This is before:" + not_cleaned[0][:30])
print("This is after: " + info_list[0][:30])
"""
Explanation: Remove Whitespace
As there was no whitespace in the original texts, you might want to remove it from your digital copies. We can do this easily with a regular expression at the file-reading stage, or we could add it to the cleaning function if we like.
End of explanation
"""
# This function does not retain the leftover small section at
# the end of the text
def textBreak(inputstring):
# Decide how long each section should be
divlim = 10000
# Calculate how many loops to run
loops = len(inputstring)//divlim
# Make an empty list to save the results
save = []
# Save chunks of equal length
for i in range(0, loops):
save.append(inputstring[i * divlim: (i + 1) * divlim])
return save
break_apart = True
if break_apart == True:
broken_chunks = []
for item in info_list:
broken_chunks.extend(textBreak(item))
# Check to see if it worked.
print(len(broken_chunks[0]))
"""
Explanation: Decide how to break apart strings
Now that we have a clean string to analyze, we will want to decide how to analyze it. The first step is to decide if we want to look at the entire text, or break it apart into equal lengths. There are advantages and disadvantages to each. I will show you how to break apart the texts. To not break the text apart, simply change break_apart to False.
End of explanation
"""
# Create a dictionary to store the information
metadata = {}
# open and extract the string
metadatafile = open("metadata.txt", "r", encoding="utf8")
metadatastring = metadatafile.read()
metadatafile.close()
# split into by line
lines = metadatastring.split("\n")
for line in lines:
# split using tabs
cells = line.split("\t")
# use the first column as the key, which I use store
# the rest of the columns
metadata[cells[0]] = cells[1:]
print(metadata)
"""
Explanation: Deal with Metadata
If you have structured your data well, you should have a metadata file that keeps track of each document in your corpus. This is not essential, but it is very helpful.
Read metadata file into a dictionary.
There are a variety of ways of doing this. I am just going to break the text apart and build the dictionary manually.
End of explanation
"""
# Create empty lists to store the information
info_list = []
title_list = []
author_list = []
era_list = []
genre_list = []
# Create dictionaries store unique info:
title_author = {}
title_era = {}
title_genre = {}
for root, dirs, files in os.walk("corpus"):
for filename in files:
if filename[0] != ".":
f = open(root + "/" + filename, "r", encoding="utf8")
c = f.read()
f.close()
c = re.sub("\s+", "", c)
c = clean(c)
# Get metadata. the [:-4] removes the .txt from filename
metainfo = metadata[filename[:-4]]
info_list.append(c)
title_list.append(metainfo[0])
author_list.append(metainfo[1])
era_list.append(metainfo[2])
genre_list.append(metainfo[3])
title_author[metainfo[0]] = metainfo[1]
title_era[metainfo[0]] = metainfo[2]
title_genre[metainfo[0]] = metainfo[3]
print(title_list)
"""
Explanation: Use the Metadata to store information about each file
I usually store the information in parallel lists. This way it is easy for the analysis part of the software to attach different labels.
The following code applies to using whole files (rather than breaking them apart).
End of explanation
"""
# Create empty lists/dictionaries to store the information
info_list = []
title_list = []
author_list = []
era_list = []
genre_list = []
title_author = {}
title_era = {}
title_genre = {}
# We should also track which section number
section_number = []
for root, dirs, files in os.walk("corpus"):
for filename in files:
if filename[0] != ".":
f = open(root + "/" + filename, "r", encoding="utf8")
c = f.read()
f.close()
c = re.sub("\s+", "", c)
c = clean(c)
# Get metadata. the [:-4] removes the .txt from filename
metainfo = metadata[filename[:-4]]
# The dictionary formation stays the same
title_author[metainfo[0]] = metainfo[1]
title_era[metainfo[0]] = metainfo[2]
title_genre[metainfo[0]] = metainfo[3]
# Break the Text apart
broken_sections = textBreak(c)
# We will need to extend, rather than append
info_list.extend(broken_sections)
title_list.extend([metainfo[0] for i in
range(0,len(broken_sections))])
author_list.extend([metainfo[1] for i in
range(0,len(broken_sections))])
era_list.extend([metainfo[2] for i in
range(0,len(broken_sections))])
genre_list.extend([metainfo[3] for i in
range(0,len(broken_sections))])
section_number.extend([i for i in range(0, len(broken_sections))])
print(author_list[:20])
"""
Explanation: It is a bit more complicated if you break texts apart
End of explanation
"""
# Create empty lists/dictionaries to store the information
info_list = []
title_list = []
author_list = []
era_list = []
genre_list = []
section_number = []
title_author = {}
title_era = {}
title_genre = {}
break_apart = False
for root, dirs, files in os.walk("corpus"):
for filename in files:
if filename[0] != ".":
f = open(root + "/" + filename, "r", encoding="utf8")
c = f.read()
f.close()
c = re.sub("\s+", "", c)
c = clean(c)
# Get metadata. the [:-4] removes the .txt from filename
metainfo = metadata[filename[:-4]]
title_author[metainfo[0]] = metainfo[1]
title_era[metainfo[0]] = metainfo[2]
title_genre[metainfo[0]] = metainfo[3]
if not break_apart:
info_list.append(c)
title_list.append(metainfo[0])
author_list.append(metainfo[1])
era_list.append(metainfo[2])
genre_list.append(metainfo[3])
else:
broken_sections = textBreak(c)
info_list.extend(broken_sections)
title_list.extend([metainfo[0] for i in
range(0,len(broken_sections))])
author_list.extend([metainfo[1] for i in
range(0,len(broken_sections))])
era_list.extend([metainfo[2] for i in
range(0,len(broken_sections))])
genre_list.extend([metainfo[3] for i in
range(0,len(broken_sections))])
section_number.extend([i for i in range(0, len(broken_sections))])
"""
Explanation: Let's put these two together
Let's add some logic so we can easily switch between the two.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
"""
Explanation: Now we can start calculating common characters
There are a variety of ways to do this. Here I will just use code packaged in the Sci-kit learn module.
End of explanation
"""
vectorizer = CountVectorizer(analyzer="word", ngram_range=(1,1),
token_pattern="\S+", max_features = 100)
"""
Explanation: Tokenizing the text with sci-kit learn vectorizer
When you use sci-kit learn, you can give it a lot of options.
If you have a file with whitespace between items:
You can use whitespace to tokenize your documents. For example, if you have used the Stanford Word Parser, then you can use a vectorizer set up like this to take advantage of that segmentation:
End of explanation
"""
vectorizer = CountVectorizer(analyzer="char",ngram_range=(1,1),
max_features = 100)
"""
Explanation: If you do not have a file with whitespace between items:
Here we will just use characters to tokenize. This works well with Imperial Chinese, particularly wenyan texts. Here we are telling it to look at characters rather than words, to use 1-grams, and to keep the 100 most common characters.
End of explanation
"""
word_count_matrix = vectorizer.fit_transform(info_list)
# This will tell you the features found by the vectorizer.
vocab = vectorizer.get_feature_names()
print(vocab)
"""
Explanation: Apply the vectorizer to the texts
End of explanation
"""
import pandas as pd
from pandas import Series
fullcorpus = ""
for text in info_list:
fullcorpus += text
tokens = list(fullcorpus)
corpus_series = Series(tokens)
values = corpus_series.value_counts()
print(values[:10])
"""
Explanation: This are not in order of the most common character. We can use a library called pandas to asscertain this easily.
End of explanation
"""
# Just use this instead when creating your vectorizer.
# To get TF, tell it to not use idf. otherwise, set to true
vectorizer = TfidfVectorizer(use_idf=False, analyzer="char",
ngram_range=(1,1), max_features=10)
"""
Explanation: Term Frequency and Term Frequency - Inverse Document Frequency
sci-kit learn also has a term frequency and tfidf vectorizer that you can use, depending on how you want to think about your texts.
End of explanation
"""
from pandas import DataFrame
# Recreate a CountVectorizer object
vectorizer = CountVectorizer(analyzer="char", ngram_range=(1,1),
max_features=100)
word_count_matrix=vectorizer.fit_transform(info_list)
vocab = vectorizer.get_feature_names()
# We will need a dense matrix, not a sparse matrix
dense_words = word_count_matrix.toarray()
corpus_dataframe = DataFrame(dense_words, columns=vocab)
# Calculate how long each document is
doclengths = corpus_dataframe.sum(axis=1)
# Make a series that is the same length as the document length series
# but populated with 1000.
thousand = Series([1000 for i in range(0,len(doclengths))])
# Divide this by the length of each document
adjusteddoclengths = thousand.divide(doclengths)
# Multiply the corpus DataFrame by this adjusting factor
per_thousand = corpus_dataframe.multiply(adjusteddoclengths, axis = 0)
print(per_thousand)
# Convert back to word_count_matrix
word_count_matrix = per_thousand.values  # .as_matrix() was removed in newer pandas versions
"""
Explanation: A bit more on normalization
If you are using texts of different lengths, you will need to use some sort of normalization if you are hoping to use euclidean distance as a similarity measure. One of the easier ways to normalize is to adjust the raw character counts to occurrences per thousand characters. The code below does this using pandas.
End of explanation
"""
my_vocab = ["的", "之", "曰", "说"]
vectorizer = CountVectorizer(analyzer="char",ngram_range=(1,1),
vocabulary = my_vocab)
"""
Explanation: Using Vocabulary
If you want to, you can give the vectorizer a set vocabulary to pay attention to, rather than just using the most common characters. This comes in handy when you have an idea which characters distinguish the texts most efficiently.
End of explanation
"""
from sklearn.metrics.pairwise import euclidean_distances, cosine_similarity
euc_or_cosine = "euc"
if euc_or_cosine == "euc":
similarity = euclidean_distances(word_count_matrix)
elif euc_or_cosine == "cos":
similarity = cosine_similarity(word_count_matrix)
"""
Explanation: Hierarchical Cluster Analysis: Making a Dendrogram
Now we can start calculating the relationships among these works. We will have to decide if we want to use euclidean distance or cosine similarity. We will import several tools to help us do this.
Euclidean Distance
Each vector is understood as a point in space. You will need to calculate the distance between each point. We will use these to judge similarity.
Cosine similarity
Here we are interested in the direction that each vector points. You will calculate the angle between each pair of vectors.
End of explanation
"""
from scipy.cluster.hierarchy import ward, dendrogram
linkage_matrix = ward(similarity)
"""
Explanation: You now have a similarity matrix
The similarity variable now contains the similarity measure between each document in the corpus. You can use this to create a linkage matrix which will allow you to visualize the relationships among these texts as a dendrogram.
Here we will use the "Ward" algorithm to cluster the texts together.
End of explanation
"""
# import the plotting library.
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.font_manager
# Set the font to a Chinese Font Family
# STHeiti works for Macs, SimHei should work on Windows
# Linux does not come with a compatible Chinese font.
# Here I have defaulted to a Japanese font.
# I've added logic that checks what system you are using.
from sys import platform
if platform == "linux" or platform == "linux2":
print("Sorry, I can't see the appropriate fonts, defaulting to Japanese")
matplotlib.rc('font', family="TakaoPGothic")
elif platform == "win32" or platform == "win64":
matplotlib.rc('font', family="SimHei")
elif platform == "darwin":
matplotlib.rc('font', family='STHeiti')
# Make the Dendrogram
dendrogram(linkage_matrix, labels=title_list)
plt.show()
"""
Explanation: Now it is time to visualize the relationships
When run normally, this will open a new window (and it will be higher quality than the image here). You will have to close it for later parts of the script to continue to run.
End of explanation
"""
dendrogram(linkage_matrix, labels=title_list)
# Add a Title
plt.title("Textual Relationships")
# Add x and y axis labels
plt.xlabel("Texts")
plt.ylabel("Distance")
# Set the angle of the labels so they are easier to read
plt.xticks(rotation=60)
# Show the plot
plt.show()
"""
Explanation: This can be made a bit more attractive
End of explanation
"""
# Set the size of the Figure
# This will make a Seven inch by Seven inch figure
plt.figure(figsize=(7,7))
dendrogram(linkage_matrix, labels=title_list)
# Add a Title
plt.title("Textual Relationships")
# Add x and y axis labels
plt.xlabel("Texts")
plt.ylabel("Distance")
# Set the angle of the labels so they are easier to read
plt.xticks(rotation=60)
plt.savefig("results.pdf")
"""
Explanation: Saving the Figure
To actually use the results, you will need to save the figure. You can save it in a variety of formats. I advise saving it as a PDF, which you can then edit further in Adobe Illustrator or some other vector-editing program.
End of explanation
"""
dendrogram(linkage_matrix, labels=title_list)
plt.title("Textual Relationships")
plt.xlabel("Texts")
plt.ylabel("Distance")
plt.xticks(rotation=60)
# Create a dictionary for color selection
# Here we are using genre as the basis for color
# You would need to change this if you wanted to color based on authorship.
color_dict = {"传奇":"red", "小说":"blue", "话本":"magenta"}
# Return information about the tick labels
plt_info = plt.gca()
tick_labels = plt_info.get_xmajorticklabels()
# Iterate through each tick label and assign a new color
for tick_label in tick_labels:
# Get the genre from the title to genre dictionary
genre = title_genre[tick_label.get_text()]
# Get the color from the dictionary
color = color_dict[genre]
# Set the color
tick_label.set_color(color)
# Show the plot
plt.show()
"""
Explanation: Adding some color
Sometimes it helps to add a bit of color to the figure so you can easily interpret it.
End of explanation
"""
from sklearn.decomposition import PCA
"""
Explanation: Principal Component Analysis
There are other ways to visualize the relationships among these texts. Principal component analysis is a way to explore the variance within the dataset. We can use much of the same data that we used for hierarchical cluster analysis.
Sci-kit learn also has the components necessary
You will need to import a few new modules
End of explanation
"""
# Create the PCA object
pca = PCA(n_components = 2)
# PCA requires a dense matrix. word_count_matrix is sparse
# unless you ran the normalization to per 1000 code above!
# Convert it to dense matrix
#dense_words = word_count_matrix.toarray()
dense_words = word_count_matrix
# Analyze the dataset
my_pca = pca.fit(dense_words).transform(dense_words)
"""
Explanation: Principal Components
PCA decomposes the dataset into abstracted components that describe the variance. These can be used as axes on which to replot the data. This will often allow you to get the best view of the data (or at least the most comprehensive).
How many components
Generally you will only need the first two principal components (which describe the most variance within the dataset). Sometimes you will be interested in the third and fourth components. For now, just the first two will be fine.
End of explanation
"""
import numpy as np
# The input here will be the information you want to use to color
# the graph.
def info_for_graph(input_list):
# This will return the unique values.
# [a, a, a, b, b] would become
# {a, b}
unique_values = set(input_list)
# create a list of numerical label and a dictionary to
# populate a list
unique_labels = [i for i in range(0, len(unique_values))]
unique_dictionary = dict(zip(unique_values, unique_labels))
# make class list
class_list = []
for item in input_list:
class_list.append(unique_dictionary[item])
return unique_labels, np.array(class_list), unique_values
"""
Explanation: Plotting the results
Ploting the actual PCA is the first task. In a moment we will look at how to plot the loadings.
Setting up to Plot
You will need to decide how to visualize the results. Do you want to visualize by author, title, or genre?
We will write a function to take care of this.
End of explanation
"""
unique_labels, info_labels, unique_genres = info_for_graph(genre_list)
# Make a color list, the same length as unique labels
colors = ["red", "magenta", "blue"]
# Make the figure
plt.figure()
# Plot the points using color information.
# This code is partially adapted from brandonrose.org/clustering
for color, each_class, label in zip(colors, unique_labels, unique_genres):
plt.scatter(my_pca[info_labels == each_class, 0],
my_pca[info_labels == each_class, 1],
label = label, color = color)
# You should title the plot label your axes
plt.title("Principal Component Analysis")
plt.xlabel("PC1: " + "{0:.2f}".format(pca.explained_variance_ratio_[0] * 100)+"%")
plt.ylabel("PC2: " + "{0:.2f}".format(pca.explained_variance_ratio_[1] * 100)+"%")
# Give it a legend
plt.legend()
plt.show()
"""
Explanation: Using this information
This function returns everything we will need to properly visualize our Principal component analysis.
Call the function and use the results.
End of explanation
"""
unique_labels, info_labels, unique_genres = info_for_graph(genre_list)
colors = ["red", "magenta", "blue"]
plt.figure()
for color, each_class, label in zip(colors, unique_labels, unique_genres):
plt.scatter(my_pca[info_labels == each_class, 0],
my_pca[info_labels == each_class, 1],
label = label, color = color)
for i, text_label in enumerate(title_list):
plt.annotate(text_label, xy = (my_pca[i, 0], my_pca[i, 1]),
xytext=(my_pca[i, 0], my_pca[i, 1]),
size=8)
plt.title("Principal Component Analysis")
plt.xlabel("PC1: " + "{0:.2f}".format(pca.explained_variance_ratio_[0] * 100)+"%")
plt.ylabel("PC2: " + "{0:.2f}".format(pca.explained_variance_ratio_[1] * 100)+"%")
plt.legend()
plt.show()
"""
Explanation: Adding labels
It is fairly simple to add a line of code that adds labels to the figure. This is useful when you want to know where individual texts fall. It is less useful when you want to plot many texts at once.
End of explanation
"""
loadings = pca.components_
# This will plot the locations of the loadings, but make the
# points completely transparent.
plt.scatter(loadings[0], loadings[1], alpha=0)
# Label and Title
plt.title("Principal Component Loadings")
plt.xlabel("PC1: " + "{0:.2f}".format(pca.explained_variance_ratio_[0] * 100)+"%")
plt.ylabel("PC2: " + "{0:.2f}".format(pca.explained_variance_ratio_[1] * 100)+"%")
# Iterate through the vocab and plot where each item falls on the loadings graph.
# Note that the numpy array holding the loadings is indexed [component, feature],
# the opposite order of the PCA score array used above.
for i, txt in enumerate(vocab):
plt.annotate(txt, (loadings[0, i], loadings[1, i]), horizontalalignment='center',
verticalalignment='center', size=8)
plt.show()
"""
Explanation: Loadings Plot
You will often want to know how the individual variables have influenced where each text falls. The following code will create a loadings plot using the same data.
End of explanation
"""
|
gon1213/SDC | find_lane_lines/CarND_LaneLines_P1/P1.ipynb | gpl-3.0 | #importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image
"""
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
"""
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
os.listdir("test_images/")
"""
Explanation: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
"""
Explanation: Run your solution on all test_images and make copies into the test_images directory.
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image with lines are drawn on lanes)
return result
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
End of explanation
"""
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
|
lukasmerten/CRPropa3 | doc/pages/example_notebooks/galactic_lensing/lensing_cr.v4.ipynb | gpl-3.0 | import crpropa
import numpy as np
# read data from CRPropa output into container.
# The data is weighted with the source energy ~E**-1
M = crpropa.ParticleMapsContainer()
crdata = np.genfromtxt("crpropa_output.txt")
Id = np.array([int(x) for x in crdata[:,0]])
E = crdata[:,3] * crpropa.EeV
E0 = crdata[:,4] * crpropa.EeV
px = crdata[:,11]
py = crdata[:,12]
pz = crdata[:,13]
galactic_longitude = np.arctan2(-1. * py, -1. *px)
galactic_latitude = np.pi / 2 - np.arccos( -pz / np.sqrt(px*px + py*py+ pz*pz) )
M.addParticles(Id, E, galactic_longitude, galactic_latitude, E0**-1)
# Alternatively, data can be added manually.
# This provides freedom to adapt to customized weights used in the simulation
for i in range(1000):
particleId = crpropa.nucleusId(1,1)
energy = 10 * crpropa.EeV
galCenter = crpropa.Vector3d(-1,0,0)
momentumVector = crpropa.Random.instance().randFisherVector(galCenter, 200)
M.addParticle(particleId, energy, momentumVector)
"""
Explanation: Galactic Lensing of Simulated Cosmic Rays
Deflection in galactic magnetic fields can be accounted for using galactic lenses. To use the lenses efficiently, the ouput of a CRPropa simulation needs first to be filled in probability maps. These maps are then transformed by the lenses according to the rigidity of the particles. From the maps then finally new particles can be generated.
Input Results from Extragalactic Simulation
End of explanation
"""
%matplotlib inline
import healpy
import matplotlib.pyplot as plt
#stack all maps
crMap = np.zeros(49152)
for pid in M.getParticleIds():
energies = M.getEnergies(int(pid))
for i, energy in enumerate(energies):
crMap += M.getMap(int(pid), energy * crpropa.eV )
#plot maps using healpy
healpy.mollview(map=crMap, title='Unlensed')
plt.savefig('unlensed_map.png')
"""
Explanation: Plot Maps
The probability maps for the individual particles can be accessed directly and plotted with healpy.
End of explanation
"""
%matplotlib inline
# The lens can be downloaded here: https://crpropa.github.io/CRPropa3/ --> Additional Resources
lens = crpropa.MagneticLens('pathto/lens.cfg')
lens.normalizeLens()
M.applyLens(lens)
#stack all maps
crMap = np.zeros(49152)
for pid in M.getParticleIds():
energies = M.getEnergies(int(pid))
for i, energy in enumerate(energies):
crMap += M.getMap(int(pid), energy * crpropa.eV )
#plot maps using healpy
healpy.mollview(map=crMap, title='Lensed')
plt.savefig('lensed_map.png')
"""
Explanation: Apply Galactic Lenses
To apply a lens to a map, a lens object needs to be created and then applied to the map. The normalization of the lens ensures that the lens does not distort the spectra.
End of explanation
"""
pids, energies, lons, lats = M.getRandomParticles(10)
# create a scatter plot of the particles
plt.subplot(111, projection='hammer')
plt.scatter(lons, lats, c=np.log10(energies), lw=0)
plt.grid()
plt.savefig('scattered_particles.png')
"""
Explanation: Generate Particles from Map
If needed, sets of individual particles can then be generated from the maps under preservation of the relative (transformed) input weights. Note that the number of particles you can draw from a map depends on the quality of sampling on the input pdf. As a rule of thumb it is most likely not safe to draw more particles from the map than have been used to create the map.
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/examples/detached_rotstar.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
"""
Explanation: Detached Binary: Roche vs Rotstar
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('mesh', times=[0.75], dataset='mesh01')
"""
Explanation: Adding Datasets
Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials:
End of explanation
"""
b['requiv@primary@component'] = 1.8
"""
Explanation: Running Compute
Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
End of explanation
"""
b.run_compute(irrad_method='none', distortion_method='roche', model='rochemodel')
b.run_compute(irrad_method='none', distortion_method='rotstar', model='rotstarmodel')
"""
Explanation: Now we'll compute synthetics at the times provided, using the default options:
End of explanation
"""
afig, mplfig = b.plot(model='rochemodel',show=True)
afig, mplfig = b.plot(model='rotstarmodel',show=True)
"""
Explanation: Plotting
End of explanation
"""
|
PMEAL/OpenPNM-Examples | Tutorials/intermediate_usage.ipynb | mit | import openpnm as op
import scipy as sp
"""
Explanation: Tutorial 2 of 3: Digging Deeper into OpenPNM
This tutorial will follow the same outline as Getting Started, but will dig a little bit deeper at each step to reveal the important features of OpenPNM that were glossed over previously.
Learning Objectives
Explore different network topologies, and learn some handy topological query methods
Create a heterogeneous domain with different geometrical properties in different regions
Learn about data exchange between objects
Utilize pore-scale models for calculating properties of all types
Propagate changing geometrical and thermo-physical properties to all dependent properties
Calculate the permeability tensor for the stratified media
Use the Workspace Manager to save and load a simulation
Building a Cubic Network
As usual, start by importing the OpenPNM and Scipy packages:
End of explanation
"""
pn = op.network.Cubic(shape=[20, 20, 10], spacing=0.0001, connectivity=8)
"""
Explanation: Let's generate a cubic network again, but with a different connectivity:
End of explanation
"""
Ps1 = pn.pores(['top', 'bottom'])
Ts1 = pn.find_neighbor_throats(pores=Ps1, mode='union')
geom1 = op.geometry.GenericGeometry(network=pn, pores=Ps1, throats=Ts1, name='boundaries')
Ps2 = pn.pores(['top', 'bottom'], mode='not')
Ts2 = pn.find_neighbor_throats(pores=Ps2, mode='xnor')
geom2 = op.geometry.GenericGeometry(network=pn, pores=Ps2, throats=Ts2, name='core')
"""
Explanation: This Network has pores distributed in a cubic lattice, but connected to diagonal neighbors due to the connectivity being set to 8 (the default is 6 which is orthogonal neighbors). The various options are outlined in the Cubic class's documentation which can be viewed with the Object Inspector in Spyder.
OpenPNM includes several other classes for generating networks including random topology based on Delaunay tessellations (Delaunay).
It is also possible to import networks from external sources, such as networks extracted from tomographic images or networks generated by external code.
Initialize and Build Multiple Geometry Objects
One of the main functionalities of OpenPNM is the ability to assign drastically different geometrical properties to different regions of the domain to create heterogeneous materials, such as layered structures. To demonstrate the motivation behind this feature, this tutorial will make a material that has different geometrical properties on the top and bottom surfaces compared to the internal pores. We need to create one Geometry object to manage the top and bottom pores, and a second to manage the remaining internal pores:
End of explanation
"""
geom1['pore.seed'] = sp.rand(geom1.Np)*0.5 + 0.2
geom2['pore.seed'] = sp.rand(geom2.Np)*0.5 + 0.2
"""
Explanation: The above statements result in two distinct Geometry objects, each applying to different regions of the domain. geom1 applies to only the pores on the top and bottom surfaces (automatically labeled 'top' and 'bottom' during the network generation step), while geom2 applies to the pores 'not' on the top and bottom surfaces.
The assignment of throats is more complicated and illustrates the find_neighbor_throats method, which is one of the more useful topological query methods on the Network class. In both of these calls, all throats connected to the given set of pores (Ps1 or Ps2) are found; however, the mode argument alters which throats are returned. The terms 'union' and 'intersection' are used in the "set theory" sense, such that 'union' returns all throats connected to the pores in the supplied list, while 'intersection' returns the throats that are only connected to the supplied pores. More specifically, if pores 1 and 2 have throats [1, 2] and [2, 3] as neighbors, respectively, then the 'union' mode returns [1, 2, 3] and the 'intersection' mode returns [2]. Note that in the code above this 'intersection' behavior is requested with mode='xnor'. A detailed description of this behavior is given in the topology documentation.
Assign Static Seed values to Each Geometry
In the getting started tutorial we only assigned 'static' values to the Geometry object, which we calculated explicitly. In this tutorial we will use the pore-scale models that are provided with OpenPNM. To get started, however, we'll assign static random seed values to each pore on both Geometry objects, by assigning random numbers to each Geometry's 'pore.seed' property:
End of explanation
"""
seeds = sp.zeros_like(pn.Ps, dtype=float)
seeds[pn.pores(geom1.name)] = geom1['pore.seed']
seeds[pn.pores(geom2.name)] = geom2['pore.seed']
print(sp.all(seeds > 0)) # Ensure all zeros are overwritten
"""
Explanation: Each of the above lines produced an array of different length, corresponding to the number of pores assigned to each Geometry object. This is accomplished by the calls to geom1.Np and geom2.Np, which return the number of pores on each object.
Every Core object in OpenPNM possesses the same set of methods for managing its data, such as counting the number of pore and throat values it represents; thus, pn.Np returns 4000 while geom1.Np and geom2.Np return 800 and 3200 respectively.
Accessing Data Distributed Between Geometries
The segmentation of the data between separate Geometry objects is essential to the management of pore-scale models, although it does create a complication: it's not easy to obtain a single array containing all the values of a given property for the whole network. It is technically possible to piece this data together manually since we know the locations where each Geometry object applies, but this is tedious so OpenPNM provides a shortcut. First, let's illustrate the manual approach using the 'pore.seed' values we have defined:
End of explanation
"""
seeds = pn['pore.seed']
"""
Explanation: The following code illustrates the shortcut approach, which accomplishes the same result as above in a single line:
End of explanation
"""
geom1.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.normal,
scale=0.00001, loc=0.00005,
seeds='pore.seed')
geom2.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.weibull,
shape=1.2, scale=0.00001, loc=0.00005,
seeds='pore.seed')
"""
Explanation: This shortcut works because the pn dictionary does not contain an array called 'pore.seed', so all associated Geometry objects are then checked for the requested array(s). If it is found, then OpenPNM essentially performs the interleaving of the data as demonstrated by the manual approach and returns all the values together in a single full-size array. If it is not found, then a standard KeyError message is received.
This exchange of data between Network and Geometry makes sense if you consider that Network objects act as a sort of master object relative to Geometry objects. Networks apply to all pores and throats in the domain, while Geometries apply to subsets of the domain, so if the Network needs some values from all pores it has direct access.
Add Pore Size Distribution Models to Each Geometry
Pore-scale models are mathematical functions that are applied to each pore (or throat) in the network to produce some local property value. Each of the modules in OpenPNM (Network, Geometry, Phase and Physics) has a "library" of pre-written models located under "models" (e.g. op.models.geometry). Below this level, the models are further categorized according to what property they calculate, and there are typically 2-3 models for each. For instance, under op.models.geometry.pore_size you will see random, normal and weibull among others.
Pore size distribution models are assigned to each Geometry object as follows:
End of explanation
"""
geom1.add_model(propname='throat.diameter',
model=op.models.misc.from_neighbor_pores,
pore_prop='pore.diameter',
mode='min')
geom2.add_model(propname='throat.diameter',
model=op.models.misc.from_neighbor_pores,
mode='min')
pn['pore.diameter'][pn['throat.conns']]
"""
Explanation: Pore-scale models tend to be the most complex (i.e. confusing) aspects of OpenPNM, so it's worth dwelling on the important points of the above two commands:
Both geom1 and geom2 have a models attribute where the parameters specified in the add command are stored for future use if/when needed. The models attribute actually contains a ModelsDict object which is a customized dictionary for storing and managing this type of information.
The propname argument specifies which property the model calculates. This means that the numerical results of the model calculation will be saved in their respective Geometry objects as geom1['pore.diameter'] and geom2['pore.diameter'].
Each model stores its result under the same propname, but these values do not conflict since each Geometry object presides over a unique subset of pores and throats.
The model argument contains a handle to the desired function, which is extracted from the models library of the relevant Module (Geometry in this case). Each Geometry object has been assigned a different statistical model, normal and weibull. This ability to apply different models to different regions of the domain is the reason multiple Geometry objects are permitted. The added complexity is well worth the added flexibility.
The remaining arguments are those required by the chosen model. In the above cases, these are the parameters that define each statistical distribution (scale, loc and shape), so the surface pores and internal pores receive differently shaped size distributions, as intended. The pore-scale models are well documented regarding what arguments are required and their meaning; as usual these can be viewed with the Object Inspector in Spyder.
Now that we've added pore diameter models to each Geometry, we can visualize the network in Paraview to confirm the distinctly different pore sizes in the surface regions:
Add Additional Pore-Scale Models to Each Geometry
In addition to pore diameter, there are several other geometrical properties needed to perform a permeability simulation. Let's start with throat diameter:
End of explanation
"""
geom1.add_model(propname='throat.endpoints',
model=op.models.geometry.throat_endpoints.spherical_pores)
geom2.add_model(propname='throat.endpoints',
model=op.models.geometry.throat_endpoints.spherical_pores)
geom1.add_model(propname='throat.area',
model=op.models.geometry.throat_area.cylinder)
geom2.add_model(propname='throat.area',
model=op.models.geometry.throat_area.cylinder)
geom1.add_model(propname='pore.area',
model=op.models.geometry.pore_area.sphere)
geom2.add_model(propname='pore.area',
model=op.models.geometry.pore_area.sphere)
geom1.add_model(propname='throat.conduit_lengths',
model=op.models.geometry.throat_length.conduit_lengths)
geom2.add_model(propname='throat.conduit_lengths',
model=op.models.geometry.throat_length.conduit_lengths)
"""
Explanation: Instead of using statistical distribution functions, the above lines use the from_neighbor_pores model, which determines each throat value based on the values of 'pore_prop' found in its neighboring pores. In this case, each throat is assigned the minimum pore diameter of its two neighboring pores. Other options for mode include 'max' and 'mean'.
We'll also need throat length as well as the cross-sectional areas of pores and throats for calculating the hydraulic conductance later.
End of explanation
"""
water = op.phases.GenericPhase(network=pn)
air = op.phases.GenericPhase(network=pn)
"""
Explanation: Create a Phase Object and Assign Thermophysical Property Models
For this tutorial, we will create generic Phase objects for water and air, then assign some pore-scale models for calculating water's properties. Alternatively, we could use the prewritten Water class included in OpenPNM, which comes complete with the necessary pore-scale models, but this would defeat the purpose of the tutorial.
End of explanation
"""
water['pore.temperature'] = 353 # K
"""
Explanation: Note that all Phase objects are automatically assigned standard temperature and pressure conditions when created. This can be adjusted:
End of explanation
"""
water.add_model(propname='pore.viscosity',
model=op.models.phases.viscosity.water)
"""
Explanation: A variety of pore-scale models are available for calculating Phase properties, generally taken from correlations in the literature. An empirical correlation specifically for the viscosity of water is available:
End of explanation
"""
phys1 = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom1)
phys2 = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom2)
"""
Explanation: Create Physics Objects for Each Geometry
Physics objects are where geometric information and thermophysical properties are combined to produce the pore and throat scale transport parameters. Thus we need to create one Physics object for EACH Phase and EACH Geometry:
End of explanation
"""
mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys1.add_model(propname='throat.hydraulic_conductance', model=mod)
phys2.add_model(propname='throat.hydraulic_conductance', model=mod)
"""
Explanation: Next add the Hagen-Poiseuille model to both:
End of explanation
"""
g = water['throat.hydraulic_conductance']
"""
Explanation: The same function (mod) was passed as the model argument to both Physics objects. This means that both objects will calculate the hydraulic conductance using the same function. A model must be assigned to both objects in order for the 'throat.hydraulic_conductance' property to be defined everywhere in the domain, since each Physics applies to a unique selection of pores and throats.
The "pore-scale model" mechanism was specifically designed to allow users to easily create their own custom models. Creating custom models is outlined in the advanced usage tutorial.
Accessing Data Distributed Between Multiple Physics Objects
Just as Network objects can retrieve data from separate Geometries as a single array with values in the correct locations, Phase objects can retrieve data from Physics objects as follows:
End of explanation
"""
g1 = phys1['throat.hydraulic_conductance'] # Save this for later
g2 = phys2['throat.hydraulic_conductance'] # Save this for later
"""
Explanation: Each Physics applies to the same subset of pores and throats as its associated Geometry, so its values are distributed spatially, but each Physics is also associated with a single Phase object. Consequently, only a Phase object can request all of the values within the domain pertaining to itself.
In other words, a Network object cannot aggregate the Physics data because it doesn't know which Phase is referred to. For instance, when asking for 'throat.hydraulic_conductance' it could refer to water or air conductivity, so it can only be requested by water or air.
Pore-Scale Models: The Big Picture
Having created all the necessary objects with pore-scale models, it is now time to demonstrate why the OpenPNM pore-scale model approach is so powerful. First, let's save the current hydraulic conductance values on phys1 and phys2 so we can compare against them later:
End of explanation
"""
geom1['pore.seed'] = sp.rand(geom1.Np)
geom2['pore.seed'] = sp.rand(geom2.Np)
water['pore.temperature'] = 370 # K
"""
Explanation: Now, let's alter the Geometry objects by assigning new random seeds, and adjust the temperature of water.
End of explanation
"""
geom1.regenerate_models()
geom2.regenerate_models()
"""
Explanation: So far we have not run the regenerate command on any of these objects, which means that the above changes have not yet been applied to all the dependent properties. Let's do this and examine what occurs at each step:
End of explanation
"""
water.regenerate_models()
"""
Explanation: These two lines trigger the re-calculation of all the size related models on each Geometry object.
End of explanation
"""
print(sp.all(phys1['throat.hydraulic_conductance'] == g1)) # g1 was saved above
print(sp.all(phys2['throat.hydraulic_conductance'] == g2) ) # g2 was saved above
"""
Explanation: This line causes the viscosity to be recalculated at the new temperature. Let's confirm that the hydraulic conductance has NOT yet changed since we have not yet regenerated the Physics objects' models:
End of explanation
"""
phys1.regenerate_models()
phys2.regenerate_models()
print(sp.all(phys1['throat.hydraulic_conductance'] != g1))
print(sp.all(phys2['throat.hydraulic_conductance'] != g2))
"""
Explanation: Finally, if we regenerate phys1 and phys2 we can see that the hydraulic conductance will be updated to reflect the new sizes on the Geometries and the new temperature on the Phase:
End of explanation
"""
alg = op.algorithms.StokesFlow(network=pn, phase=water)
"""
Explanation: Determine Permeability Tensor by Changing Inlet and Outlet Boundary Conditions
The :ref:getting started tutorial <getting_started> already demonstrated the process of performing a basic permeability simulation. In this tutorial, we'll perform the simulation in all three perpendicular dimensions to obtain the permeability tensor of our heterogeneous anisotropic material.
End of explanation
"""
alg.set_value_BC(values=202650, pores=pn.pores('right'))
alg.set_value_BC(values=101325, pores=pn.pores('left'))
alg.run()
"""
Explanation: Set boundary conditions for flow in the X-direction:
End of explanation
"""
Q = alg.rate(pores=pn.pores('right'))
"""
Explanation: The resulting pressure field can be seen using Paraview:
To determine the permeability coefficient we must find the flow rate through the network to use in Darcy's law. The StokesFlow class (and all analogous transport algorithms) possesses a rate method that calculates the net transport through a given set of pores:
End of explanation
"""
mu = sp.mean(water['pore.viscosity'])
"""
Explanation: To find K, we need to solve Darcy's law: Q = K*A*(P_in - P_out)/(mu*L). This requires knowing the viscosity and the macroscopic network dimensions:
End of explanation
"""
L = 20 * 0.0001
A = 20 * 10 * (0.0001**2)
"""
Explanation: The dimensions of the network can be determined manually from the shape and spacing specified during its generation:
End of explanation
"""
Kxx = Q * mu * L / (A * 101325)
"""
Explanation: The pressure drop was specified as 1 atm when setting boundary conditions, so Kxx can be found as:
End of explanation
"""
alg.set_value_BC(values=202650, pores=pn.pores('front'))
alg.set_value_BC(values=101325, pores=pn.pores('back'))
alg.run()
"""
Explanation: We can either create 2 new Algorithm objects to perform the simulations in the other two directions, or reuse alg by adjusting the boundary conditions and re-running it.
End of explanation
"""
Q = alg.rate(pores=pn.pores('front'))
Kyy = Q * mu * L / (A * 101325)
"""
Explanation: The first call to set_boundary_conditions used the overwrite mode, which replaces all existing boundary conditions on the alg object with the specified values. The second call uses the merge mode which adds new boundary conditions to any already present, which is the default behavior.
A new value for the flow rate must be recalculated, but all other parameters are equal to the X-direction:
End of explanation
"""
alg.set_value_BC(values=202650, pores=pn.pores('top'))
alg.set_value_BC(values=101325, pores=pn.pores('bottom'))
alg.run()
Q = alg.rate(pores=pn.pores('top'))
L = 10 * 0.0001
A = 20 * 20 * (0.0001**2)
Kzz = Q * mu * L / (A * 101325)
"""
Explanation: The values of Kxx and Kyy should be nearly identical since both of these directions are parallel to the small surface pores. For the Z-direction:
End of explanation
"""
print(Kxx, Kyy, Kzz)
"""
Explanation: The permeability in the Z-direction is about half that in the other two directions due to the constrictions caused by the small surface pores.
End of explanation
"""
|
marburg-open-courseware/gmoc | docs/mpg-if_error_continue/worksheets/w-02-2_conditionals.ipynb | mit | import pandas as pd
url = "http://www.cpc.ncep.noaa.gov/data/indices/oni.ascii.txt"
# help(pd.read_fwf)
oni = pd.read_fwf(url, widths = [5, 5, 7, 7])
oni.head()
## Your solution goes here:
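# One possible sketch (not the official solution). It assumes the anomaly column of the
# fixed-width file is labelled 'ANOM' and uses the common ONI thresholds of 0.5, 1.0,
# 1.5 and 2.0 degrees C for weak, moderate (medium), strong and very strong El Nino
# conditions -- adjust both if W02-1 defines them differently.
weak, moderate, strong, very_strong = 0, 0, 0, 0
for anom in oni['ANOM']:
    if 0.5 <= anom < 1.0:
        weak += 1
    elif 1.0 <= anom < 1.5:
        moderate += 1
    elif 1.5 <= anom < 2.0:
        strong += 1
    elif anom >= 2.0:
        very_strong += 1
print(weak, moderate, strong, very_strong)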
"""
Explanation: W02-2.1: Count the number of occurrences of each warm ENSO category
Using the ONI data set from the previous worksheet, identify the number of months with
weak,
medium,
strong,
and very strong
warm ENSO conditions (i.e. El Niño only).
In order to fulfill the required tasks, you will need to
initialize counter variables for the different categories of warm ENSO stages,
write a for loop with embedded if-elif conditions (one for each stage),
and increment the stage-specific counter variables based on the anomaly thresholds given in <a href="https://oer.uni-marburg.de/goto.php?target=pg_5103_720&client_id=mriliasmooc">W02-1: Loops</a>.
End of explanation
"""
## Your solution goes here:
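# A possible follow-up (sketch), reusing the counters from the sketch above:
# the percentage of all months showing at least weak El Nino conditions.
n_el_nino = weak + moderate + strong + very_strong
print(100.0 * n_el_nino / len(oni))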
"""
Explanation: In addition, please calculate the percentage of months characterized by at least weak El Niño conditions.
End of explanation
"""
## Your solution goes here:
"""
Explanation: <hr>
W02-2.2: Do the same for cold ENSO events...
...and put the stage-specific counter variables for both warm and cold ENSO stages together in a single dictionary using meaningful and clearly distinguishable keys (e.g. 'Weak El Nino', 'Moderate El Nino', ..., 'Weak La Nina', ...). If you feel a little insecure with creating dict objects, feel free to browse back to <a href="https://oer.uni-marburg.de/goto.php?target=pg_2625_720&client_id=mriliasmooc">E01-3</a> and let yourself be inspired by the code included therein.
Oh, and remember that the stuff you created for answering the above task is still in the Jupyter Notebook's environment, so there is no need to carry out the whole El Niño processing anew.
End of explanation
"""
|
lisitsyn/shogun | doc/ipython-notebooks/ica/ecg_sep.ipynb | bsd-3-clause | # change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
import numpy as np
# load data
# Data originally from:
# http://perso.telecom-paristech.fr/~cardoso/icacentral/base_single.html
data = np.loadtxt('foetal_ecg.dat')
# time steps
time_steps = data[:,0]
# abdominal signals
abdominal2 = data[:,1]
abdominal3 = data[:,2]
abdominal4 = data[:,3]
abdominal5 = data[:,4]
abdominal6 = data[:,5]
# thoracic signals
thoracic7 = data[:,6]
thoracic8 = data[:,7]
thoracic9 = data[:,8]
"""
Explanation: Fetal Electrocardiogram Extraction by Source Subspace Separation
By Kevin Hughes and Andreas Ziehe
This notebook illustrates <a href="http://en.wikipedia.org/wiki/Blind_signal_separation">Blind Source Separation</a> (BSS) on several time-synchronised electrocardiograms (ECGs) of the baby's mother using <a href="http://en.wikipedia.org/wiki/Independent_component_analysis">Independent Component Analysis</a> (ICA) in Shogun. This is used to extract the baby's ECG from them.
This task has been studied before and has been published in these papers:
Cardoso, J. F. (1998, May). Multidimensional independent component analysis.
In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998
IEEE International Conference on (Vol. 4, pp. 1941-1944). IEEE.
Dirk Callaerts, "Signal Separation Methods based on Singular Value
Decomposition and their Application to the Real-Time Extraction of the
Fetal Electrocardiogram from Cutaneous Recordings", Ph.D. Thesis,
K.U.Leuven - E.E. Dept., Dec. 1989.
L. De Lathauwer, B. De Moor, J. Vandewalle, "Fetal Electrocardiogram
Extraction by Source Subspace Separation", Proc. IEEE SP / ATHOS
Workshop on HOS, June 12-14, 1995, Girona, Spain, pp. 134-138.
In this workbook I am going to show you how a similar result can be obtained using the ICA algorithms available in the Shogun Machine Learning Toolbox.
First we need some data, luckily an ECG dataset is distributed in the Shogun data repository. So the first step is to change the directory then we'll load the data.
End of explanation
"""
%matplotlib inline
# plot signals
import pylab as pl
# abdominal signals
for i in range(1,6):
pl.figure(figsize=(14,3))
pl.plot(time_steps, data[:,i], 'r')
pl.title('Abdominal %d' % (i))
pl.grid()
pl.show()
# thoracic signals
for i in range(6,9):
pl.figure(figsize=(14,3))
pl.plot(time_steps, data[:,i], 'r')
pl.title('Thoracic %d' % (i))
pl.grid()
pl.show()
"""
Explanation: Before we go any further let's take a look at this data by plotting it:
End of explanation
"""
import shogun as sg
# Signal Matrix X
X = (np.c_[abdominal2, abdominal3, abdominal4, abdominal5, abdominal6, thoracic7,thoracic8,thoracic9]).T
# Convert to features for shogun
mixed_signals = sg.features((X).astype(np.float64))
"""
Explanation: The peaks in the plot represent a heart beat, but it's pretty hard to interpret and I know I definitely can't see two distinct signals, so let's see what we can do with ICA!
In general, for performing Source Separation we need at least as many mixed signals as sources we're hoping to separate, and in this case we actually have a lot more (8 mixtures but only 2 sources, mother and baby). There are several different approaches for handling this situation: some algorithms are specifically designed to handle this case, while other times the data is pre-processed with Principal Component Analysis (PCA). It is also common to simply apply the separation to all the sources and then choose some of the extracted signals manually or using some other known criteria, which is what I'll be showing in this example.
Now we create our ICA data set and convert to a Shogun features type:
End of explanation
"""
# Separating with SOBI
sep = sg.transformer('SOBI')
sep.put('tau', 1.0*np.arange(0,120))
sep.fit(mixed_signals)
signals = sep.transform(mixed_signals)
S_ = signals.get('feature_matrix')
"""
Explanation: Next we apply the ICA algorithm to separate the sources:
End of explanation
"""
# Show separation results
# Separated Signal i
for i in range(S_.shape[0]):
pl.figure(figsize=(14,3))
pl.plot(time_steps, S_[i], 'r')
pl.title('Separated Signal %d' % (i+1))
pl.grid()
pl.show()
"""
Explanation: And we plot the separated signals:
End of explanation
"""
|
xmnlab/pywim | notebooks/StorageRawData.ipynb | mit | from IPython.display import display
from datetime import datetime
from matplotlib import pyplot as plt
from scipy import misc
import h5py
import json
import numpy as np
import os
import pandas as pd
import sys
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#1.-Weigh-in-Motion-Storage-Raw-Data" data-toc-modified-id="1.-Weigh-in-Motion-Storage-Raw-Data-1"><span class="toc-item-num">1 </span>1. Weigh-in-Motion Storage Raw Data</a></div><div class="lev2 toc-item"><a href="#1.1-Standards" data-toc-modified-id="1.1-Standards-11"><span class="toc-item-num">1.1 </span>1.1 Standards</a></div><div class="lev3 toc-item"><a href="#1.1.1-File-and-dataset-names" data-toc-modified-id="1.1.1-File-and-dataset-names-111"><span class="toc-item-num">1.1.1 </span>1.1.1 File and dataset names</a></div><div class="lev3 toc-item"><a href="#1.1.2-Fields-name-and-extra-information" data-toc-modified-id="1.1.2-Fields-name-and-extra-information-112"><span class="toc-item-num">1.1.2 </span>1.1.2 Fields name and extra information</a></div><div class="lev2 toc-item"><a href="#1.2-Algorithms" data-toc-modified-id="1.2-Algorithms-12"><span class="toc-item-num">1.2 </span>1.2 Algorithms</a></div><div class="lev3 toc-item"><a href="#1.2.1-Start-up" data-toc-modified-id="1.2.1-Start-up-121"><span class="toc-item-num">1.2.1 </span>1.2.1 Start up</a></div><div class="lev3 toc-item"><a href="#1.2.2-Creating-the-file" data-toc-modified-id="1.2.2-Creating-the-file-122"><span class="toc-item-num">1.2.2 </span>1.2.2 Creating the file</a></div><div class="lev3 toc-item"><a href="#1.2.3-Reading-the-file" data-toc-modified-id="1.2.3-Reading-the-file-123"><span class="toc-item-num">1.2.3 </span>1.2.3 Reading the file</a></div><div class="lev1 toc-item"><a href="#References" data-toc-modified-id="References-2"><span class="toc-item-num">2 </span>References</a></div>
# 1. Weigh-in-Motion Storage Raw Data
Basically, the first main input data is the raw data sensors. These data can be acquired using a data acquisition device (DAQ) through analog channels (e.g. weigh sensors, temperature sensors, etc) and/or digital channels (e.g., inductive loops).
The three most widely used piezo-electric weigh sensors are piezo-ceramic, piezo-polymer and piezo-quartz <cite data-cite="jiang2009improvements">(Jiang, 2009)</cite>.
Storing the raw sensor data allows studying the input signals and validating weighing methods. In COST 323 <cite data-cite="tech:cost-323">(Jacob et al., 2009)</cite>, no description of the raw data file layout was found. This data can be represented by a matrix whose first column is a time index (e.g. the time instant in microseconds, in floating point format), followed by one column per sensor.
## 1.1 Standards
One file can store the measurements of any number of vehicle runs, e.g. the researcher can create one file per day and store in it all vehicle runs matching the date of the file. Each vehicle run should be saved in its own dataset. The main idea of these standards is to promote a best practice for storing and sharing weigh-in-motion data.
### 1.1.1 File and dataset names
The filename should be informative, reflecting the date, site, lane and the organization type of the dataset. If the file contains measurements from more than one site, then the site identification number should be **000**. The same idea applies to the lane identification number. The date field of the filename should contain the initial date of the period. If necessary, the initial time can be included too (optional). The standard structure proposed is:
```
wim_t_sss_ll_yyyymmdd[_hhMMSS]
```
E.g. **wim_day_001_01_20170904_004936**, where:
* **wim** is a fixed text;
* **t** means the organization type of the datasets (i.e. **day** means one file per day, **week** means one file per week, **month** means one file per month, **year** means one file per year and **full** means a full file with a complete data);
* **sss** means site identification number (e.g. 001);
* **ll** means lane identification number (e.g. 02);
* **yyyy** means the year (e.g. 2012);
* **mm** means the month (e.g. 12);
* **dd** means the day (e.g. 30);
* **hh** means the hour (e.g. 23);
* **MM** means the minute (e.g. 59);
* **SS** means the second (e.g. 30).
For each vehicle run, a new dataset should be created. The dataset name should contain the site identification number, lane identification number, date and time. The standard structure proposed is:
```
run_sss_ll_yyyymmdd_hhMMSS
```
E.g. **run_001_01_20170904_004936**, where **run** is a fixed text. The other parts of the dataset name are explained as in the file name standard.
### 1.1.2 Fields name and extra information
Each dataset contains information from signal data. The dataset should contain some extra information to allow data post-processing. The columns on the dataset should be **index** and data from analog channels and digital channels. The standard for column names should be:
```
{t}{n}
```
Where {t} means the channel type (i.e. can be set as **a** for analog, or **d** for digital) and {n} means the number of the channel (e.g. **a1**).
The main extra information that should be saved on the dataset is:
* sample rate (e.g. 5000 [points per second]);
* date time (e.g. 2017-09-04 00:49:36);
* site id (e.g. 001);
* lane id (e.g. 01);
* temperature (e.g. 28.5);
* license_plate (e.g. AAA9999);
* sensor calibration constant (e.g. [0.98, 0.99, 0.75]);
* distance between sensors (e.g. [1.0, 1.5, 2.0]);
* sensor type (e.g. quartz, polymer, ceramic, etc or mixed);
* sensors layout (e.g. |/|\\|<|>|=|)
* channel configuration (this is an optional attribute; it is required only when the sensor type is mixed, e.g. {'a0': 'polymer', 'a1': 'ceramic'})
## 1.2 Algorithms
The algorithms presented here were written in Python. If it is necessary to use another language, it would be easy to convert or rewrite this code.
The storage data module should be able to write and read data from an HDF5 file with a simple approach; in other words, it should be easy for anybody to manipulate and understand this data using other languages.
End of explanation
"""
# local
sys.path.insert(0, os.path.dirname(os.getcwd()))
from pywim.utils.dsp.synthetic_data.sensor_data import gen_truck_raw_data
# generates a synthetic data
sample_rate = 2000
sensors_distance = [1, 2]
data = gen_truck_raw_data(
sample_rate=sample_rate, speed=20, vehicle_layout='O--O------O-',
sensors_distance=sensors_distance, p_signal_noise=100.0
)
data.plot()
plt.show()
data.head()
"""
Explanation: 1.2.1 Start up
End of explanation
"""
date_time = datetime.now()
site_id = '001'
lane_id = '01'
collection_type = 'day' # stored per day
f_id = 'wim_{}_{}_{}_{}'.format(
collection_type, site_id, lane_id,
date_time.strftime('%Y%m%d')
)
f = h5py.File('/tmp/{}.h5'.format(f_id), 'w')
print(f_id)
dset_id = 'run_{}_{}_{}'.format(
    site_id, lane_id, date_time.strftime('%Y%m%d_%H%M%S')
)
print(dset_id)
dset = f.create_dataset(
dset_id, shape=(data.shape[0],),
dtype=np.dtype([
(k, float) for k in ['index'] + list(data.keys())
])
)
dset['index'] = data.index
for k in data.keys():
dset[k] = data[k]
# check if all values are the same
df = pd.DataFrame(dset[tuple(data.keys())], index=dset['index'])
np.all(df == data)
dset.attrs['sample_rate'] = sample_rate
dset.attrs['date_time'] = date_time.strftime('%Y-%m-%d %H:%M:%S')
dset.attrs['site_id'] = site_id
dset.attrs['lane_id'] = lane_id
dset.attrs['temperature'] = 28.5
dset.attrs['license_plate'] = 'AAA9999' # license plate number
dset.attrs['calibration_constant'] = [0.98, 0.99, 0.75]
dset.attrs['sensors_distance'] = sensors_distance
dset.attrs['sensor_type'] = 'mixed'
dset.attrs['sensors_layout'] = '|||'
dset.attrs['channel_configuration'] = json.dumps({
'a0': 'polymer', 'a1': 'ceramic', 'a2': 'polymer'
})
# flush its data to disk and close
f.flush()
f.close()
"""
Explanation: 1.2.2 Creating the file
End of explanation
"""
print('/tmp/{}.h5'.format(f_id))
f = h5py.File('/tmp/{}.h5'.format(f_id), 'r')
for dset_id in f.keys():
dset = f[dset_id]
paddle = len(max(dset.attrs, key=lambda v: len(v)))
print('')
print('='*80)
print(dset_id)
print('='*80)
for k in dset.attrs:
print('{}:'.format(k).ljust(paddle, ' '), dset.attrs[k], sep='\t')
pd.DataFrame(dset[dset.dtype.names[1:]], index=dset['index']).plot()
plt.show()
# f.__delitem__(dset_id)
f.flush()
f.close()
"""
Explanation: 1.2.3 Reading the file
End of explanation
"""
|
KshitijT/fundamentals_of_interferometry | 6_Deconvolution/6_1_sky_models.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
6. Deconvolution in Imaging
Previous: 6. Introduction
Next: 6.2 Iterative Deconvolution with Point Sources (CLEAN)
Import standard modules:
End of explanation
"""
import matplotlib.image as mpimg
from IPython.display import Image
from astropy.io import fits
import aplpy
#Disable astropy/aplpy logging
import logging
logger0 = logging.getLogger('astropy')
logger0.setLevel(logging.CRITICAL)
logger1 = logging.getLogger('aplpy')
logger1.setLevel(logging.CRITICAL)
from IPython.display import HTML
HTML('../style/code_toggle.html')
"""
Explanation: Import section specific modules:
End of explanation
"""
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits', \
figure=fig, subplot=[0.0,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-0.1, vmax=1.0, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Sky Model')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits', \
figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('KAT-7 PSF')
gc2.add_colorbar()
fig.canvas.draw()
"""
Explanation: 6.1 Sky Models<a id='deconv:sec:skymodels'></a>
Before we dive into deconvolution methods we need to introduce the concept of a sky model. Since we are making an incomplete sampling of the visibilities, with limited resolution, we do not recover the 'true' sky from an observation. The dirty image is the 'true' sky convolved with (effectively blurred by) the array point spread function (PSF). As discussed in the previous chapter, the PSF acts as a type of low-pass spatial filter limiting our resolution of the sky. We would like to somehow recover a model for the true sky. At the end of deconvolution one of the outputs is the sky model.
We can look at the deconvolution process backwards by taking an ideal sky, shown in the left figure below (it may be difficult to see, but there are various pixels with different intensities), and convolving it with the PSF response of the KAT-7 array, shown on the right; this is a point-source-based sky model which will be discussed below. The sky model looks like a mostly empty image with a few non-zero pixels. The PSF is the same as the one shown in the previous chapter.
End of explanation
"""
fig = plt.figure(figsize=(16, 5))
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits')
skyModel = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits')
psf = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits')
dirtyImg = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')
residualImg = fh[0].data
#convolve the sky model with the PSF
sampFunc = np.fft.fft2(psf) #sampling function
skyModelVis = np.fft.fft2(skyModel) #sky model visibilities
sampModelVis = sampFunc * skyModelVis #sampled sky model visibilities
convImg = np.fft.fftshift(np.fft.ifft2(sampModelVis)).real + residualImg #sky model convolved with PSF
gc1 = aplpy.FITSFigure(convImg, figure=fig, subplot=[0,0.0,0.30,1])
gc1.show_colorscale(vmin=-1., vmax=3.0, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('PSF convolved with Sky Model')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure(dirtyImg, figure=fig, subplot=[0.33,0.0,0.30,1])
gc2.show_colorscale(vmin=-1., vmax=3.0, cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Dirty')
gc2.add_colorbar()
gc3 = aplpy.FITSFigure(dirtyImg - convImg, figure=fig, subplot=[0.67,0.0,0.30,1])
gc3.show_colorscale(cmap='viridis')
gc3.hide_axis_labels()
gc3.hide_tick_labels()
plt.title('Difference')
gc3.add_colorbar()
fig.canvas.draw()
"""
Explanation: Left: a point-source sky model of a field of sources with various intensities. Right: PSF response of KAT-7 for a 6 hour observation at a declination of $-30^{\circ}$.
By convolving the ideal sky with the array PSF we effectively are recreating the dirty image. The figure on the left below shows the sky model convolved with the KAT-7 PSF. The centre image is the original dirty image created using uniform weighting in the previous chapter. The figure on the right is the difference between the two figures. The negative offset is an effect of the imager producing an absolute value PSF image. The main point to note is that the difference image shows the bright sources removed resulting in a fairly noise-like image.
End of explanation
"""
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \
figure=fig, subplot=[0.1,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-0.8, vmax=3., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Residual')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits', \
figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(vmin=-0.8, vmax=3., cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Restored')
gc2.add_colorbar()
fig.canvas.draw()
"""
Explanation: Left: the point-source sky model convolved with the KAT-7 PSF with the residual image added. Centre: the original dirty image. Right: the difference between the PSF-convolved sky model and the dirty image.
Now that we see we can recreate the dirty image from a sky model and the array PSF, we just need to learn how to do the opposite operation, deconvolution. In order to simplify the process we incorporate knowledge about the sky primarily by assuming a simple model for the sources.
6.1.1 The Point Source Assumption
We can use some prior information about the sources in the sky and the array as a priori information in our deconvolution attempt. The array and observation configuration results in a PSF which has a primary lobe of a particular scale, this is the effective resolution of the array. As we have seen in $\S$ 5.4 ➞ the choice of weighting functions can result in different PSF resolutions. But, no matter the array there is a limit to the resolution, so any source which has a smaller angular scale than the PSF resolution appears to be a point source. A point source is an idealized source which has no angular scale and is represented by a spatial Dirac $\delta$-function. Though all sources in the sky have a angular scale, many are much smaller than the angular resolution of the array PSF, so they can be approximated as a simple point source.
A nice feature of the point-source model is that, by the Fourier shift theorem, the Fourier transform of a Dirac $\delta$-function is simply the constant source flux multiplied by a complex phase term:
$$
\mathscr{F} \{ C(\nu) \cdot \delta\,(l- l_0, m - m_0)\}(u, v) = C(\nu) \cdot\iint \limits_{-\infty}^{\infty} \delta\,(l- l_0, m - m_0) \, e^{-2 \pi i (ul + vm)}\,dl\,dm = C(\nu) \cdot e^{-2 \pi i (ul_0 + vm_0)} \quad (6.1.1)
$$
where $C(\nu)$ is the flux of the source (which can include a dependence on observing frequency $\nu$), and $(l_0, m_0)$ is the position of the source. For a source at the phase centre, $(l_0, m_0) = (0,0)$ and $\mathscr{F} \{ C \cdot \delta\,(0, 0)\} = C$. A $\delta$-function based sky model leads to a nice, and computationally fast, method to generate visibilities, which is useful for deconvolution methods as we will see later in this chapter.
Of course we need to consider whether using a collection of $\delta$-functions for a sky model is actually a good idea. The short answer is 'yes'; the long answer is 'yes, up to a limit', and current research is focused on using more advanced techniques to improve deconvolution. This is because 2D Dirac $\delta$-functions can be used as a complete orthogonal basis set to describe any 2D function. Most sources in the sky are unresolved, that is they have a much smaller angular scale than that of the array PSF. Sky sources which are resolved are a bit trickier; we will consider these sources later in the section.
A $\delta$-function basis set is also good for astronomical images because these images are generally sparse. That is, out of the thousands or millions of pixels in the image, only a few of the pixels contain sky sources (i.e. these pixels contain the information we desire) above the observed noise floor, the rest of the pixels contain mostly noise (i.e. contain no information). This differs from a natural image, for example a photograph of a duck, which is simply a collection of $\delta$-functions with different constant scale factors, one for each pixel in the image. Every pixel in a natural image generally contains information. We would say that in the $\delta$-function basis set a natural image is not sparse. As a side note, a natural image is usually sparse in wavelet space which is why image compression uses wavelet transforms.
Since an astronomical image is sparse we should be able to reduce the image down to separate out the noise from the sources and produce a small model which represents the true sky; this is the idea behind deconvolution. Looked at in a different way, deconvolution is a process of filtering the true sky flux from the instrument-induced noise in each pixel. This idea of sparseness and information in aperture synthesis images is related to the field of compressed sensing. Much of the current research in radio interferometric deconvolution is framed in the compressed sensing context.
At the end of a deconvolution process we end up with two products: the sky model, and a set of residuals. A sky model can be as simple as a list of positions (either pixel number or sky position) and a flux value (and maybe a spectral index), e.g.
| Source ID | RA (H) | Dec (deg) | Flux (Jy) | SPI |
| --------- | ----------- | ------------ | --------- | ----- |
| 0 | 00:02:18.81 | -29:47:17.82 | 3.55 | -0.73 |
| 1 | 00:01:01.84 | -30:06:27.53 | 2.29 | -0.52 |
| 2 | 00:03:05.54 | -30:00:22.57 | 1.01 | -0.60 |
Table: A simple sky model of three unpolarized sources with different spectral indicies.
In this simple sky model there are three sources near right ascension 0 hours and declination $-30^{\circ}$, each source has an unpolarized flux in Jy and a spectral index.
The residuals are generally an image which results from the subtraction of the sky model from the original image. Additionally, a restored image is often produced, this is constructed from the sky model and residual image. This will be discussed further in the next few sections. An example of residual and restored images are shown below.
End of explanation
"""
def gauss2d(sigma):
"""Return a normalized 2d Gaussian function, sigma: size in pixels"""
return lambda x,y: (1./(2.*np.pi*(sigma**2.))) * np.exp(-1. * ((xpos**2. + ypos**2.) / (2. * sigma**2.)))
imgSize = 512
xpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float)
xpos -= imgSize/2.
ypos -= imgSize/2.
sigmas = [64., 16., 4., 1.]
fig = plt.figure(figsize=(16, 7))
#Gaussian image-domain source
ax1 = plt.subplot2grid((2, 4), (0, 0))
gauss1 = gauss2d(sigmas[0])
ax1.imshow(gauss1(xpos, ypos))
ax1.axis('off')
plt.title('Sigma: %i'%int(sigmas[0]))
#Gaussian image-domain source
ax2 = plt.subplot2grid((2, 4), (0, 1))
gauss2 = gauss2d(sigmas[1])
ax2.imshow(gauss2(xpos, ypos))
ax2.axis('off')
plt.title('Sigma: %i'%int(sigmas[1]))
#Gaussian image-domain source
ax3 = plt.subplot2grid((2, 4), (0, 2))
gauss3 = gauss2d(sigmas[2])
ax3.imshow(gauss3(xpos, ypos))
ax3.axis('off')
plt.title('Sigma: %i'%int(sigmas[2]))
#Gaussian image-domain source
ax4 = plt.subplot2grid((2, 4), (0, 3))
gauss4 = gauss2d(sigmas[3])
ax4.imshow(gauss4(xpos, ypos))
ax4.axis('off')
plt.title('Sigma: %i'%int(sigmas[3]))
#plot the visibility flux distribution as a function of baseline length
ax5 = plt.subplot2grid((2, 4), (1, 0), colspan=4)
visGauss1 = np.abs( np.fft.fftshift( np.fft.fft2(gauss1(xpos, ypos))))
visGauss2 = np.abs( np.fft.fftshift( np.fft.fft2(gauss2(xpos, ypos))))
visGauss3 = np.abs( np.fft.fftshift( np.fft.fft2(gauss3(xpos, ypos))))
visGauss4 = np.abs( np.fft.fftshift( np.fft.fft2(gauss4(xpos, ypos))))
ax5.plot(visGauss1[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[0]))
ax5.plot(visGauss2[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[1]))
ax5.plot(visGauss3[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[2]))
ax5.plot(visGauss4[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[3]))
ax5.hlines(1., xmin=0, xmax=int(imgSize/2)-1, linestyles='dashed')
plt.legend()
plt.ylabel('Flux')
plt.xlabel('Baseline Length')
plt.xlim(0, int(imgSize/8)-1)
ax5.set_xticks([])
ax5.set_yticks([])
"""
Explanation: Left: residual image after running a CLEAN deconvolution. Right: restored image constructed from convolving the point-source sky model with an 'ideal' PSF and adding the residual image.
Deconvolution, as can be seen in the figure on the left, builds a sky model by subtracting sources from the dirty image and adding them to the sky model. The resulting image shows the residual flux which was not added to the sky model. The restored image is a reconstruction of the field by convolving the sky model with an 'ideal' PSF and adding the residual image. This process will be discussed in the sections that follow.
The assumption that most sources are point source-like or can be represented by a set of point sources is the basis for the standard deconvolution process in radio interferometry, CLEAN. Though there are many new methods to perform deconvolution, CLEAN is the standard method and continues to dominate the field.
6.1.2 Resolved Sources
We need to consider what it means for a source to be 'resolved'. Each baseline in an array measures a particular spatial resolution. If the angular scale of a source is smaller than the spatial resolution of the longest baseline then the source is unresolved on every baseline. If the angular scale of a source is larger than the shortest baseline spatial resolution then the source is resolved on all baselines. In between these two extremes a source is resolved on longer baselines and unresolved on shorter baselines, thus the source is said to be partially resolved. The term extended source is often used as a synonym for fully- and partially-resolved sources.
Simple Gaussian extended sources are shown in the figure below. All sources have been normalized to have the same integrated flux. On the left is a very extended source; moving right are progressively smaller extended sources, until the rightmost source, which is a nearly point-source-like object. Transforming each source into visibility space via the Fourier transform, we plot the flux of each source as a function of baseline length. Baseline direction does not matter in these simple examples because the sources are circular Gaussians. For a very extended source (blue) the flux drops off quickly as a function of baseline length. In the limit where the Gaussian size is decreased to that of a delta function, the flux distribution (dashed black) is flat across all baseline lengths (this is the ideal case).
End of explanation
"""
|
bosscha/alma-calibrator | notebooks/selecting_source/alma_database_selection11.ipynb | gpl-2.0 | from collections import Counter
filename = "report_8_nonALMACAL_priority.txt"
with open(filename, 'r') as ifile:
wordcount = Counter(ifile.read().split())
"""
Explanation: find a word and count them
End of explanation
"""
current = ['3c454.3', 'J0006-0623', 'J0137+3309', 'J0211+1051', 'J0237+2848',
'J0241-0815', 'J0334-4008', 'J0440+2728', 'J0517-0520', 'J0538-4405', 'J0730-1141',
'J1037-2934', 'J1159+2914', 'J1449-004', 'J2232+1143', 'J0057-0024', 'J0138-0540',
'J0215-0222', 'J0238+166', 'J0301+0118', 'J0339-0133', 'J0426+0518', 'J0501-0159',
'J0521+1638', 'J0541-0541', 'J0750+1231', 'J1048-1909', 'J1225+1253', 'J1550+0527',
'J2258-2758', 'J0108+0135', 'J0141-0202', 'J0219+0120', 'J0239-0234', 'J0309+1029',
'J0339-0146', 'J0427-0700', 'J0509+1806', 'J0522-3627', 'J0604+2429', 'J1008+0621',
'J1058+0133', 'J1229+0203', 'J1650+0824', 'J0121+1149', 'J0149+0555', 'J0224+0659',
'J0239+0416', 'J0327+0044', 'J0423-0120', 'J0438+3004', 'J0510+180', 'J0532-0307',
'J0607-0834', 'J1011-0423', 'J1146+3958', 'J1337-1257', 'J2148+0657',
'J0538-4405', 'J0747-3310', 'J0922-3959', 'J1833-2103', 'J2148+0657',
'J0701-4634', 'J0828-3731', 'J1832-2039', 'J1924-2914', 'J2258-2758']
"""
Explanation: Our current data (largely only split, not reduced yet)
End of explanation
"""
already = []
for item in wordcount:
if item in current:
already.append(item)
print(item)
"""
Explanation: duplicate = ['J0538-4405', 'J2148+0657', 'J2258-2758']
Some objects for which part of the data has already been taken:
End of explanation
"""
|
EvanBianco/Practical_Programming_for_Geoscientists | Part2b__Synthetic_seismogram.ipynb | apache-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: EXERCISE — Simple synthetic
This notebook looks at the convolutional model of a seismic trace.
For a fuller example, see Bianco, E (2004) in The Leading Edge.
First, the usual preliminaries.
End of explanation
"""
from welly import Well
L30 = Well.from_las('data/L30.las')
"""
Explanation: Load petrophysical data
We'll use welly to facilitate loading curves from an LAS file.
End of explanation
"""
sonic = L30.data['DT']/ 0.3048 # Convert to us per m
"""
Explanation: Make variables for the curves, converting to SI units on the way.
End of explanation
"""
# CODE GOES HERE
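# One possible answer (a sketch, not the official solution): we also need a bulk density
# log to compute acoustic impedance. The mnemonic 'RHOB' and its unit (g/cm3) are
# assumptions -- inspect L30.data.keys() to see what this LAS file actually contains.
rhob = L30.data['RHOB'] * 1000        # g/cm3 -> kg/m3 (assumed input unit)
z = L30.data['DT'].basis * 0.3048     # depth basis of the logs, converted to metres (ft assumed)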
"""
Explanation: Q. What other curve do we need? Inspect the contents of L30 and assign any new curves to new variables.
End of explanation
"""
# CALCULATE VELOCITY
ai = # PUT THE EXPRESSION FOR IMPEDANCE HERE
plt.figure(figsize=(16, 2))
plt.plot(z, ai)
plt.show()
# YOU SHOULD GET SOMETHING LIKE THIS:
"""
Explanation: Q. Compute velocity and thus acoustic impedance.
End of explanation
"""
step = l30.header['Well']['STEP'].value * 0.3048 # Convert to m
"""
Explanation: Depth to time conversion
The logs are in depth, but the seismic is in travel time. So we need to convert the well data to time.
We don't know the seismic time, but we can model it from the DT curve: since DT is 'elapsed time', in microseconds per metre, we can just add up all these time intervals for 'total elapsed time'. Then we can use that to 'look up' the time of a given depth.
First, get the step (depth interval) of the log values:
End of explanation
"""
scaled_dt = step * np.nan_to_num(dt) / 1e6 # Convert to seconds per step
"""
Explanation: Now use this to scale the DT values to 'seconds per step' (instead of µs/m).
End of explanation
"""
kb = 0.3048 * l30.header['Well']['KB'].value
gl = 0.3048 * l30.header['Well']['GL'].value
start = 0.3048 * l30.header['Well']['STRT'].value
v_water = 1480
v_repl = 1800
water_layer =
repl_layer =
water_time =
repl_time =
print("Water time: {:.3f} ms\nRepl time: {:.3f} ms".format(water_time, repl_time))
"""
Explanation: Q. Calculate the time to the top of the log
Now do a bunch of arithmetic to find the timing of the top of the log.
End of explanation
"""
dt_time = water_time + repl_time + 2*np.cumsum(scaled_dt)
"""
Explanation: Now finally we can compute the cumulative time elapsed on the DT log:
End of explanation
"""
dt = 0.004 # Sample interval.
maxt = 3 # Max time that we need; just needs to be longer than the log.
# Make a regular time basis: the seismic time domain.
seis_time = np.arange(0, maxt, dt)
# Interpolate the AI log onto this basis.
ai_t = np.interp(seis_time, dt_time, ai)
plt.figure(figsize=(16, 2))
plt.plot(seis_time, ai_t)
plt.show()
"""
Explanation: And then use this to convert the logs to a time basis:
End of explanation
"""
rc =
rc[np.isnan(rc)] = 0
"""
Explanation: Q. Compute the reflection coefficients in time.
End of explanation
"""
fig = plt.figure(figsize=(16, 2))
ax = fig.add_subplot(111)
ax.axhline()
for i, c in enumerate(rc):
mi, ma = (0.5, 0.5+c) if c > 0 else (0.5+c, 0.5)
ax.axvline(i, mi, ma)
ax.set_ylim(-0.3, 0.3)
plt.show()
"""
Explanation: Plotting these is a bit more fiddly, because we would like to show them as a sequence of spikes, rather than as a continuous curve, and matplotlib's axvline method wants everything in terms of fractions of the plot's dimensions, not as values in the data space.
End of explanation
"""
def ricker(f=25, length=0.128, dt=0.004):
t = np.arange(-length/2, (length-dt)/2, dt)
y = (1.0 - 2.0*(np.pi**2)*(f**2)*(t**2)) * np.exp(-(np.pi**2)*(f**2)*(t**2))
return t, y
"""
Explanation: Impulsive wavelet
Convolve with a wavelet.
End of explanation
"""
syn = np.convolve(rc, w, mode='same') # CHANGE W TO THE NAME OF YOUR WAVELET
plt.figure(figsize=(16,2))
plt.plot(seis_time[1:], syn)
plt.show()
"""
Explanation: Q. Make and plot a Ricker wavelet of 30 Hz central frequency.
End of explanation
"""
seismic = np.loadtxt('data/Penobscot_xl1155.txt')
syn.shape
# Sample index space
idx = np.arange(0, syn.size)
# The synthetic is at trace number 77
tr = 77
gain = 50
# Make a shifted version of the synthetic to overplot.
s = tr + gain*syn
plt.figure(figsize=(10,20))
plt.imshow(seismic.T, cmap='Greys')
plt.plot(s, idx)
plt.fill_betweenx(idx, tr, s, where=syn>0, lw=0)
plt.xlim(0, 400)
plt.ylim(800, 0)
plt.show()
"""
Explanation: If we are recording with dynamite or even an airgun, this might be an acceptable model of the seismic. But if we're using Vibroseis, things get more complicated.
Compare with the seismic
End of explanation
"""
|
adamamiller/PS1_star_galaxy | gaia/pmStarsForZTFdatabase.ipynb | mit | gaia_dir = "/Users/adamamiller/Desktop/PS1_fits/gaia_stars/"
gaia_df = pd.read_hdf(gaia_dir + "parallax_ps1_gaia_mag_pm_plx.h5")
pxl_not_pm = np.where((gaia_df["parallax_over_error"] >= 8) &
(gaia_df["pm_over_error"] < 7.5))
gaia_df.iloc[pxl_not_pm]
"""
Explanation: First - test to see if there are any stars that would be selected by the parallax criterion, but not by the proper motion criterion.
End of explanation
"""
pxl_ps1 = pd.read_hdf("parallax_ps1_gaia_cat_merge.h5")
pxl_objid = np.array(gaia_df["objid"].iloc[pxl_not_pm])
pxl_ps1_objid = np.array(pxl_ps1.index)
pxl_in_cat = np.isin(pxl_objid, pxl_ps1_objid)
print("There are {} pxl stars that need to be added to the catalog".format(len(pxl_objid[~pxl_in_cat])))
pxl_objid[~pxl_in_cat]
del pxl_ps1
del gaia_df
gc.collect()
"""
Explanation: There are ~579k sources that satisfy the pxl cut but not the pm cut. These now need to be matched to the pxl cut stars in PS1 to see which (if any) of these stars need to be adjusted in the ZTF database.
End of explanation
"""
gaia_dir = "/Users/adamamiller/Desktop/PS1_fits/gaia_stars/"
pm_df = pd.read_hdf(gaia_dir + "pm_ps1_gaia_mag_pm_plx.h5")
pm_ps1 = pd.read_hdf("pm_ps1_gaia_cat_merge.h5")
pm_objid = np.array(pm_df["objid"])
pm_ps1_objid =np.array(pm_ps1.index)
del pm_df
del pm_ps1
gc.collect()
pm_in_cat = np.isin(pm_objid, pm_ps1_objid)
pxl_and_pm_objid = np.append(pm_objid[~pm_in_cat],
pxl_objid[~pxl_in_cat])
new_stars_df = pd.DataFrame(pxl_and_pm_objid, columns=["objid"])
new_stars_df.to_hdf("objid_for_gaia_stars_for_ztf_database.h5", "d1")
"""
Explanation: Get the missing pm stars
End of explanation
"""
|
vadim-ivlev/STUDY | handson-data-science-python/DataScience-Python3/MultivariateRegression.ipynb | mit | import pandas as pd
df = pd.read_excel('http://cdn.sundog-soft.com/Udemy/DataScience/cars.xls')
df.head()
"""
Explanation: Multivariate Regression
Let's grab a small little data set of Blue Book car values:
End of explanation
"""
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
X = df[['Mileage', 'Cylinder', 'Doors']]
y = df['Price']
X[['Mileage', 'Cylinder', 'Doors']] = scale.fit_transform(X[['Mileage', 'Cylinder', 'Doors']].as_matrix())
print (X)
est = sm.OLS(y, X).fit()
est.summary()
y.groupby(df.Doors).mean()
"""
Explanation: We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict.
Note how we are avoiding the make and model; regressions don't work well with ordinal values, unless you can convert them into some numerical order that makes sense somehow.
Let's scale our feature data into the same range so we can easily compare the coefficients we end up with.
End of explanation
"""
|
atavory/ibex | examples/digits_confidence_intervals.ipynb | bsd-3-clause | import multiprocessing
import pandas as pd
import numpy as np
from sklearn import datasets
import seaborn as sns
sns.set_style('whitegrid')
from sklearn.externals import joblib
from ibex.sklearn import decomposition as pd_decomposition
from ibex.sklearn import linear_model as pd_linear_model
from ibex.sklearn import model_selection as pd_model_selection
from ibex.sklearn.model_selection import GridSearchCV as PdGridSearchCV
%pylab inline
digits = datasets.load_digits()
features = ['f%d' % i for i in range(digits['data'].shape[1])]
digits = pd.DataFrame(
np.c_[digits['data'], digits['target']],
columns=features+['digit'])
digits.head()
"""
Explanation: Confidence Intervals In The Digits Dataset
This notebook illustrates finding confidence intervals in the Digits dataset. It is a version of the Scikit-Learn example Pipelining: chaining a PCA and a logistic regression.
The main point it shows is using pandas structures throughout the code, as well as the ease of creating pipelines using the | operator.
Loading The Data
First we load the dataset into a pandas.DataFrame.
End of explanation
"""
clf = pd_decomposition.PCA() | pd_linear_model.LogisticRegression()
"""
Explanation: Repeating The Scikit-Learn Grid-Search CV Example
Following the scikit-learn example, we now pipe the PCA step to a logistic regressor.
End of explanation
"""
estimator = PdGridSearchCV(
clf,
{'pca__n_components': [20, 40, 64], 'logisticregression__C': np.logspace(-4, 4, 3)},
n_jobs=multiprocessing.cpu_count())
estimator.fit(digits[features], digits.digit)
"""
Explanation: We now find the optimal fit parameters using grid-search CV.
End of explanation
"""
params = estimator.best_estimator_.get_params()
params['pca__n_components'], params['logisticregression__C']
estimator.best_score_
"""
Explanation: It is interesting to look at the best parameters and the best score:
End of explanation
"""
all_scores = pd_model_selection.cross_val_score(
estimator.best_estimator_,
digits[features],
digits.digit,
cv=pd_model_selection.ShuffleSplit(
n_splits=100,
test_size=0.15),
n_jobs=-1)
sns.boxplot(x=all_scores, color='grey', orient='v');
ylabel('classification score (mismatch)')
figtext(
0,
-0.1,
'Classification scores for optimized-parameter PCA followed by logistic-regression.');
"""
Explanation: Finding The Scores' Confidence Intervals
How significant is the improvement in the score?
Using the parameters found in the grid-search CV, we perform 1000 jackknife (leave 15% out) iterations.
End of explanation
"""
all_scores = pd_model_selection.cross_val_score(
pd_linear_model.LogisticRegression(),
digits[features],
digits.digit,
cv=pd_model_selection.ShuffleSplit(
n_splits=1000,
test_size=0.15),
n_jobs=-1)
sns.boxplot(x=all_scores, color='grey', orient='v');
ylabel('classification score (mismatch)')
figtext(
0,
-0.1,
'Classification scores for logistic-regression. The results do not seem significantly worse than the optimized-params' +
'PCA followed by logistic regression');
"""
Explanation: Using just logistic regression (which is much faster), we do the same.
End of explanation
"""
|
Dioptas/pymatgen | examples/Plotting a Pourbaix Diagram.ipynb | mit | from pymatgen.matproj.rest import MPRester
from pymatgen.core.ion import Ion
from pymatgen import Element
from pymatgen.phasediagram.pdmaker import PhaseDiagram
from pymatgen.analysis.pourbaix.entry import PourbaixEntry, IonEntry
from pymatgen.analysis.pourbaix.maker import PourbaixDiagram
from pymatgen.analysis.pourbaix.plotter import PourbaixPlotter
from pymatgen.entries.compatibility import MaterialsProjectCompatibility, AqueousCorrection
%matplotlib inline
"""
Explanation: This notebook provides an example of how to generate a Pourbaix Diagram using the Materials API and pymatgen. Currently, the process is a bit involved. But we are working to simplify the usage in the near future.
Author: Sai Jayaratnam
End of explanation
"""
def contains_entry(entry_list, entry):
for e in entry_list:
if e.entry_id == entry.entry_id or \
(abs(entry.energy_per_atom
- e.energy_per_atom) < 1e-6 and
entry.composition.reduced_formula ==
e.composition.reduced_formula):
return True
"""
Explanation: Let's first define a useful function for filtering duplicate entries.
End of explanation
"""
#This initializes the REST adaptor. Put your own API key in.
a = MPRester()
#Entries are the basic unit for thermodynamic and other analyses in pymatgen.
#This gets all entries belonging to the Fe-O-H system.
entries = a.get_entries_in_chemsys(['Fe', 'O', 'H'])
"""
Explanation: Using the Materials API, we obtain the entries for the relevant chemical system we are interested in.
End of explanation
"""
#Dictionary of ion:energy, where the energy is the formation energy of ions from
#the NBS tables. (Source: NBS Thermochemical Tables; FeO4[2-]: Misawa T., Corr. Sci., 13(9), 659-676 (1973))
ion_dict = {"Fe[2+]":-0.817471, "Fe[3+]":-0.0478, "FeO2[2-]":-3.06055, "FeOH[+]":-2.8738,
"FeOH[2+]":-2.37954, "HFeO2[-]":-3.91578, "Fe(OH)2[+]":-4.54022, "Fe2(OH)2[4+]":-4.84285,
"FeO2[-]":-3.81653, "FeO4[2-]":-3.33946, "Fe(OH)3(aq)":-6.83418, "Fe(OH)2[+]":-4.54022}
#Dictionary of reference state:experimental formation energy (from O. Kubaschewski) for reference state.
ref_dict = {"Fe2O3": -7.685050670886141}
ref_state = "Fe2O3"
"""
Explanation: To construct a Pourbaix diagram, we also need the reference experimental energies for the relevant aqueous ions. This process is done manually here. We will provide a means to obtain these more easily via a programmatic interface in future.
End of explanation
"""
# Run aqueouscorrection on the entries
aqcompat = AqueousCorrection("MP")
entries_aqcorr = list()
for entry in entries:
aq_corrected_entry = aqcompat.correct_entry(entry)
if not contains_entry(entries_aqcorr, aq_corrected_entry):
entries_aqcorr.append(aq_corrected_entry)
# Generate a phase diagram to consider only solid entries stable in water.
pd = PhaseDiagram(entries_aqcorr)
stable_solids = pd.stable_entries
stable_solids_minus_h2o = [entry for entry in stable_solids if
entry.composition.reduced_formula not in ["H2", "O2", "H2O", "H2O2"]]
pbx_solid_entries = []
for entry in stable_solids_minus_h2o:
pbx_entry = PourbaixEntry(entry)
pbx_entry.g0_replace(pd.get_form_energy(entry))
pbx_entry.reduced_entry()
pbx_solid_entries.append(pbx_entry)
# Calculate DFT reference energy for ions (See Persson et al, PRB (2012))
ref_entry = [entry for entry in stable_solids_minus_h2o if entry.composition.reduced_formula == ref_state][0]
ion_correction = pd.get_form_energy(ref_entry)/ref_entry.composition.get_reduced_composition_and_factor()[1] - ref_dict[ref_state]
el = Element("Fe")
pbx_ion_entries = []
# Get PourbaixEntry corresponding to each ion
for key in ion_dict:
comp = Ion.from_formula(key)
factor = comp.composition[el] / (ref_entry.composition[el] / ref_entry.composition.get_reduced_composition_and_factor()[1])
energy = ion_dict[key] + ion_correction * factor
pbx_entry_ion = PourbaixEntry(IonEntry(comp, energy))
pbx_entry_ion.name = key
pbx_ion_entries.append(pbx_entry_ion)
all_entries = pbx_solid_entries + pbx_ion_entries
# Generate and plot Pourbaix diagram
pourbaix = PourbaixDiagram(all_entries)
plotter = PourbaixPlotter(pourbaix)
plotter.plot_pourbaix(limits=[[-2, 16],[-3, 3]])
"""
Explanation: We will now construct the Pourbaix diagram, which requires the application of the AqueousCorrection, obtaining the stable entries, followed by generating a list of Pourbaix entries.
End of explanation
"""
|
KiranArun/A-Level_Maths | Matrices/Matrices.ipynb | mit | # we will be using numpy to create the arrays
# the code isn't so important in this notebook, just the arrays are
import numpy as np
"""
Explanation: A-Level: Matrices
End of explanation
"""
# array containing 12 consecutive values in shape 3 by 4
a = np.arange(12).reshape([3,4])
print(a)
"""
Explanation: Matrices are 2d arrays written rows x columns
End of explanation
"""
# They can be added if they are the same size/shape
# they are added by adding the equivalent value in the other array
A = np.arange(9).reshape([3,3])
B = np.eye(3,3)
add = np.add(A,B)
sub = np.subtract(A,B)
print('A + B =\n', add)
print('A - B =\n', sub)
"""
Explanation: Simple Matrix Operations
Addition/Subtraction
$(A+B)_{i,j}$
$(A-B)_{i,j}$
End of explanation
"""
# Scalar multiplication just multiplies each element
sca_mul = np.multiply(2,A)
print(sca_mul)
"""
Explanation: Scalar Multiplication
$2\cdot A_{i,j}$
End of explanation
"""
B = np.linspace(0.5,12,24).reshape(3,8)
print('B =')
print(B)
print('\nB^T =')
print( np.transpose(B))
"""
Explanation: Transposition
This is where rows and columns are swapped: the element in row $i$, column $j$ moves to row $j$, column $i$.
$A^T$
End of explanation
"""
A = np.linspace(1,6,6).reshape(2,3)
B = np.linspace(1,9,9).reshape(3,3)
# use numpy's matrix multiplication matmul
C = np.matmul(A, B)
print('A =')
print(A)
print('B =')
print(B)
print('\nWith A x B = C \nC =')
print(C)
"""
Explanation: Matrix Multiplication
only works if
columns in A = rows in B
$A_{m,n}\cdot B_{n,p} = C_{m,p}$
Remember that the order affects the answer
End of explanation
"""
A = np.linspace(1,6,6).reshape(2,3)
B = np.linspace(2,12,6).reshape(3,2)
C = np.matmul(A, B)
print('A =')
print(A)
print('B =')
print(B)
print('\nWith A x B = C \nC =')
print(C)
"""
Explanation: In this Example,
the element $C_{i,j}$ will be
$\sum A[i,:] \cdot B[:,j]$
Another Example
End of explanation
"""
# Here, we are using the numpy.eye function
# This creates an identity matrix with the specified number size
I = np.eye(5)
print(I)
"""
Explanation: Identity Matrices
An identity matrix is a matrix which, when multiplied by any matrix, returns that same matrix unchanged
It is the matrix equivalent of the number 1. An identity matrix must always be square, with 1's on the diagonal from top left to bottom right and zeros everywhere else
Noted as:
$I_s$
with s as a dimension
End of explanation
"""
A = np.arange(4).reshape(2,2)
print('A =')
print(A)
print('\nA^-1 =')
print(np.linalg.inv(A))
"""
Explanation: Zero Matrices
These are Matrices full of only 0's
Noted as: $O$
Inverse Matrices
If $A\cdot B = C$, then to recover $A$ we would like to 'divide' $C$ by $B$
But we can't divide matrices, so instead we multiply by the inverse of B on the right:
$A = C\cdot B^{-1}$
$B^{-1}$ is the inverse of B. Note that the order matters, since matrix multiplication is not commutative
Also
$B\cdot B^{-1} = I$
with I as an identity matrix
For $2\times 2$ matrices
$\left(\begin{array}{cc}
a & b \\ c & d
\end{array}\right)^{-1} = \frac{1}{ad-bc}\left(\begin{array}{cc}
d & -b \\ -c & a
\end{array}\right)$
Matrices with inverses are called nonsingular or invertible
A Matrix with no inverse, where $ad-bc=0$, is called a singular matrix
The Determinant
$ad-bc$ is the determinant from above.
Noted as:
$det(M)$ or $|M|$
with M as the Matrix
End of explanation
"""
# The same example but in 1 line of python
print(np.linalg.inv(np.array([[1,-1,3],[2,-1,4],[2,2,1]])))
"""
Explanation: For $3\times 3$ or more matrices
$M = \left(\begin{array}{ccc}
1 & -1 & 3 \\ 2 & -1 & 4 \\ 2 & 2 & 1
\end{array}\right)$
To find $M^{-1}$:
First, find the matrix of minors:
For each element in the matrix, remove the row and column it is in, so you're left with a 2x2 submatrix
if $M^{0,0}$ is chosen
$\left(\begin{array}{ccc}
T & - & - \\ - & -1 & 4 \\ - & 2 & 1
\end{array}\right)$
with T as $M^{0,0}$
Returns
$\left(\begin{array}{cc}
-1 & 4 \\ 2 & 1
\end{array}\right)$
or if $M^{1,2}$ is chosen
$\left(\begin{array}{ccc}
1 & -1 & - \\ - & - & T \\ 2 & 2 & -
\end{array}\right)$
with T as $M^{1,2}$
Returns
$\left(\begin{array}{cc}
1 & -1 \\ 2 & 2
\end{array}\right)$
Now find the determinant of this submatrix using $ad-bc$
det$\left(\begin{array}{cc}
-1 & 4 \\ 2 & 1
\end{array}\right)$ = $(-1\times 1) - (4\times 2) = -9$
Since this submatrix was chosen from the element $M^{0,0}$, the corresponding entry of our matrix of minors is $-9$
if we do this for every element:
Matrix of Minors $= \left(\begin{array}{ccc}
-9 & -6 & 6 \\ -7 & -5 & 4 \\ -1 & -2 & 1
\end{array}\right)$
Now we need to find the cofactors of the original matrix
We do this by multiplying each element by $(-1)^{i+j}$
for $M^{0,0}$, i and j = 0 so:
$-9\times (-1)^{0+0} = -9\times 1 = -9$
or for $M^{2,1}$, i = 2 and j = 1 so:
$-2\times (-1)^{2+1} = -2\times -1 = 2$
it can also be visualised as an array of alternating signs, always starting with a positive in the top-left corner:
$\left(\begin{array}{ccc}
+ & - & + \\ - & + & - \\ + & - & +
\end{array}\right)$
This gives us:
Matrix of Cofactors
$= \left(\begin{array}{ccc}
-9 & 6 & 6 \\ 7 & -5 & -4 \\ -1 & 2 & 1
\end{array}\right)$
Now find the determinant of the original matrix
Do this by multiplying each cofactor in one row by the corresponding element of the original matrix and summing. It doesn't matter which row you choose; the expansion gives the same value for every row
$|M| = (-9\times 1) + (6\times -1) + (6\times 3) = 3$
Now find the adjugate (also known as the adjoint)
All we do is transpose the matrix of cofactors
This will allow us to multiply it by the original matrix and return an identity matrix
adjugate
$= \left(\begin{array}{ccc}
-9 & 7 & -1 \\ 6 & -5 & 2 \\ 6 & -4 & 1
\end{array}\right)$
Finally, multiply it by $|M|^{-1}$
$M^{-1} = \frac{1}{3} \left(\begin{array}{ccc} -9 & 7 & -1 \\ 6 & -5 & 2 \\ 6 & -4 & 1\end{array}\right)$
We must multiply by $|M|^{-1}$ because multiplying any row of the adjugate by the corresponding column of the original matrix gives the determinant; that is exactly how we calculated the determinant in the first place
If the matrix is any bigger, do the same but start from larger submatrices, expanding until you are left with 2x2 matrices like in this example.
End of explanation
"""
|
Aniruddha-Tapas/Applied-Machine-Learning | Classification/Classifiying Ionosphere structure using K nearest neigbours algorithm.ipynb | mit | import csv
import numpy as np
# Size taken from the dataset and is known
X = np.zeros((351, 34), dtype='float')
y = np.zeros((351,), dtype='bool')
with open("data/Ionosphere/ionosphere.data", 'r') as input_file:
reader = csv.reader(input_file)
for i, row in enumerate(reader):
# Get the data, converting each item to a float
data = [float(datum) for datum in row[:-1]]
# Set the appropriate row in our dataset
X[i] = data
# 1 if the class is 'g', 0 otherwise
y[i] = row[-1] == 'g'
"""
Explanation: Classifying Ionosphere structure using K nearest neighbours algorithm
<hr>
Nearest neighbors
Amongst the standard machine learning algorithms, nearest neighbors is perhaps one of the most intuitive. To predict the class of a new sample, we look through the training dataset for the samples that are most similar to our new sample.
We take the most similar samples and predict the class that the majority of those samples have. As an example, we wish to predict the class of the '?', based on which class it is more similar to (represented here by having similar objects closer together). We find the five nearest neighbors, which are three triangles, one circle and one plus. There are more
triangles than circles or pluses, so the predicted class for the '?' is a triangle.
<img src = "images/knn.png">
[image source]
Nearest neighbors can be used for nearly any dataset; however, since we have to compute the distance between all pairs of samples, it can be very computationally expensive to do so.
For example, if there are 10 samples in the dataset, there are 45 unique distances
to compute. However, if there are 1000 samples, there are nearly 500,000!
Distance metrics
If we have two samples, we need to know how close they are to each other. Furthermore, we need to answer
questions such as: are these two samples more similar to each other than those two are?
The most common distance metric that you might have heard of is Euclidean
distance, which is the real-world straight-line distance. Formally, Euclidean distance is the square root of the sum of the squared
differences for each feature. It is intuitive, but it can give poor accuracy if some features take much larger values than others. It also gives poor results when lots of features have a value of 0, i.e. when our data is 'sparse'. There are other distance metrics in use; two commonly employed ones are the Manhattan and Cosine distances. The Manhattan distance is the sum of the absolute differences in each feature (with no squaring of the differences). While the Manhattan distance does suffer if
some features have larger values than others, the effect is not as dramatic as in the
case of Euclidean distance. Regardless, for the implementation of the KNN algorithm here, we will use the Euclidean distance.
Dataset
To understand KNN, we will use the Ionosphere dataset, which contains the recordings of many
high-frequency antennas. The aim of the antennas is to determine whether there is a
structure in the ionosphere, a region of the upper atmosphere. Readings that show a
structure are classified as good, while those that do not are classified as bad. Our aim is to determine whether a reading
is good or bad.
You can download the dataset from : http://archive.ics.uci.edu/ml/datasets/Ionosphere.
Save the ionosphere.data file from the Data Folder to a folder named "data" on your computer.
For each row in the dataset, there are 35 values. The first 34 are measurements taken
from the 17 antennas (two values for each antenna). The last is either 'g' or 'b'; that
stands for good and bad, respectively.
End of explanation
"""
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=14)
print("There are {} samples in the training dataset".format(X_train.shape[0]))
print("There are {} samples in the testing dataset".format(X_test.shape[0]))
print("Each sample has {} features".format(X_train.shape[1]))
"""
Explanation: First, we load up the NumPy and csv modules. Then we create the X and y NumPy arrays to store the dataset in. The sizes of these
arrays are known from the dataset. We take the first 34 values from each row, turn each into a float, and save them to
our dataset. Finally, we take the last value of the row and set the class. We set it to 1 (or True) if it
is a good sample, and 0 if it is not. We now have a dataset of samples and features in X, and the corresponding classes in y.
Estimators in scikit-learn have two main functions: fit() and predict().
We train the algorithm using the fit method and our training set. We evaluate it
using the predict method on our testing set.
First, we need to create these training and testing sets. As before, import and run the
train_test_split function:
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
estimator = KNeighborsClassifier()
"""
Explanation: Then, we import the nearest neighbor class and create an instance for it using the default parameters. By default, the algorithm will choose the five nearest neighbors to predict
the class of a testing sample:
End of explanation
"""
estimator.fit(X_train, y_train)
y_predicted = estimator.predict(X_test)
accuracy = np.mean(y_test == y_predicted) * 100
print("The accuracy is {0:.1f}%".format(accuracy))
"""
Explanation: After creating our estimator, we must then fit it on our training dataset. For the
nearest neighbor class, this records our dataset, allowing us to find the nearest
neighbor for a new data point, by comparing that point to the training dataset:
estimator.fit(X_train, y_train)
We then make predictions for our testing set and evaluate the accuracy against the known classes:
End of explanation
"""
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(estimator, X, y, scoring='accuracy')
average_accuracy = np.mean(scores) * 100
print("The average accuracy is {0:.1f}%".format(average_accuracy))
"""
Explanation: This scores 86.4 percent accuracy, which is impressive for a default algorithm and
just a few lines of code! Most scikit-learn default parameters are chosen explicitly
to work well with a range of datasets. However, you should always aim to choose
parameters based on knowledge of the application and on your own experimentation.
End of explanation
"""
avg_scores = []
all_scores = []
parameter_values = list(range(1, 21)) # Including 20
for n_neighbors in parameter_values:
estimator = KNeighborsClassifier(n_neighbors=n_neighbors)
scores = cross_val_score(estimator, X, y, scoring='accuracy')
avg_scores.append(np.mean(scores))
all_scores.append(scores)
"""
Explanation: Using cross validation, this gives a slightly more modest result of 82.3 percent, but it is still quite good
considering we have not yet tried setting better parameters.
Tuning parameters
Almost all data mining algorithms have parameters that the user can set. This is
often a consequence of generalizing an algorithm to allow it to be applicable in a wide
variety of circumstances. Setting these parameters can be quite difficult, as choosing
good parameter values is often highly reliant on features of the dataset.
The nearest neighbor algorithm has several parameters, but the most important
one is the number of nearest neighbors to use when predicting the class of
an unseen sample. In scikit-learn, this parameter is called n_neighbors.
In the following figure, we show that when this number is too low, a randomly
labeled sample can cause an error. In contrast, when it is too high, the actual nearest
neighbors have a lower effect on the result.
If we want to test a number of values for the n_neighbors parameter, for example,
each of the values from 1 to 20, we can rerun the experiment many times by setting
n_neighbors and observing the result:
End of explanation
"""
%matplotlib inline
"""
Explanation: We compute and store the average in our list of scores. We also store the full set of
scores for later analysis. We can then plot the relationship between the value of n_neighbors and the
accuracy.
End of explanation
"""
from matplotlib import pyplot as plt
plt.figure(figsize=(32,20))
plt.plot(parameter_values, avg_scores, '-o', linewidth=5, markersize=24)
#plt.axis([0, max(parameter_values), 0, 1.0])
"""
Explanation: We then import pyplot from the matplotlib library and plot the parameter values
alongside average scores:
End of explanation
"""
X_broken = np.array(X)
"""
Explanation: While there is a lot of variance, the plot shows a decreasing trend as the number of
neighbors increases.
Preprocessing using pipelines
When taking measurements of real-world objects, we can often get features in
very different ranges. As we saw in the case of classifying Animal data using Naive Bayes, if we are measuring the qualities of an animal,
we might consider several features, as follows:
Number of legs: This is between the range of 0-8 for most animals, while
some have many more!
Weight: This is between the range of only a few micrograms, all the way
to a blue whale with a weight of 190,000 kilograms!
Number of hearts: This can be between zero to five, in the case of
the earthworm.
For a mathematically-based algorithm to compare these features, the differences in scale, range, and units can be difficult to handle. If we used the above features in many algorithms, the weight would probably end up as the most
influential feature, simply because its values are much larger, and not because of anything to do with the actual usefulness of the feature.
One of the methods to overcome this is to use a process called preprocessing to normalize the features so that they all have the same range, or are put into categories like small, medium and large. Suddenly, the large difference in the
types of features has less of an impact on the algorithm, and can lead to large
increases in the accuracy.
Preprocessing can also be used to choose only the more effective features, create new
features, and so on. Preprocessing in scikit-learn is done through Transformer
objects, which take a dataset in one form and return an altered dataset after some
transformation of the data. These don't have to be numerical, as Transformers are also
used to extract features; however, in this section, we will stick with preprocessing.
An example
We can show an example of the problem by breaking the Ionosphere dataset.
While this is only an example, many real-world datasets have problems of this
form. First, we create a copy of the array so that we do not alter the original dataset:
End of explanation
"""
X_broken[:,::2] /= 10
"""
Explanation: Next, we break the dataset by dividing every second feature by 10:
End of explanation
"""
estimator = KNeighborsClassifier()
original_scores = cross_val_score(estimator, X, y,scoring='accuracy')
print("The original average accuracy for is {0:.1f}%".format(np.mean(original_scores) * 100))
broken_scores = cross_val_score(estimator, X_broken, y,scoring='accuracy')
print("The 'broken' average accuracy for is {0:.1f}%".format(np.mean(broken_scores) * 100))
"""
Explanation: In theory, this should not have a great effect on the result. After all, the relative
ordering of the values within each feature is unchanged. The major issue is that the scale has
changed: the divided features are now ten times smaller than the rest. We can see the
effect of this by computing the accuracy:
End of explanation
"""
from sklearn.preprocessing import MinMaxScaler
"""
Explanation: This gives a score of 82.3 percent for the original dataset, which drops down to
71.5 percent on the broken dataset. We can fix this by scaling all the features to
the range 0 to 1.
Standard preprocessing
The preprocessing we will perform for this experiment is called feature-based
normalization through the MinMaxScaler class.
End of explanation
"""
X_transformed = MinMaxScaler().fit_transform(X)
"""
Explanation: This class takes each feature and scales it to the range 0 to 1. The minimum value is
replaced with 0, the maximum with 1, and the other values somewhere in between.
To apply our preprocessor, we run the transform function on it. While MinMaxScaler
doesn't require it, some transformers need to be fitted first, in the same way that classifiers
do. We can combine the fit and transform steps by running the fit_transform function instead:
End of explanation
"""
X_transformed = MinMaxScaler().fit_transform(X_broken)
estimator = KNeighborsClassifier()
transformed_scores = cross_val_score(estimator, X_transformed, y,scoring='accuracy')
print("The average accuracy for is {0:.1f}%".format(np.mean(transformed_scores) * 100))
"""
Explanation: Here, X_transformed will have the same shape as X. However, each column will
have a maximum of 1 and a minimum of 0.
There are various other forms of normalizing in this way, which is effective for other
applications and feature types:
* Ensure the sum of the values for each sample equals 1, using sklearn.preprocessing.Normalizer
* Force each feature to have a zero mean and a variance of 1, using sklearn.preprocessing.StandardScaler, which is a commonly used starting point for normalization
* Turn numerical features into binary features, where any value above a threshold is 1 and any below is 0, using sklearn.preprocessing.Binarizer
We can now create a workflow by combining the code from the previous sections,
using the broken dataset previously calculated:
End of explanation
"""
from sklearn.pipeline import Pipeline
"""
Explanation: This gives us back our score of 82.3 percent accuracy. The MinMaxScaler resulted in
features of the same scale, meaning that no features overpowered others by simply
being bigger values. While the Nearest Neighbor algorithm can be confused with
larger features, some algorithms handle scale differences better. In contrast, some
are much worse!
Pipelines
As experiments grow, so does the complexity of the operations. We may split up
our dataset, binarize features, perform feature-based scaling, perform sample-based
scaling, and many more operations.
Keeping track of all of these operations can get quite confusing and can result in
being unable to replicate the result. Problems include forgetting a step, incorrectly
applying a transformation, or adding a transformation that wasn't needed.
Another issue is the order of the code. In the previous section, we created our
X_transformed dataset and then created a new estimator for the cross validation.
If we had multiple steps, we would need to track all of these changes to the dataset
in the code.
Pipelines are a construct that addresses these problems (and others, which we will
see in the next chapter). Pipelines store the steps in your data mining workflow. They
can take your raw data in, perform all the necessary transformations, and then create
a prediction. This allows us to use pipelines in functions such as cross_val_score,
where they expect an estimator. First, import the Pipeline object:
End of explanation
"""
scaling_pipeline = Pipeline([('scale', MinMaxScaler()),
('predict', KNeighborsClassifier())])
"""
Explanation: Pipelines take a list of steps as input, representing the chain of the data mining
application. The last step needs to be an Estimator, while all previous steps are
Transformers. The input dataset is altered by each Transformer, with the output of
one step being the input of the next step. Finally, the samples are classified by the last
step's estimator. In our pipeline, we have two steps:
1. Use MinMaxScaler to scale the feature values from 0 to 1
2. Use KNeighborsClassifier as the classification algorithms
Each step is then represented by a tuple ('name', step). We can then create
our pipeline:
End of explanation
"""
scores = cross_val_score(scaling_pipeline, X_broken, y, scoring='accuracy')
print("The pipeline scored an average accuracy for is {0:.1f}%".format(np.mean(transformed_scores) * 100))
"""
Explanation: The key here is the list of tuples. The first tuple is our scaling step and the second
tuple is the predicting step. We give each step a name: the first we call scale and the
second we call predict, but you can choose your own names. The second part of the
tuple is the actual Transformer or estimator object.
Running this pipeline is now very easy, using the cross validation code from before:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/b36af73820a7a52a4df3c42b66aef8a5/source_power_spectrum_opm.ipynb | bsd-3-clause | # Authors: Denis Engemann <[email protected]>
# Luke Bloy <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
from mne.filter import next_fast_len
import mne
print(__doc__)
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
src_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)
vv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'
vv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'
vv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'
opm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'
opm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'
opm_trans_fname = None
opm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
"""
Explanation: Compute source power spectral density (PSD) of VectorView and OPM data
Here we compute the resting state from raw for data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm :footcite:TadelEtAl2011
OMEGA resting tutorial pipeline <bst_omega_>_.
The steps we use are:
Filtering: downsample heavily.
Artifact detection: use SSP for EOG and ECG.
Source localization: dSPM, depth weighting, cortically constrained.
Frequency: power spectral density (Welch), 4 sec window, 50% overlap.
Standardize: normalize by relative power for each source.
Preprocessing
End of explanation
"""
raws = dict()
raw_erms = dict()
new_sfreq = 90. # Nyquist frequency (45 Hz) < line noise freq (50 Hz)
raws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error') # ignore naming
raws['vv'].load_data().resample(new_sfreq)
raws['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fname, verbose='error')
raw_erms['vv'].load_data().resample(new_sfreq)
raw_erms['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raws['opm'] = mne.io.read_raw_fif(opm_fname)
raws['opm'].load_data().resample(new_sfreq)
raw_erms['opm'] = mne.io.read_raw_fif(opm_erm_fname)
raw_erms['opm'].load_data().resample(new_sfreq)
# Make sure our assumptions later hold
assert raws['opm'].info['sfreq'] == raws['vv'].info['sfreq']
"""
Explanation: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
End of explanation
"""
titles = dict(vv='VectorView', opm='OPM')
ssp_ecg, _ = mne.preprocessing.compute_proj_ecg(
raws['vv'], tmin=-0.1, tmax=0.1, n_grad=1, n_mag=1)
raws['vv'].add_proj(ssp_ecg, remove_existing=True)
# due to how compute_proj_eog works, it keeps the old projectors, so
# the output contains both projector types (and also the original empty-room
# projectors)
ssp_ecg_eog, _ = mne.preprocessing.compute_proj_eog(
raws['vv'], n_grad=1, n_mag=1, ch_name='MEG0112')
raws['vv'].add_proj(ssp_ecg_eog, remove_existing=True)
raw_erms['vv'].add_proj(ssp_ecg_eog)
fig = mne.viz.plot_projs_topomap(raws['vv'].info['projs'][-4:],
info=raws['vv'].info)
fig.suptitle(titles['vv'])
fig.subplots_adjust(0.05, 0.05, 0.95, 0.85)
"""
Explanation: Do some minimal artifact rejection just for VectorView data
End of explanation
"""
kinds = ('vv', 'opm')
n_fft = next_fast_len(int(round(4 * new_sfreq)))
print('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))
for kind in kinds:
fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)
fig.suptitle(titles[kind])
fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
"""
Explanation: Explore data
End of explanation
"""
# Here we use a reduced size source space (oct5) just for speed
src = mne.setup_source_space(
subject, 'oct5', add_dist=False, subjects_dir=subjects_dir)
# This line removes source-to-source distances that we will not need.
# We only do it here to save a bit of memory, in general this is not required.
del src[0]['dist'], src[1]['dist']
bem = mne.read_bem_solution(bem_fname)
fwd = dict()
# check alignment and generate forward for VectorView
kwargs = dict(azimuth=0, elevation=90, distance=0.6, focalpoint=(0., 0., 0.))
fig = mne.viz.plot_alignment(
raws['vv'].info, trans=vv_trans_fname, subject=subject,
subjects_dir=subjects_dir, dig=True, coord_frame='mri',
surfaces=('head', 'white'))
mne.viz.set_3d_view(figure=fig, **kwargs)
fwd['vv'] = mne.make_forward_solution(
raws['vv'].info, vv_trans_fname, src, bem, eeg=False, verbose=True)
"""
Explanation: Alignment and forward
End of explanation
"""
with mne.use_coil_def(opm_coil_def_fname):
fig = mne.viz.plot_alignment(
raws['opm'].info, trans=opm_trans_fname, subject=subject,
subjects_dir=subjects_dir, dig=False, coord_frame='mri',
surfaces=('head', 'white'))
mne.viz.set_3d_view(figure=fig, **kwargs)
fwd['opm'] = mne.make_forward_solution(
raws['opm'].info, opm_trans_fname, src, bem, eeg=False, verbose=True)
del src, bem
"""
Explanation: And for OPM:
End of explanation
"""
freq_bands = dict(
delta=(2, 4), theta=(5, 7), alpha=(8, 12), beta=(15, 29), gamma=(30, 45))
topos = dict(vv=dict(), opm=dict())
stcs = dict(vv=dict(), opm=dict())
snr = 3.
lambda2 = 1. / snr ** 2
for kind in kinds:
noise_cov = mne.compute_raw_covariance(raw_erms[kind])
inverse_operator = mne.minimum_norm.make_inverse_operator(
raws[kind].info, forward=fwd[kind], noise_cov=noise_cov, verbose=True)
stc_psd, sensor_psd = mne.minimum_norm.compute_source_psd(
raws[kind], inverse_operator, lambda2=lambda2,
n_fft=n_fft, dB=False, return_sensor=True, verbose=True)
topo_norm = sensor_psd.data.sum(axis=1, keepdims=True)
stc_norm = stc_psd.sum() # same operation on MNE object, sum across freqs
# Normalize each source point by the total power across freqs
for band, limits in freq_bands.items():
data = sensor_psd.copy().crop(*limits).data.sum(axis=1, keepdims=True)
topos[kind][band] = mne.EvokedArray(
100 * data / topo_norm, sensor_psd.info)
stcs[kind][band] = \
100 * stc_psd.copy().crop(*limits).sum() / stc_norm.data
del inverse_operator
del fwd, raws, raw_erms
"""
Explanation: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
End of explanation
"""
def plot_band(kind, band):
"""Plot activity within a frequency band on the subject's brain."""
title = "%s %s\n(%d-%d Hz)" % ((titles[kind], band,) + freq_bands[band])
topos[kind][band].plot_topomap(
times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno',
time_format=title)
brain = stcs[kind][band].plot(
subject=subject, subjects_dir=subjects_dir, views='cau', hemi='both',
time_label=title, title=title, colormap='inferno',
time_viewer=False, show_traces=False,
clim=dict(kind='percent', lims=(70, 85, 99)), smoothing_steps=10)
brain.show_view(dict(azimuth=0, elevation=0), roll=0)
return fig, brain
fig_theta, brain_theta = plot_band('vv', 'theta')
"""
Explanation: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Theta
End of explanation
"""
fig_alpha, brain_alpha = plot_band('vv', 'alpha')
"""
Explanation: Alpha
End of explanation
"""
fig_beta, brain_beta = plot_band('vv', 'beta')
"""
Explanation: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors. VectorView first:
End of explanation
"""
fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')
"""
Explanation: Then OPM:
End of explanation
"""
fig_gamma, brain_gamma = plot_band('vv', 'gamma')
"""
Explanation: Gamma
End of explanation
"""
|
chi-hung/SementicProj | webCrawler/amzProd.ipynb | mit | %watermark
"""
Explanation: This notebook is written by Yishin and Chi-Hung.
End of explanation
"""
def getVacuumTypeUrl(vacuumType,pageNum=1):
vcleaners={"central":11333709011,"canister":510108,"handheld":510114,"robotic":3743561,"stick":510112,"upright":510110,"wetdry":553022}
url_type_base="https://www.amazon.com/home-garden-kitchen-furniture-bedding/b/ref=sr_pg_"+str(pageNum)+"?ie=UTF8&node="
url=url_type_base+str(vacuumType)+"&page="+str(pageNum)
print (url)
return url
vcleaners={"central":11333709011,"canister":510108,"handheld":510114,"robotic":3743561,"stick":510112,"upright":510110,"wetdry":553022}
for key in vcleaners:
print(key,vcleaners[key])
getVacuumTypeUrl(vcleaners[key])
"""
Explanation: First of all, we know that there are 7 types of vacuums on Amazon
End of explanation
"""
def getFinalPageNum(url,maxretrytime=20):
passed=False
cnt=0
while(passed==False):
cnt+=1
print("iteration from getFinalPageNum=",cnt)
if(cnt>maxretrytime):
raise Exception("Error from getFinalPageNum(url)! Tried too many times but we are still blocked by Amazon.")
try:
with requests.Session() as session:
session.headers = {'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0"}
r=session.get(url)
if (r.status_code==200):
soup=BeautifulSoup(r.content,"lxml")
if("Robot Check" in soup.text):
print("we are blocked!")
else:
tagsFinalPageNum=soup.select("span[class='pagnDisabled']")
finalPageNum=str(tagsFinalPageNum[0].text)
passed=True
else:
print("Connection failed. Reconnecting...")
except:
print("Error from getFinalPageNum(url)! Probably due to connection time out")
return finalPageNum
def InferFinalPageNum(vacuumType,pageNum=1,times=10):
url=getVacuumTypeUrl(vacuumType,pageNum)
list_finalpageNum=[]
for j in range(times):
finalpageNum=getFinalPageNum(url)
list_finalpageNum.append(finalpageNum)
    FinalpageNum=min(list_finalpageNum, key=int)  # compare the page numbers numerically (they are strings)
return FinalpageNum
FinalPageNum=InferFinalPageNum(510114,pageNum=1)
print('FinalPageNum=',FinalPageNum)
"""
Explanation: The following are two functions with which we aim to obtain the total number of pages for each vacuum type
End of explanation
"""
def urlsGenerator(typenode,FinalPageNum):
#Note: 'typenode' and 'FinalpageNum' are both string
URLs=[]
pageIdx=1
while(pageIdx<=int(FinalPageNum)):
url_Type="https://www.amazon.com/home-garden-kitchen-furniture-bedding/b/ref=sr_pg_"+str(pageIdx)+"?ie=UTF8&node="
url=url_Type+str(typenode)+"&page="+str(pageIdx)
#print(url)
URLs.append(url)
pageIdx+=1
return URLs
"""
Explanation: So, right now, we are able to infer the total number of pages of a specific vacuum type.
The next step is to generate all URLs of the selected vacuum type:
End of explanation
"""
URLs=urlsGenerator(510114,FinalPageNum)
len(URLs)
for url in URLs:
print(url)
"""
Explanation: For the moment, let us choose the vacuum type "handheld":
End of explanation
"""
def soupGenerator(URLs,maxretrytime=20):
soups=[]
urlindex=0
for URL in URLs:
urlindex+=1
print("urlindex=",urlindex)
passed=False
cnt=0
while(passed==False):
cnt+=1
print("iteration=",cnt)
if(cnt>maxretrytime):
raise Exception("Error from soupGenerator(url,maxretrytime=20)! Tried too many times but we are still blocked by Amazon.")
try:
with requests.Session() as session:
session.headers = {'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0"}
r=session.get(URL)
if (r.status_code==200):
soup=BeautifulSoup(r.content,"lxml")
if("Robot Check" in soup.text):
print("we are blocked!")
else:
print("we are not blocked!")
soups.append(soup)
passed=True
else:
print ("Connection failed. Reconnecting...")
except:
print("Error from soupGenerator(URLs,maxretrytime=20)! Probably due to connection time out")
return soups
soups=soupGenerator(URLs,maxretrytime=20)
"""
Explanation: Next, we'd like to obtain all the "soups" of the vacuum type "handheld" and store them into a list
End of explanation
"""
print(len(soups))
"""
Explanation: How many soups have we created?
End of explanation
"""
example='''
<span class="abc">
<div>
<a href="http://123xyz.com"></a>
hello_div01
</div>
</span>
<span class="def">
<a href="http://www.go.123xyz"></a>
<div>hello_div02</div>
</span>
'''
mysoup=BeautifulSoup(example,"lxml")
print(mysoup.prettify())
"""
Explanation: Let us pause for a while. We would like to review the usage of CSS selectors
End of explanation
"""
mysoup.select(".abc a")
mysoup.select(".abc > a")
"""
Explanation: Exercise: look for a specific tag which is a descendant of some other tag
End of explanation
"""
mysoup.select(".abc > div")
"""
Explanation: the symbol > indicates that we'd like to look for <a> tags which are direct children of the tag whose class=abc.
If we use ".abc a", it means that we would like to find all <a> descendants of the tag whose class=abc.
End of explanation
"""
mysoup.select("a[href^='http']")
"""
Explanation: Exercise: we look for the tags whose value of the attr href starts with "http"
End of explanation
"""
mysoup.select("a[href$='http']")
"""
Explanation: Exercise: we look for the tags whose value of the attr href ends with "http"
End of explanation
"""
mysoup.select(".abc a")[0]["href"]
"""
Explanation: Exercise: extract the value of a specific attr of a specific tag
End of explanation
"""
sp=soups[70].select('li[id^="result_"]')[0]
print(sp)
for s in sp:
try:
print(sp.span)
except:
print("error")
"""
Explanation: more info about CSS selectors:
https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors
http://wiki.jikexueyuan.com/project/python-crawler-guide/beautiful-soup.html
End of explanation
"""
URLs=urlsGenerator(510114,FinalPageNum)
len(URLs)
print(URLs[0])
#for url in URLs:
# print(url)
"""
Explanation: Let's go back.
First of all, let us look for the Product URL of the first item of the first page
print the link of the first page:
End of explanation
"""
soups[0].select('li[id^="result_"]')[0].select("a[class='a-link-normal s-access-detail-page a-text-normal']")[0]
"""
Explanation: We found that the Product URL of the first item can be extracted via:
End of explanation
"""
csrev_tag=soups[0].select('li[id^="result_"]')[0].select("a[href$='customerReviews']")[0]
print(csrev_tag)
"""
Explanation: where we have used the fact that each item has one unique id.
Now, we have another goal: obtain the total number of customer reviews of the selected item (first item in the first page). Doing so we are also able to obtain the link of that item, which is pretty nice, since the item name and the item ID can be extracted from that link.
End of explanation
"""
csrev_tag.parent
csrev_tag.parent.previous_sibling.previous_sibling
pricetag=csrev_tag.parent.previous_sibling.previous_sibling
price=pricetag.select(".sx-price-whole")[0].text
fraction_price=pricetag.select(".sx-price-fractional")[0].text
print(price,fraction_price)
print(int(price)+0.01*int(fraction_price))
"""
Explanation: This means we are able to obtain the total number of customer reviews (10,106) and also the link of the selected item:
https://www.amazon.com/BLACK-DECKER-CHV1410L-Cordless-Lithium/dp/B006LXOJC0/ref=lp_510114_1_1/157-7476471-7904367?s=vacuums&ie=UTF8&qid=1485361951&sr=1-1
The above link will then be replaced by the following one:
https://www.amazon.com/BLACK-DECKER-CHV1410L-Cordless-Lithium/product-reviews/B006LXOJC0/ref=cm_cr_getr_d_paging_btm_1?ie=UTF8&pageNumber=1&reviewerType=all_reviews&pageSize=1000
which shows 50 customer reviews per page (instead of 10 reviews per page by default).
Another Goal: We'd like to obtain the price of the selected item
Now, let's look for more information, e.g. the price of the selected product. We know that the tag we have found sits near the end of a larger tag which contains all the info of a specific item. To retrieve more info of that item, we'll move from the end of that tag towards the front, step by step.
End of explanation
"""
pricetag.parent
pricetag.previous_sibling.parent.select(".a-size-small")[2].text
"""
Explanation: so, we are able to obtain the price of the selected item.
Yet Another Goal: Let's see if we can obtain the brand of the selected item
End of explanation
"""
for j in range(30):
try:
#selected=soups[2].select('li[id^="result_"]')[j].select_one("span[class='a-declarative']")
selected=soups[2].select('li[id^="result_"]')[j].select_one("i[class='a-icon a-icon-popover']").previous_sibling
print(len(selected),selected.string.split(" ")[0])
except:
print("index= ",j,", 0 stars (no reviews yet)")
print(soups[10].select('li[id^="result_"]')[0].find_all("a")[2]["href"]) # 5stars (although only 2 reviews)
print(soups[12].select('li[id^="result_"]')[0].find_all("a")[2]["href"]) # 0 start (no customer reviews yet)
"""
Explanation: Another goal: the average number of stars of the selected item
End of explanation
"""
def items_info_extractor(soups):
item_links=[]
item_num_of_reviews=[]
item_prices=[]
item_names=[]
item_ids=[]
item_brands=[]
item_avestars=[]
for soup in soups:
items=soup.select('li[id^="result_"]')
for item in items:
link_item=item.select("a[href$='customerReviews']")
# ignore those items which contains 0 customer reviews. Those items are irrelevent to us.
if (link_item !=[]):
price_tag=link_item[0].parent.previous_sibling.previous_sibling
price_main_tag=price_tag.select(".sx-price-whole")
price_fraction_tag=price_tag.select(".sx-price-fractional")
link=link_item[0]["href"]
# Ignore items which don't have normal price tags.
# Those are items which are not sold by Amazon directly.
# Also, remove those items which are ads (3 ads are shown in each page).
if((price_main_tag !=[]) & (price_fraction_tag !=[]) & (link.endswith("spons#customerReviews") == False)):
# extract the item's name and ID from the obtained link
item_name=link.split("/")[3]
item_id=link.split("/")[5]
# replace the obtained link by the link that will lead to the customer reviews
base_url="https://www.amazon.com/"
link=base_url+item_name+"/product-reviews/"+item_id+"/ref=cm_cr_getr_d_paging_btm_" \
+str(1)+"?ie=UTF8&pageNumber="+str(1)+"&reviewerType=all_reviews&pageSize=1000"
# obtain the price of the selected single item
price_main=price_main_tag[0].text
price_fraction=price_fraction_tag[0].text
item_price=int(price_main)+0.01*int(price_fraction)
# obtain the brand of the selected single item
item_brand=price_tag.parent.select(".a-size-small")[1].text
if(item_brand=="by "):
item_brand=price_tag.parent.select(".a-size-small")[2].text
# obtain the number of reviews of the selected single item
item_num_of_review=int(re.sub(",","",link_item[0].text))
# obtain the averaged number of stars
starSelect=item.select_one("span[class='a-declarative']")
if((starSelect is None) or (starSelect.span is None)): # there are no reviews yet (hence, we see no stars at all)
item_avestar=0
else:
item_avestar=starSelect.span.string.split(" ")[0] # there are some reviews. So, we are able to extract the averaged number of stars
# store the obtained variables into lists
item_links.append(link)
item_num_of_reviews.append(item_num_of_review)
item_prices.append(item_price)
item_names.append(item_name)
item_ids.append(item_id)
item_brands.append(item_brand)
item_avestars.append(item_avestar)
return item_brands,item_ids,item_names,item_prices,item_num_of_reviews,item_links,item_avestars
item_brands,item_ids,item_names,item_prices,item_num_of_reviews,item_links,item_avestars=items_info_extractor(soups)
print(len(item_ids))
print(len(set(item_ids)))
print(len(item_names))
print(len(set(item_names)))
print(len(item_links))
print(len(set(item_links)))
"""
Explanation: Now we are ready to merge all the ingredients learned from the above code blocks into one function
End of explanation
"""
import collections
item_names_repeated=[]
for key in collections.Counter(item_names):
if collections.Counter(item_names)[key]>1:
print(key,collections.Counter(item_names)[key])
item_names_repeated.append(key)
#print [item for item, count in collections.Counter(a).items() if count > 1]
print(item_names_repeated)
items_repeated=[]
for name,link,price,numrev in zip(item_names,item_links,item_prices,item_num_of_reviews):
if name in item_names_repeated:
#print(name,link,"\n")
items_repeated.append((name,link,price,numrev))
"""
Explanation: The above results indicate that there are items that have the same product name but different links.
Cool. Let's find those products.
End of explanation
"""
items_repeated=sorted(items_repeated, key=lambda x: x[0])
print("item name, item link, item price, total # of reviews of that item","\n")
for idx,(name,link,price,numrev) in enumerate(items_repeated):
if((idx+1)%2==0):
print(name,link,price,numrev,"\n")
else:
print(name,link,price,numrev)
"""
Explanation: sort a list with the built-in function sorted() (here, a "key" has to be given)
End of explanation
"""
for id in item_ids:
if("B006LXOJC0" in id):
print(id)
df=pd.DataFrame.from_items([("pindex",item_ids),("type","handheld"),("pname",item_names),("brand",item_brands),("price",item_prices),("rurl",item_links),("totalRev",item_num_of_reviews),("avgStars",item_avestars)])
df.loc[:,["rurl","avgStars","totalRev"]]
"""
Explanation: What's found
* Each of the 7 items above has two different links/IDs (probably due to different color or seller) and varying prices.
Now, let's try to merge the obtained data into a pandas DataFrame
Reference: http://pbpython.com/pandas-list-dict.html
End of explanation
"""
from sqlalchemy import create_engine,Table,Column,Integer,String,MetaData,ForeignKey,Date
import pymysql
engine=create_engine("mysql+pymysql://semantic:[email protected]:13606/Tests?charset=utf8",echo=False, encoding='utf-8')
conn = engine.connect()
df.to_sql(name='amzProd', con=conn, if_exists = 'append', index=False)
conn.close()
"""
Explanation: Let's upload the obtained dataframe to MariaDB
End of explanation
"""
df.to_csv("ProdInfo_handheld_26012017.csv", encoding="utf-8")
"""
Explanation: Alternatively, we can store the obtained dataframe into a csv file
End of explanation
"""
pd.DataFrame.from_csv("ProdInfo_handheld_26012017.csv", encoding="utf-8")
"""
Explanation: And load it:
End of explanation
"""
from sqlalchemy import create_engine,Table,Column,Integer,String,MetaData,ForeignKey,Date
import pymysql
import datetime
"""
Explanation: Upload the obtained CSV files to the remote MariaDB
End of explanation
"""
pd.set_option('max_colwidth', 800)
for idx,df in enumerate(dfs):
print(idx,df.loc[df['pindex'] == 'B00SWGVICS'])
"""
Explanation: I found out that there might be the same pindex in one dataframe. This can lead to an error when we upload our data to MariaDB, as the primary key ought to be unique.
End of explanation
"""
import os
from IPython.display import display
cwd=os.getcwd()
print(cwd)
"""
Explanation: Strategy: Store all csvs into one dataframe. Then, remove all duplicates before uploading to the DataBase.
End of explanation
"""
test_col = pd.DataFrame.from_items([("test_column1",np.arange(10))])
test_col2 = pd.DataFrame.from_items([("test_column2",5+np.arange(10))])
display(test_col,test_col2)
result = pd.concat([test_col, test_col2], axis=1)
display(result)
"""
Explanation: Now, it's time to get to know the Pandas Dataframe better. I'd like to figure out how two dataframes can be merged horizontally.
a one-column example: pd.DataFrame.from_items()
End of explanation
"""
date="2017-02-01"
prodTypes=["central","canister","handheld","robotic","stick","upright","wetdry"]
# put all the dataframes into a list
dfs=[pd.DataFrame.from_csv("data/ProdInfo_%s_%s.csv"%(prodType,date), encoding="utf-8") for prodType in prodTypes]
for idx,df in enumerate(dfs):
cID=[j%7 for j in range(df.shape[0])]
colCID=pd.DataFrame.from_items([( "cID",cID )])
dfs[idx]=pd.concat([df, colCID], axis=1)
# concatenate dataframes
df=pd.concat(dfs).drop_duplicates("rurl")
df.to_csv("ProdInfo_all_%s.csv"%(date), encoding="utf-8")
date="2017-02-01"
date="2017-02-06"
prodTypes=["central","canister","handheld","robotic","stick","upright","wetdry"]
# put all the dataframes into a list
dfs=[pd.DataFrame.from_csv("data/ProdInfo_%s_%s.csv"%(prodType,date), encoding="utf-8") for prodType in prodTypes]
for idx,df in enumerate(dfs):
cID=[j%7 for j in range(df.shape[0])]
colCID=pd.DataFrame.from_items([( "cID",cID )])
dfs[idx]=pd.concat([df, colCID], axis=1)
# concatenate dataframes
df=pd.concat(dfs).drop_duplicates("rurl")
# prepare the connection and connect to the DB
engine=create_engine("mysql+pymysql://semantic:[email protected]:13606/Tests?charset=utf8",echo=False, encoding='utf-8')
conn = engine.connect()
# remove duplicates and upload the concatenated dataframe to the SQL DataBase
df.to_sql(name='amzProd', con=conn, if_exists = 'append', index=False)
# close the connection
conn.close()
len(df.iloc[974]["brand"])
df.iloc[463]["pname"]
!echo "Handheld-Vacuum-Cleaner-Abask-Vacuum-Cleaner-7-2V-60W-Ni-CD2200MA-3-5KPA-Suction-Portable-1-Accessories-Rechargeable-Cordless-Cleaner"| wc
"""
Explanation:
End of explanation
"""
|
cfjhallgren/shogun | doc/ipython-notebooks/statistical_testing/mmd_two_sample_testing.ipynb | gpl-3.0 | %pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import shogun as sg
import numpy as np
"""
Explanation: Kernel hypothesis testing in Shogun
Heiko Strathmann - [email protected] - http://github.com/karlnapf - http://herrstrathmann.de
Soumyajit De - [email protected] - http://github.com/lambday
This notebook describes Shogun's framework for <a href="http://en.wikipedia.org/wiki/Statistical_hypothesis_testing">statistical hypothesis testing</a>. We begin by giving a brief outline of the problem setting and then describe various implemented algorithms.
All algorithms discussed here are instances of <a href="http://en.wikipedia.org/wiki/Kernel_embedding_of_distributions#Kernel_two_sample_test">kernel two-sample testing</a> with the maximum mean discrepancy, and are based on embedding probability distributions into <a href="http://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space">Reproducing Kernel Hilbert Spaces</a> (RKHS).
There are two types of tests available, a quadratic time test and a linear time test. Both come in various flavours.
End of explanation
"""
# use scipy for generating samples
from scipy.stats import laplace, norm
def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=np.sqrt(0.5)):
# sample from both distributions
X=norm.rvs(size=n)*np.sqrt(sigma2)+mu
Y=laplace.rvs(size=n, loc=mu, scale=b)
return X,Y
mu=0.0
sigma2=1
b=np.sqrt(0.5)
n=220
X,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)
# plot both densities and histograms
plt.figure(figsize=(18,5))
plt.suptitle("Gaussian vs. Laplace")
plt.subplot(121)
Xs=np.linspace(-2, 2, 500)
plt.plot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))
plt.plot(Xs, laplace.pdf(Xs, loc=mu, scale=b))
plt.title("Densities")
plt.xlabel("$x$")
plt.ylabel("$p(x)$")
plt.subplot(122)
plt.hist(X, alpha=0.5)
plt.xlim([-5,5])
plt.ylim([0,100])
plt.hist(Y,alpha=0.5)
plt.xlim([-5,5])
plt.ylim([0,100])
plt.legend(["Gaussian", "Laplace"])
plt.title('Samples');
"""
Explanation: Some Formal Basics (skip if you just want code examples)
To set the context, we here briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called null-hypothesis $H_0:p=q$, which is the complement of an alternative-hypothesis $H_A$.
To distinguish the hypotheses, a test statistic is computed on sample data. Since sample data is finite, this corresponds to sampling the true distribution of the test statistic. There are two different distributions of the test statistic -- one for each hypothesis. The null-distribution corresponds to test statistic samples under the model that $H_0$ holds; the alternative-distribution corresponds to test statistic samples under the model that $H_A$ holds.
In practice, one tries to compute the quantile of the test statistic in the null-distribution. In case the test statistic is in a high quantile, i.e. it is unlikely that the null-distribution has generated the test statistic -- the null-hypothesis $H_0$ is rejected.
There are two different kinds of errors in hypothesis testing:
A type I error is made when $H_0: p=q$ is wrongly rejected. That is, the test says that the samples are from different distributions when they are not.
A type II error is made when $H_A: p\neq q$ is wrongly accepted. That is, the test says that the samples are from the same distribution when they are not.
A so-called consistent test achieves zero type II error for a fixed type I error, as it sees more data.
To decide whether to reject $H_0$, one could set a threshold, say at the $95\%$ quantile of the null-distribution, and reject $H_0$ when the test statistic lies below that threshold. This means that the chance that the samples were generated under $H_0$ are $5\%$. We call this number the test power $\alpha$ (in this case $\alpha=0.05$). It is an upper bound on the probability for a type I error. An alternative way is simply to compute the quantile of the test statistic in the null-distribution, the so-called p-value, and to compare the p-value against a desired test power, say $\alpha=0.05$, by hand. The advantage of the second method is that one not only gets a binary answer, but also an upper bound on the type I error.
In order to construct a two-sample test, the null-distribution of the test statistic has to be approximated. One way of doing this is called the permutation test, where samples from both sources are mixed and permuted repeatedly and the test statistic is computed for every of those configurations. While this method works for every statistical hypothesis test, it might be very costly because the test statistic has to be re-computed many times. Shogun comes with an extremely optimized implementation though. For completeness, Shogun also includes a number of more sohpisticated ways of approximating the null distribution.
Base class for Hypothesis Testing
Shogun implements statistical testing in the abstract class <a href="http://shogun.ml/CHypothesisTest">CHypothesisTest</a>. All implemented methods will work with this interface at their most basic level. We here focus on <a href="http://shogun.ml/CTwoSampleTest">CTwoSampleTest</a>. This class offers methods to
compute the implemented test statistic,
compute p-values for a given value of the test statistic,
compute a test threshold for a given p-value,
approximate the null distribution, e.g. perform the permutation test and
perform a full two-sample test, and either return a p-value or a binary rejection decision. This method is most useful in practice. Note that the available options depend on the used test statistic.
Kernel Two-Sample Testing with the Maximum Mean Discrepancy
$\DeclareMathOperator{\mmd}{MMD}$
An important class of hypothesis tests are the two-sample tests.
In two-sample testing, one tries to find out whether two sets of samples come from different distributions. Given two probability distributions $p,q$ on some arbitrary domains $\mathcal{X}, \mathcal{Y}$ respectively, and i.i.d. samples $X=\{x_i\}_{i=1}^m\subseteq \mathcal{X}\sim p$ and $Y=\{y_i\}_{i=1}^n\subseteq \mathcal{Y}\sim q$, the two-sample test distinguishes the hypotheses
\begin{align}
H_0: p=q\\
H_A: p\neq q
\end{align}
In order to solve this problem, it is desirable to have a criterion that takes a positive unique value if $p\neq q$, and zero if and only if $p=q$. The so-called Maximum Mean Discrepancy (MMD) has this property and allows us to distinguish any two probability distributions, if used in a reproducing kernel Hilbert space (RKHS). It is the distance of the mean embeddings $\mu_p, \mu_q$ of the distributions $p,q$ in such a RKHS $\mathcal{F}$ -- which can also be expressed in terms of expectations of kernel functions, i.e.
\begin{align}
\mmd[\mathcal{F},p,q]&=||\mu_p-\mu_q||_\mathcal{F}^2\\
&=\textbf{E}_{x,x'}\left[ k(x,x')\right]-
2\textbf{E}_{x,y}\left[ k(x,y)\right]
+\textbf{E}_{y,y'}\left[ k(y,y')\right]
\end{align}
Note that this formulation does not assume any form of the input data, we just need a kernel function whose feature space is a RKHS, see [2, Section 2] for details. This has the consequence that in Shogun, we can do tests on any type of data (<a href="http://shogun.ml/CDenseFeatures">CDenseFeatures</a>, <a href="http://shogun.ml/CSparseFeatures">CSparseFeatures</a>, <a href="http://shogun.ml/CStringFeatures">CStringFeatures</a>, etc), as long as we or you provide a positive definite kernel function under the interface of <a href="http://shogun.ml/CKernel">CKernel</a>.
We here only describe how to use the MMD for two-sample testing. Shogun offers two types of test statistic based on the MMD, one with quadratic costs both in time and space, and one with linear time and constant space costs. Both come in different versions and with different methods how to approximate the null-distribution in order to construct a two-sample test.
Running Example Data. Gaussian vs. Laplace
In order to illustrate kernel two-sample testing with Shogun, we use a couple of toy distributions. The first dataset we consider is the 1D Standard Gaussian
$p(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$
with mean $\mu$ and variance $\sigma^2$, which is compared against the 1D Laplace distribution
$p(x)=\frac{1}{2b}\exp\left(-\frac{|x-\mu|}{b}\right)$
with the same mean $\mu$ and variance $2b^2$. In order to increase difficulty, we set $b=\sqrt{\frac{1}{2}}$, which means that $2b^2=\sigma^2=1$.
End of explanation
"""
print "Gaussian vs. Laplace"
print "Sample means: %.2f vs %.2f" % (np.mean(X), np.mean(Y))
print "Samples variances: %.2f vs %.2f" % (np.var(X), np.var(Y))
"""
Explanation: Now how to compare these two sets of samples? Clearly, a t-test would be a bad idea since it basically compares the mean and variance of $X$ and $Y$. But we set those to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at all (infinitely many) higher order moments.
End of explanation
"""
# turn data into Shogun representation (columns vectors)
feat_p=sg.RealFeatures(X.reshape(1,len(X)))
feat_q=sg.RealFeatures(Y.reshape(1,len(Y)))
# choose kernel for testing. Here: Gaussian
kernel_width=1
kernel=sg.GaussianKernel(10, kernel_width)
# create mmd instance of test-statistic
mmd=sg.QuadraticTimeMMD()
mmd.set_kernel(kernel)
mmd.set_p(feat_p)
mmd.set_q(feat_q)
# compute biased and unbiased test statistic (default is unbiased)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
biased_statistic=mmd.compute_statistic()
mmd.set_statistic_type(sg.ST_UNBIASED_FULL)
statistic=unbiased_statistic=mmd.compute_statistic()
print "%d x MMD_b[X,Y]^2=%.2f" % (len(X), biased_statistic)
print "%d x MMD_u[X,Y]^2=%.2f" % (len(X), unbiased_statistic)
"""
Explanation: Quadratic Time MMD
We now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in <a href="http://shogun.ml/CQuadraticTimeMMD">CQuadraticTimeMMD</a>, which accepts any type of features in Shogun, and use it on the above toy problem.
An unbiased estimate for the MMD expression above can be obtained by estimating expected values with averaging over independent samples
$$
\mmd_u[\mathcal{F},X,Y]^2=\frac{1}{m(m-1)}\sum_{i=1}^m\sum_{j\neq i}^mk(x_i,x_j) + \frac{1}{n(n-1)}\sum_{i=1}^n\sum_{j\neq i}^nk(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^nk(x_i,y_j)
$$
A biased estimate would be
$$
\mmd_b[\mathcal{F},X,Y]^2=\frac{1}{m^2}\sum_{i=1}^m\sum_{j=1}^mk(x_i,x_j) + \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nk(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^nk(x_i,y_j)
.$$
Computing the test statistic using <a href="http://shogun.ml/CQuadraticTimeMMD">CQuadraticTimeMMD</a> does exactly this, where it is possible to choose between the two above expressions. Note that some methods for approximating the null-distribution only work with one of the two types. Both statistics' computational costs are quadratic in both time and space. Note that the method returns $m\mmd_b[\mathcal{F},X,Y]^2$ since the null distribution approximations work on $m$ times the squared MMD. Here is how the test statistic itself is computed.
End of explanation
"""
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(200)
# now show a couple of ways to compute the test
# compute p-value for computed test statistic
p_value=mmd.compute_p_value(statistic)
print "P-value of MMD value %.2f is %.2f" % (statistic, p_value)
# compute threshold for rejecting H_0 for a given test power
alpha=0.05
threshold=mmd.compute_threshold(alpha)
print "Threshold for rejecting H0 with a test power of %.2f is %.2f" % (alpha, threshold)
# performing the test by hand given the above results, note that those two are equivalent
if statistic>threshold:
print "H0 is rejected with confidence %.2f" % alpha
if p_value<alpha:
print "H0 is rejected with confidence %.2f" % alpha
# or, compute the full two-sample test directly
# fixed test power, binary decision
binary_test_result=mmd.perform_test(alpha)
if binary_test_result:
print "H0 is rejected with confidence %.2f" % alpha
"""
Explanation: Any sub-class of <a href="http://www.shogun.ml/CHypothesisTest">CHypothesisTest</a> can approximate the null distribution using permutation/bootstrapping. This approach is always guaranteed to produce consistent results; however, it might take a long time, as for each sample of the null distribution the test statistic has to be computed for a different permutation of the data. Shogun's implementation is highly optimized, exploiting low-level CPU caching and multiple available cores.
End of explanation
"""
num_samples=500
# sample null distribution
null_samples=mmd.sample_null()
# sample alternative distribution, generate new data for that
alt_samples=np.zeros(num_samples)
for i in range(num_samples):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
feat_p=sg.RealFeatures(np.reshape(X, (1,len(X))))
feat_q=sg.RealFeatures(np.reshape(Y, (1,len(Y))))
# TODO: reset pre-computed kernel here
mmd.set_p(feat_p)
mmd.set_q(feat_q)
alt_samples[i]=mmd.compute_statistic()
np.std(alt_samples)
"""
Explanation: Now let us visualise the distribution of the MMD statistic under $H_0:p=q$ and $H_A:p\neq q$. We sample both the null and the alternative distribution for that. Use the interface of <a href="http://www.shogun.ml/CHypothesisTest">CHypothesisTest</a> to sample from the null distribution (permutations and re-computation of the test statistic are done internally). For the alternative distribution, compute the test statistic for a new sample set of $X$ and $Y$ in a loop. Note that the latter is expensive, as the kernel cannot be precomputed and fresh data is needed for every sample. This is not needed in practice; we only do it here for illustrational purposes.
End of explanation
"""
def plot_alt_vs_null(alt_samples, null_samples, alpha):
plt.figure(figsize=(18,5))
plt.subplot(131)
plt.hist(null_samples, 50, color='blue')
plt.title('Null distribution')
plt.subplot(132)
plt.title('Alternative distribution')
plt.hist(alt_samples, 50, color='green')
plt.subplot(133)
plt.hist(null_samples, 50, color='blue')
plt.hist(alt_samples, 50, color='green', alpha=0.5)
plt.title('Null and alternative distriution')
# find (1-alpha) element of null distribution
null_samples_sorted=np.sort(null_samples)
quantile_idx=int(len(null_samples)*(1-alpha))
quantile=null_samples_sorted[quantile_idx]
plt.axvline(x=quantile, ymin=0, ymax=100, color='red', label=str(int(round((1-alpha)*100))) + '% quantile of null')
legend();
plot_alt_vs_null(alt_samples, null_samples, alpha)
"""
Explanation: Null and Alternative Distribution Illustrated
Visualise both distributions. $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\alpha)$-quantile of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type II errors (estimated numerically in the sketch after this list):
- the type I error is the area of the null distribution to the right of the threshold
- the type II error is the area of the alternative distribution to the left of the threshold
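As a small sketch, both error rates can be estimated empirically from the sampled distributions; null_samples, alt_samples and alpha are the variables computed in the cells above.
```python
# Empirical estimates of both error rates from the sampled distributions.
threshold_est = np.percentile(null_samples, 100 * (1 - alpha))
type_I_est = np.mean(null_samples > threshold_est)    # close to alpha by construction
type_II_est = np.mean(alt_samples < threshold_est)    # probability of missing a true difference
print("Estimated type I error: %.3f, type II error: %.3f" % (type_I_est, type_II_est))
```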
End of explanation
"""
# optional: plot spectrum of joint kernel matrix
# TODO: it would be good if there was a way to extract the joint kernel matrix for all kernel tests
# get joint feature object and compute kernel matrix and its spectrum
feats_p_q=mmd.get_p_and_q()
mmd.get_kernel().init(feats_p_q, feats_p_q)
K=mmd.get_kernel().get_kernel_matrix()
w,_=np.linalg.eig(K)
# visualise K and its spectrum (only up to threshold)
plt.figure(figsize=(18,5))
plt.subplot(121)
plt.imshow(K, interpolation="nearest")
plt.title("Kernel matrix K of joint data $X$ and $Y$")
plt.subplot(122)
thresh=0.1
plt.plot(w[:len(w[w>thresh])])
title("Eigenspectrum of K until component %d" % len(w[w>thresh]));
"""
Explanation: Different Ways to Approximate the Null Distribution for the Quadratic Time MMD
As already mentioned, permuting the data to access the null distribution is probably the method of choice, due to the efficient implementation in Shogun. There exist a couple of methods that are more sophisticated (and slower) and either allow very fast approximations without guarantees or reasonably fast approximations that are consistent. We present a selection from [2], which are implemented in Shogun.
The first one is a spectral method that is based on the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while still being a consistent test. Effectively, the null distribution of the biased statistic is sampled, but in a more efficient way than with the bootstrapping approach. The statistic converges as
$$
m\mmd^2_b \rightarrow \sum_{l=1}^\infty \lambda_l z_l^2
$$
where $z_l\sim \mathcal{N}(0,2)$ are i.i.d. normal samples and $\lambda_l$ are Eigenvalues of expression 2 in [2], which can be empirically estimated by $\hat\lambda_l=\frac{1}{m}\nu_l$ where $\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can be easily sampled. Shogun's implementation has two parameters:
- Number of samples from the null distribution: the more, the more accurate.
- Number of Eigenvalues of the Eigen-decomposition of the kernel matrix to use: the more, the better the results get. However, the Eigenspectrum of the joint gram matrix usually decreases very fast; plotting the spectrum can help. See [2] for details.
If the kernel matrices are diagonally dominant, this method is likely to fail. For that and more details, see the original paper. Computational costs are likely to be larger than for permutation testing, due to the efficient implementation of the latter: the Eigenvalues of the gram matrix cost $\mathcal{O}(m^3)$.
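For intuition, here is a rough NumPy sketch of the spectrum sampling idea, using a joint kernel matrix K such as the one computed above. The exact centring and scaling conventions follow [2] only approximately, and this is not the Shogun implementation.
```python
def sample_spectrum_null(K, num_eigen, num_null_samples):
    n = K.shape[0]                                    # size of the joint sample
    H = np.eye(n) - np.ones((n, n)) / n               # centring matrix
    Kc = H.dot(K).dot(H)                              # centred kernel matrix
    lam = np.sort(np.linalg.eigvalsh(Kc))[::-1][:num_eigen] / n   # eigenvalue estimates
    Z = np.random.randn(num_null_samples, num_eigen)              # standard normal draws
    return (2.0 * lam * Z ** 2).sum(axis=1)           # sum_l lambda_l z_l^2 with z_l ~ N(0,2)
```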
Below, we illustrate how to sample the null distribution and perform two-sample testing with the Spectrum approximation in the class <a href="https://shogun.ml/&QuadraticTimeMMD">CQuadraticTimeMMD</a>. This method only works with the biased statistic.
End of explanation
"""
# threshold for eigenspectrum
thresh=0.1
# compute number of eigenvalues to use
num_eigen=len(w[w>thresh])
# finally, do the test, use biased statistic
mmd.set_statistic_type(sg.ST_BIASED_FULL)
#tell Shogun to use spectrum approximation
mmd.set_null_approximation_method(sg.NAM_MMD2_SPECTRUM)
mmd.spectrum_set_num_eigenvalues(num_eigen)
mmd.set_num_null_samples(num_samples)
# the usual test interface
statistic=mmd.compute_statistic()
p_value_spectrum=mmd.compute_p_value(statistic)
print "Spectrum: P-value of MMD test is %.2f" % p_value_spectrum
# compare with ground truth from permutation test
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(num_samples)
p_value_permutation=mmd.compute_p_value(statistic)
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_permutation
"""
Explanation: The above plot of the Eigenspectrum shows that the Eigenvalues are decaying extremely fast. We choose the number for the approximation such that all Eigenvalues bigger than some threshold are used. In this case, we will not lose a lot of accuracy while gaining a significant speedup. For more slowly decaying Eigenspectrums, this approximation might be more expensive.
End of explanation
"""
# tell Shogun to use gamma approximation
mmd.set_null_approximation_method(sg.NAM_MMD2_GAMMA)
# the usual test interface
statistic=mmd.compute_statistic()
p_value_gamma=mmd.compute_p_value(statistic)
print "Gamma: P-value of MMD test is %.2f" % p_value_gamma
# compare with ground truth bootstrapping
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
p_value_spectrum=mmd.compute_p_value(statistic)
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_spectrum
"""
Explanation: The Gamma Moment Matching Approximation and Type I errors
$\DeclareMathOperator{\var}{var}$
Another method for approximating the null distribution is matching the first two moments of a <a href="http://en.wikipedia.org/wiki/Gamma_distribution">Gamma distribution</a> and then computing the quantiles of that. This does not result in a consistent test, but it usually gives good results while being very fast. However, there are distributions where the method fails, so the type I error should always be monitored. It is described in [2]. It uses
$$
m\mmd_b(Z) \sim \frac{x^{\alpha-1}\exp(-\frac{x}{\beta})}{\beta^\alpha \Gamma(\alpha)}
$$
where
$$
\alpha=\frac{(\textbf{E}(\text{MMD}_b(Z)))^2}{\var(\text{MMD}_b(Z))} \qquad \text{and} \qquad
\beta=\frac{m \var(\text{MMD}_b(Z))}{(\textbf{E}(\text{MMD}_b(Z)))^2}
$$
Then, any threshold and p-value can be computed using the gamma distribution in the above expression. Computational costs are in $\mathcal{O}(m^2)$. Note that the test is parameter free. It only works with the biased statistic.
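As a hedged sketch of the moment-matching idea (not the Shogun implementation, which estimates the moments analytically), one can match a Gamma distribution to a given null mean and variance with scipy and read off a p-value:
```python
from scipy.stats import gamma

def gamma_p_value(statistic, null_mean, null_var):
    # Match Gamma(shape a, scale s): mean = a*s, variance = a*s^2.
    a = null_mean ** 2 / null_var
    s = null_var / null_mean
    return gamma.sf(statistic, a, scale=s)   # upper-tail probability
```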
End of explanation
"""
# type I error is false alarm, therefore sample data under H0
num_trials=50
rejections_gamma=np.zeros(num_trials)
rejections_spectrum=np.zeros(num_trials)
rejections_bootstrap=np.zeros(num_trials)
num_samples=50
alpha=0.05
for i in range(num_trials):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
    # simulate H0 by merging the two samples and shuffling before splitting,
    # so that both halves come from the same (mixture) distribution
    Z=np.hstack((X,Y))
    Z=Z[np.random.permutation(len(Z))]
    X=Z[:len(X)]
    Y=Z[len(X):]
    feat_p=sg.RealFeatures(np.reshape(X, (1,len(X))))
    feat_q=sg.RealFeatures(np.reshape(Y, (1,len(Y))))
# gamma
mmd=sg.QuadraticTimeMMD(feat_p, feat_q)
mmd.set_kernel(kernel)
mmd.set_null_approximation_method(sg.NAM_MMD2_GAMMA)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
rejections_gamma[i]=mmd.perform_test(alpha)
# spectrum
mmd=sg.QuadraticTimeMMD(feat_p, feat_q)
mmd.set_kernel(kernel)
mmd.set_null_approximation_method(sg.NAM_MMD2_SPECTRUM)
mmd.spectrum_set_num_eigenvalues(num_eigen)
mmd.set_num_null_samples(num_samples)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
rejections_spectrum[i]=mmd.perform_test(alpha)
# bootstrap (precompute kernel)
mmd=sg.QuadraticTimeMMD(feat_p, feat_q)
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q)
precomputed_kernel=sg.CustomKernel(kernel)
mmd.set_kernel(precomputed_kernel)
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(num_samples)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
rejections_bootstrap[i]=mmd.perform_test(alpha)
convergence_gamma=np.cumsum(rejections_gamma)/(np.arange(num_trials)+1)
convergence_spectrum=np.cumsum(rejections_spectrum)/(np.arange(num_trials)+1)
convergence_bootstrap=np.cumsum(rejections_bootstrap)/(np.arange(num_trials)+1)
print "Average rejection rate of H0 for Gamma is %.2f" % np.mean(convergence_gamma)
print "Average rejection rate of H0 for Spectrum is %.2f" % np.mean(convergence_spectrum)
print "Average rejection rate of H0 for Bootstrapping is %.2f" % np.mean(rejections_bootstrap)
"""
Explanation: As we can see, the above example was somewhat unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similarly to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test, and average the results. Below we compare the type I errors of all methods for approximating the null distribution. This will take a while.
End of explanation
"""
# paramters of dataset
m=20000
distance=10
stretch=5
num_blobs=3
angle=np.pi/4
# these are streaming features
gen_p=sg.GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=sg.GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
# stream some data and plot
num_plot=1000
features=gen_p.get_streamed_features(num_plot)
features=features.create_merged_copy(gen_q.get_streamed_features(num_plot))
data=features.get_feature_matrix()
plt.figure(figsize=(18,5))
plt.subplot(121)
plt.grid(True)
plt.plot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')
plt.title('$X\sim p$')
plt.subplot(122)
plt.grid(True)
plt.plot(data[0][num_plot+1:2*num_plot], data[1][num_plot+1:2*num_plot], 'b.', label='$x$', alpha=0.5)
_=plt.title('$Y\sim q$')
"""
Explanation: We see that Gamma basically never rejects, which is in line with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors.
Linear Time MMD on Gaussian Blobs
So far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. The linear time MMD statistic, implemented in <a href="http://shogun.ml/CLinearTimeMMD">CLinearTimeMMD</a> can help here, as it accepts data under the streaming interface <a href="http://shogun.ml/CStreamingFeatures">CStreamingFeatures</a>, which deliver data one-by-one.
And it can do more cool things, for example choose the best single (or combined) kernel for you. But we need a fancier dataset to show its power. We will use one of Shogun's streaming-based data generators, <a href="http://shogun.ml/CGaussianBlobsDataGenerator">CGaussianBlobsDataGenerator</a>, for that. This dataset consists of two distributions, each a grid of Gaussians, where in one of them the Gaussians are stretched and rotated. This dataset is regarded as challenging for two-sample testing.
End of explanation
"""
block_size=100
# if features are already under the streaming interface, just pass them
mmd=sg.LinearTimeMMD(gen_p, gen_q)
mmd.set_kernel(kernel)
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_num_blocks_per_burst(block_size)
# compute an unbiased estimate in linear time
statistic=mmd.compute_statistic()
print "MMD_l[X,Y]^2=%.2f" % statistic
# note: due to the streaming nature, successive calls of compute statistic use different data
# and produce different results. Data cannot be stored in memory
for _ in range(5):
print "MMD_l[X,Y]^2=%.2f" % mmd.compute_statistic()
"""
Explanation: We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute
$$
\mmd_l^2[\mathcal{F},X,Y]=\frac{1}{m_2}\sum_{i=1}^{m_2} k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-
k(x_{2i+1},y_{2i})
$$
where $ m_2=\lfloor\frac{m}{2} \rfloor$. While the above expression assumes that $m$ data are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only pairs of four points are considered at once, this allows to compute it on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large scale two-sample tests. In theory, any number of samples can be processed -- time is the only limiting factor.
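As an illustrative plain-NumPy sketch of this estimator for one-dimensional data and a Gaussian kernel (the function and its inputs are my own; Shogun's CLinearTimeMMD additionally streams the data in blocks):
```python
import numpy as np

def linear_time_mmd(x, y, sigma):
    m2 = min(len(x), len(y)) // 2
    k = lambda a, b: np.exp(-(a - b) ** 2 / (2.0 * sigma ** 2))   # Gaussian kernel
    x0, x1 = x[0:2 * m2:2], x[1:2 * m2:2]   # x_{2i}, x_{2i+1}
    y0, y1 = y[0:2 * m2:2], y[1:2 * m2:2]   # y_{2i}, y_{2i+1}
    h = k(x0, x1) + k(y0, y1) - k(x0, y1) - k(x1, y0)   # one term per pair of pairs
    return h.mean(), h                      # the estimate and the individual h_i terms
```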
We begin by illustrating how to pass data to <a href="http://shogun.ml/CLinearTimeMMD">CLinearTimeMMD</a>. In order not to lose performance due to overhead, it is possible to specify a block size for the data stream.
End of explanation
"""
# data source
gen_p=sg.GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=sg.GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
num_samples=100
print "Number of data is %d" % num_samples
# retrieve some points, store them as non-streaming data in memory
data_p=gen_p.get_streamed_features(num_samples)
data_q=gen_q.get_streamed_features(num_samples)
# example to create mmd (note that num_samples can be maximum the number of data in memory)
mmd=sg.LinearTimeMMD(data_p, data_q)
mmd.set_kernel(sg.GaussianKernel(10, 1))
mmd.set_num_blocks_per_burst(100)
print "Linear time MMD statistic: %.2f" % mmd.compute_statistic()
"""
Explanation: Sometimes, one might want to use <a href="http://shogun.ml/CLinearTimeMMD">CLinearTimeMMD</a> with data that is stored in memory. In that case, in-memory features such as <a href="http://shogun.ml/CDenseFeatures">CDenseFeatures</a> can be used in place of <a href="http://shogun.ml/CStreamingDenseFeatures">CStreamingDenseFeatures</a>: below, features fetched from the stream and held in memory are passed to the constructor directly.
End of explanation
"""
mmd=sg.LinearTimeMMD(gen_p, gen_q)
mmd.set_kernel(kernel)
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_num_blocks_per_burst(block_size)
print "m=%d samples from p and q" % m
print "Binary test result is: " + ("Rejection" if mmd.perform_test(alpha) else "No rejection")
print "P-value test result is %.2f" % mmd.compute_p_value(mmd.compute_statistic())
"""
Explanation: The Gaussian Approximation to the Null Distribution
As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since <a href="http://shogun.ml/CLinearTimeMMD">CLinearTimeMMD</a> operates on streaming features, new data is taken from the stream in every iteration.
Bootstrapping is not really necessary since there exists a fast and consistent estimate of the null-distribution. However, to ensure that any approximation is accurate, it should always be checked against bootstrapping at least once.
Since both the null- and the alternative distribution of the linear time MMD are Gaussian with equal variance (and different mean), it is possible to approximate the null-distribution by using a linear time estimate for this variance. An unbiased, linear time estimator for
$$
\var[\mmd_l^2[\mathcal{F},X,Y]]
$$
can simply be computed by computing the empirical variance of
$$
k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i}) \qquad (1\leq i\leq m_2)
$$
A normal distribution with this variance and zero mean can then be used as an approximation for the null-distribution. This results in a consistent test and is very fast. However, note that it is an approximation and its accuracy depends on the underlying data distributions. It is a good idea to compare to the bootstrapping approach first to determine an appropriate number of samples to use. This number is usually in the tens of thousands.
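A rough sketch of this Gaussian approximation, given the per-pair terms $h_i$ (for example as returned by the hypothetical linear_time_mmd helper sketched earlier); the exact scaling convention here is an assumption on my part:
```python
import numpy as np
from scipy.stats import norm

def gaussian_null_p_value(h):
    # h contains the terms k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i})
    statistic = h.mean()
    null_std = np.sqrt(np.var(h, ddof=1) / len(h))   # std of the mean of the h_i under H0
    return norm.sf(statistic, loc=0.0, scale=null_std)
```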
<a href="http://shogun.ml/CLinearTimeMMD">CLinearTimeMMD</a> allows to approximate the null distribution in the same pass as computing the statistic itself (in linear time). This should always be used in practice since seperate calls of computing statistic and p-value will operator on different data from the stream. Below, we compute the test on a large amount of data (impossible to perform quadratic time MMD for this one as the kernel matrices cannot be stored in memory)
End of explanation
"""
# mmd instance using streaming features
mmd=sg.LinearTimeMMD(gen_p, gen_q)
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_num_blocks_per_burst(block_size)
sigmas=[2**x for x in np.linspace(-5, 5, 11)]
print "Choosing kernel width from", ["{0:.2f}".format(sigma) for sigma in sigmas]
for i in range(len(sigmas)):
mmd.add_kernel(sg.GaussianKernel(10, sigmas[i]))
# optimal kernel choice is possible for linear time MMD
mmd.set_kernel_selection_strategy(sg.KSM_MAXIMIZE_POWER)
# must be set true for kernel selection
mmd.set_train_test_mode(True)
# select best kernel
mmd.select_kernel()
best_kernel=mmd.get_kernel()
best_kernel=sg.GaussianKernel.obtain_from_generic(best_kernel)
print "Best single kernel has bandwidth %.2f" % best_kernel.get_width()
"""
Explanation: Kernel Selection for the MMD -- Overview
$\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}$
Now which kernel do we actually use for our tests? So far, we just plugged in arbritary ones. However, for kernel two-sample testing, it is possible to do something more clever.
Shogun's kernel selection methods for MMD-based two-sample tests are all based around [3, 4]. For the <a href="http://shogun.ml/CLinearTimeMMD">CLinearTimeMMD</a>, [3] describes a way of selecting the optimal kernel in the sense that the test's type II error is minimised. For the linear time MMD, this is the method of choice. It is done via maximising the MMD statistic divided by its standard deviation, and it is possible for single kernels and also for convex combinations of them. For the <a href="http://shogun.ml/CQuadraticTimeMMD">CQuadraticTimeMMD</a>, the best method in the literature is choosing the kernel that maximises the MMD statistic [4]. For convex combinations of kernels, this can be achieved via an $L2$ norm constraint. A detailed comparison of all methods on numerous datasets can be found in [5].
MMD kernel selection in Shogun always involves choosing one of the strategies of <a href="http://shogun.ml/CKernelSelectionStrategy">CKernelSelectionStrategy</a>. All methods compute their results for a fixed set of baseline kernels. We later give an example of how to use these classes, after providing a list of available methods.
KSM_MEDIAN_HEURISTIC: Selects from a set of <a href="http://shogun.ml/CGaussianKernel">CGaussianKernel</a> instances the one whose width parameter is closest to the median of the pairwise distances in the data. The median is computed on a certain number of points from each distribution that can be specified as a parameter. Since the median is a stable statistic, one does not have to compute all pairwise distances but rather just a few thousand. This method is a useful (and fast) heuristic that in many cases gives a good hint on where to start looking for Gaussian kernel widths. It is for example described in [1]. Note that it may fail badly in selecting a good kernel for certain problems. A small sketch of this heuristic (and of the ratio criterion below) follows this list.
KSM_MAXIMIZE_MMD: Selects from a set of arbitrary baseline kernels a single one that maximises the used MMD statistic -- more specific its estimate.
$$
k^*=\argmax_{k\in\mathcal{K}} \hat \eta_k,
$$
where $\eta_k$ is an empirical MMD estimate for using a kernel $k$.
This was first described in [4] and was empirically shown to perform better than the median heuristic above. However, it remains a heuristic that comes with no guarantees. Since MMD estimates can be computed in linear and quadratic time, this selection strategy works with both statistics. However, for the linear time statistic, there exists a better method.
KSM_MAXIMIZE_POWER: Selects the optimal single kernel from a set of baseline kernels. This is done via maximising the ratio of the linear MMD statistic and its standard deviation.
$$
k^*=\argmax_{k\in\mathcal{K}} \frac{\hat \eta_k}{\hat\sigma_k+\lambda},
$$
where $\eta_k$ is a linear time MMD estimate for using a kernel $k$ and $\hat\sigma_k$ is a linear time variance estimate of $\eta_k$ to which a small number $\lambda$ is added to prevent division by zero.
These are estimated in a linear time way with the streaming framework that was described earlier. Therefore, this method is only available for <a href="http://shogun.ml/CLinearTimeMMD">CLinearTimeMMD</a>. Optimal here means that the resulting test's type II error is minimised for a fixed type I error. *Important:* For this method to work, the kernel needs to be selected on *different* data than the test is performed on. Otherwise, the method will produce wrong results.
<a href="http://shogun.ml/CMMDKernelSelectionCombMaxL2">CMMDKernelSelectionCombMaxL2</a> Selects a convex combination of kernels that maximises the MMD statistic. This is the multiple kernel analogous to <a href="http://shogun.ml/CMMDKernelSelectionMax">CMMDKernelSelectionMax</a>. This is done via solving the convex program
$$
\boldsymbol{\beta}^*=\min_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\},
$$
where $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. See [3] for details. Note that this method is unable to select a single kernel -- even when this would be optimal.
Again, when using the linear time MMD, there are better methods available.
<a href="http://shogun.ml/CMMDKernelSelectionCombOpt">CMMDKernelSelectionCombOpt</a> Selects a convex combination of kernels that maximises the MMD statistic divided by its covariance. This corresponds to \emph{optimal} kernel selection in the same sense as in class <a href="http://shogun.ml/CMMDKernelSelectionOpt">CMMDKernelSelectionOpt</a> and is its multiple kernel analogous. The convex program to solve is
$$
\boldsymbol{\beta}^*=\min_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T(\hat Q+\lambda I)\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\},
$$
where again $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. The matrix $\hat Q$ is a linear time estimate of the covariance matrix of the vector $\boldsymbol{\eta}$ to whose diagonal a small number $\lambda$ is added to prevent division by zero. See [3] for details. In contrast to <a href="http://shogun.ml/CMMDKernelSelectionCombMaxL2">CMMDKernelSelectionCombMaxL2</a>, this method is able to select a single kernel when this gives a lower type II error than a combination. In this sense, it contains <a href="http://shogun.ml/CMMDKernelSelectionOpt">CMMDKernelSelectionOpt</a>.
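As promised above, here is a small NumPy sketch of two of these criteria -- the median heuristic and the power ratio used by KSM_MAXIMIZE_POWER -- for one-dimensional data. The function names and inputs are illustrative and do not correspond to Shogun's API.
```python
import numpy as np

def median_heuristic_width(X, Y, max_points=1000):
    # Median of pairwise distances of (a subset of) the joint sample, 1-D data.
    Z = np.concatenate([X, Y])[:max_points]
    D = np.abs(Z[:, None] - Z[None, :])
    return np.median(D[np.triu_indices_from(D, k=1)])

def power_criterion(eta_hat, sigma_hat, lam=1e-5):
    # Ratio criterion eta / (sigma + lambda); the kernel maximising it is selected.
    return eta_hat / (sigma_hat + lam)
```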
MMD Kernel Selection in Shogun
In order to use one of the above methods for kernel selection, one has to create a new instance of <a href="http://shogun.ml/CCombinedKernel">CCombinedKernel</a> and append all desired baseline kernels to it. This combined kernel is then passed to the MMD class. Then, an object of any of the above kernel selection methods is created and the MMD instance is passed to it in the constructor. There are then multiple methods to call:
compute_measures computes a vector of kernel selection criteria if a single-kernel selection method is used, and returns a vector of selected kernel weights if a combined-kernel selection method is used. For CMMDKernelSelectionMedian, the method throws an error.
select_kernel returns the selected kernel of the method. For single kernels this will be one of the baseline kernel instances. For the combined kernel case, this will be the underlying <a href="http://shogun.ml/CCombinedKernel">CCombinedKernel</a> instance where the subkernel weights are set to the weights that were selected by the method.
In order to utilise the selected kernel, it has to be passed to an MMD instance. We now give an example of how to select the optimal single and combined kernel for the Gaussian Blobs dataset.
What is the best kernel to use here? This is tricky since the distinguishing characteristics are hidden at a small length-scale. Create some kernels to select the best from
End of explanation
"""
mmd.set_null_approximation_method(sg.NAM_MMD1_GAUSSIAN);
p_value_best=mmd.compute_p_value(mmd.compute_statistic());
print "Bootstrapping: P-value of MMD test with optimal kernel is %.2f" % p_value_best
"""
Explanation: Now perform two-sample test with that kernel
End of explanation
"""
m=5000
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_train_test_mode(False)
num_samples=500
# sample null and alternative distribution, implicitly generate new data for that
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(num_samples)
null_samples=mmd.sample_null()
alt_samples=np.zeros(num_samples)
for i in range(num_samples):
alt_samples[i]=mmd.compute_statistic()
"""
Explanation: For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD plotted above. Let's sample them (this takes longer, so we reduce the number of samples a bit). Note how we can tell the linear time MMD to simulate the null hypothesis, which is necessary since we cannot permute by hand as the samples are not in memory.
End of explanation
"""
plot_alt_vs_null(alt_samples, null_samples, alpha)
"""
Explanation: And visualise again. Note that both null and alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection
End of explanation
"""
|
phaustin/pyman | Book/chap4/chap4_io.ipynb | cc0-1.0 | strname = input("prompt to user ")
"""
Explanation: Input and Output
A good relationship depends on good communication. In this chapter you
learn how to communicate with Python. Of course, communicating is a
two-way street: input and output. Generally, when you have Python
perform some task, you need to feed it information---input. When it is
done with that task, it reports back to you the results of its
calculations---output.
There are two venues for input that concern us: the computer keyboard
and the input data file. Similarly, there are two venues for output: the
computer screen and the output data file. We start with input from the
keyboard and output to the computer screen. Then we deal with data file
input and output---or "io."
Keyboard input
Many computer programs need input from the user. In chap2:ScriptExmp1,
the program needed the distance traveled as an input in order to
determine the duration of the trip and the cost of the gasoline. As you
might like to use this same script to determine the cost of several
different trips, it would be useful if the program requested that input
when it was run from the IPython shell.
Python has a function called raw_input (renamed input in Python 3)
for getting input from the user and assigning it a variable name. It has
the form
End of explanation
"""
distance = input("Input distance of trip in miles: ")
"""
Explanation: When the input statement is executed, it prints the text in the
quotes to the computer screen and waits for input from the user. The
user types a string of characters and presses the return key. The
input function then assigns that string to the variable name on
the right of the assignment operator =.
Let's try it out this snippet of code in the IPython shell.
End of explanation
"""
distance
"""
Explanation: Python prints out the string argument of the input function and
waits for a response from you. Let's go ahead and type 450 for "450
miles" and press return. Now type the variable name distance to see
its value
End of explanation
"""
distance = eval(distance)
distance
"""
Explanation: The value of the distance is 450 as expected, but it is a string
(the u stands for "unicode" which refers to the string coding system
Python uses). Because we want to use 450 as a number and not a
distance, we need to convert it from a string to a number. We can do
that with the eval function by writing
End of explanation
"""
distance = input("Input distance of trip in miles: ")
distance
distance = float(distance)
distance
"""
Explanation: The eval function has converted distance to an integer. This is fine
and we are ready to move on. However, we might prefer that distance be
a float instead of an integer. There are two ways to do this. We could
assume the user is very smart and will type "450." instead of "450",
which will cause distance to be a float when eval does the conversion.
That is, the number 450 is dynamically typed to be a float or an integer
depending on whether or not the user uses a decimal point.
Alternatively, we could use the function float in place of eval,
which would ensure that distance is a floating point variable. Thus,
our code would look like this (including the user response):
End of explanation
"""
# Calculates time, gallons of gas used, and cost of gasoline for a trip
distance = input("Input distance of trip in miles: ")
distance = float(distance)
mpg = 30. # car mileage
speed = 60. # average speed
costPerGallon = 4.10 # price of gas
time = distance/speed
gallons = distance/mpg
cost = gallons*costPerGallon
"""
Explanation: Now let's incorporate what we have learned into the code we wrote for
chap2:ScriptExmp1
End of explanation
"""
distance = float(input("Input distance of trip in miles: "))
"""
Explanation: Lines 4 and 5 can be combined into a single line, which is a little more
efficient:
End of explanation
"""
# Calculates time, gallons of gas used, and cost of gasoline for
# a trip
distance = float(input("Input distance of trip in miles: "))
mpg = 30. # car mileage
speed = 60. # average speed
costPerGallon = 4.10 # price of gas
time = distance/speed
gallons = distance/mpg
cost = gallons*costPerGallon
print("\nDuration of trip = {0:0.1f} hours".format(time))
print("Gasoline used = {0:0.1f} gallons (@ {1:0.0f} mpg)"
.format(gallons, mpg))
print("Cost of gasoline = ${0:0.2f} (@ ${1:0.2f}/gallon)"
.format(cost, costPerGallon))
"""
Explanation: Whether you use float or int or eval depends on whether you want a
float, an integer, or a dynamically typed variable. In this program, it
doesn't matter.
Now you can simply run the program and then type time, gallons, and
cost to view the results of the calculations done by the program.
Before moving on to output, we note that sometimes you may want string
input rather than numerical input. For example, you might want the user
to input their name, in which case you would simply use the input
function without converting its output.
Screen output
It would be much more convenient if the program in the previous section
would simply write its output to the computer screen, instead of
requiring the user to type time, gallons, and cost to view the
results. Fortunately, this can be accomplished very simply using
Python's print function. For example, simply including the statement
print(time, gallons, cost) after line 12, running the program would
give the following result:
``` ipython
In [1]: run myTripIO.py
What is the distance of your trip in miles? 450
(7.5, 15.0, 61.49999999999999)
```
The program prints out the results as a tuple of time (in hours),
gasoline used (in gallons), and cost (in dollars). Of course, the
program doesn't give the user a clue as to which quantity is which. The
user has to know.
Formatting output with str.format()
We can clean up the output of the example above and make it considerably
more user friendly. The program below demonstrates how to do this.
End of explanation
"""
run myTripNiceIO.py
"""
Explanation: The final two print function calls in this script are continued on a
second line in order to improve readability. Running this program, with
the distance provided by the user, gives
End of explanation
"""
string1 = "How"
string2 = "are you my friend?"
int1 = 34
int2 = 942885
float1 = -3.0
float2 = 3.141592653589793e-14
print(' ***')
print(string1)
print(string1 + ' ' + string2)
print(' 1. {} {}'.format(string1, string2))
print(' 2. {0:s} {1:s}'.format(string1, string2))
print(' 3. {0:s} {0:s} {1:s} - {0:s} {1:s}'
.format(string1, string2))
print(' 4. {0:10s}{1:5s}'
.format(string1, string2))
print(' ***')
print(int1, int2)
print(' 6. {0:d} {1:d}'.format(int1, int2))
print(' 7. {0:8d} {1:10d}'.format(int1, int2))
print(' ***')
print(' 8. {0:0.3f}'.format(float1))
print(' 9. {0:6.3f}'.format(float1))
print('10. {0:8.3f}'.format(float1))
print(2*'11. {0:8.3f}'.format(float1))
print(' ***')
print('12. {0:0.3e}'.format(float2))
print('13. {0:10.3e}'.format(float2))
print('14. {0:10.3f}'.format(float2))
print(' ***')
print('15. 12345678901234567890')
print('16. {0:s}--{1:8d},{2:10.3e}'
.format(string2, int1, float2))
"""
Explanation: Now the output is presented in a way that is immediately understandable
to the user. Moreover, the numerical output is formatted with an
appropriate number of digits to the right of the decimal point. For good
measure, we also included the assumed mileage (30 mpg) and the cost of
the gasoline. All of this is controlled by the str.format() function
within the print function.
The argument of the print function is of the form str.format() where
str is a string that contains text that is written to be the screen,
as well as certain format specifiers contained in curly braces {}. The
format function contains the list of variables that are to be printed.
- The \n at the start of the string in the print statement on line 12 is the newline character. It creates the blank line before the output is printed.
- The positions of the curly braces determine where the variables in the format function at the end of the statement are printed.
- The format string inside the curly braces specifies how each variable in the format function is printed.
- The number before the colon in the format string specifies which variable in the list in the format function is printed. Remember, Python is zero-indexed, so 0 means the first variable is printed, 1 means the second variable, etc.
- The zero after the colon specifies the minimum number of spaces reserved for printing out the variable in the format function. A zero means that only as many spaces as needed will be used.
- The number after the period specifies the number of digits to the right of the decimal point that will be printed: 1 for time and gallons and 2 for cost.
- The f specifies that the number is printed in fixed-point format, with a fixed number of digits to the right of the decimal point. If the f format specifier is replaced with e, then the number is printed out in exponential format (scientific notation).
In addition to f and e format types, there are two more that are
commonly used: d for integers (digits) and s for strings. There are,
in fact, many more formatting possibilities. Python has a whole "Format
Specification Mini-Language" that is documented at
http://docs.python.org/library/string.html#formatspec. It's very
flexible but arcane. You might find it simplest to look at the "Format
examples" section further down the same web page.
The program below illustrates most of the formatting you will need for
writing a few variables, be they strings, integers, or floats, to screen
or to data files (which we discuss in the next section).
End of explanation
"""
a = np.linspace(3, 19, 7)
print(a)
"""
Explanation: Successive empty brackets {} like those that appear in the statement
above print(' 1. {} {}'.format(string1, string2)) are numbered
consecutively starting at 0 and will print out whatever variables appear
inside the format() method using their default format.
Finally, note that the print statements starting on lines 14 and 16 are each split
across two lines. We have done this so that the lines fit on the page
without running off the edge. Python allows you to break lines up like
this to improve readability.
Printing arrays
Formatting NumPy arrays for printing requires another approach. As an
example, let's create an array and then format it in various ways. From
the IPython terminal
End of explanation
"""
np.set_printoptions(precision=2)
print(a)
"""
Explanation: Simply using the print function does print out the array, but perhaps
not in the format you desire. To control the output format, you use the
NumPy function set_printoptions. For example, suppose you want to see
no more than two digits to the right of the decimal point. Then you
simply write
End of explanation
"""
np.set_printoptions(precision=4)
print(a)
"""
Explanation: If you want to change the number of digits to the right of the decimal
point to 4, you set the keyword argument precision to 4
End of explanation
"""
np.set_printoptions(formatter={'float': lambda x: format(x, '6.2e')})
print(a)
"""
Explanation: Suppose you want to use scientific notation. The method for doing it is
somewhat arcane, using something called a lambda function. For now,
you don't need to understand how it works to use it. Just follow the
examples shown below, which illustrate several different output formats
using the print function with NumPy arrays.
End of explanation
"""
np.set_printoptions(formatter={'float': lambda x: format(x, '6.3f')})
print(a)
"""
Explanation: To specify the format of the output, you use the formatter keyword
argument. The first entry to the right of the curly bracket is a string
that can be 'float', as it is above, or 'int', or 'str', or a
number of other data types that you can look up in the online NumPy
documentation. The only other thing you should change is the format
specifier string. In the above example, it is '6.2e', specifying that
Python should allocate at least 6 spaces, with 2 digits to the right of
the decimal point in scientific (exponential) notation. For fixed width
floats with 3 digits to the right of the decimal point, use the f in
place of the e format specifier, as follows
End of explanation
"""
np.set_printoptions(precision=8)
print(a)
"""
Explanation: To return to the default format, type the following
End of explanation
"""
dataPt, time, height, error = np.loadtxt("MyData.txt", skiprows=5 , unpack=True)
"""
Explanation: The set_printoptions is a NumPy function, so if you use it in a script
or program, you should call it by writing np.set_printoptions.
File input
Reading data from a text file
Often you would like to analyze data that you have stored in a text
file. Consider, for example, the data file below for an experiment
measuring the free fall of a mass.
```
Data for falling mass experiment
Date: 16-Aug-2013
Data taken by Lauren and John
data point time (sec) height (mm) uncertainty (mm)
0 0.0 180 3.5
1 0.5 182 4.5
2 1.0 178 4.0
3 1.5 165 5.5
4 2.0 160 2.5
5 2.5 148 3.0
6 3.0 136 2.5
7 3.5 120 3.0
8 4.0 99 4.0
9 4.5 83 2.5
10 5.0 55 3.6
11 5.5 35 1.75
12 6.0 5 0.75
```
We would like to read these data into a Python program, associating the
data in each column with an appropriately named array. While there are a
multitude of ways to do this in Python, the simplest by far is to use
the NumPy loadtxt function, whose use we illustrate here. Suppose that
the name of the text file is MyData.txt. Then we can read the data
into four different arrays with the following statement
End of explanation
"""
time, height = np.loadtxt('MyData.txt', skiprows=5, usecols = (1,2), unpack=True)
"""
Explanation: In this case, the loadtxt function takes three arguments: the first is
a string that is the name of the file to be read, the second tells
loadtxt to skip the first 5 lines at the top of file, sometimes called
the header, and the third tells loadtxt to output the data (unpack
the data) so that it can be directly read into arrays. loadtxt reads
however many columns of data are present in the text file to the array
names listed to the left of the "=" sign. The names labeling the
columns in the text file are not used, but you are free to choose the
same or similar names, of course, as long as they are legal array names.
By the way, for the above loadtxt call to work, the file MyData.txt
should be in the current working directory of the IPython shell.
Otherwise, you need to specify the directory path with the file name.
It is critically important that the data file be a text file. It
cannot be a MSWord file, for example, or an Excel file, or anything
other than a plain text file. Such files can be created by text editor
programs like Notepad and Notepad++ (for a PC) or TextEdit and
TextWrangler (for a Mac). They can also be created by MSWord and Excel
provided you explicitly save the files as text files. Beware: You
should edit any text file you make and save it with a program that
allows you to save the text file using UNIX-type formatting, which
uses a line feed (LF) to end a line. Some programs, like MSWord under
Windows, may include a carriage return (CR) character, which can confuse
loadtxt. Note that we give the file name a .txt extension, which
indicates to most operating systems that this is a text file, as
opposed to an Excel file, for example, which might have a .xlsx or
.xls extension.
If you don't want to read in all the columns of data, you can specify
which columns to read in using the usecols key word. For example, the
call
End of explanation
"""
dataPt, time, height, error = np.loadtxt("MyData.csv", skiprows=5 , unpack=True, delimiter=',')
"""
Explanation: reads in only columns 1 and 2; columns 0 and 3 are skipped. As a
consequence, only two array names are included to the left of the "="
sign, corresponding to the two column that are read. Writing
usecols = (0,2,3) would skip column 1 and read in only the data in
colums 0, 2, and 3. In this case, 3 array names would need to be
provided on the left hand side of the "=" sign.
One convenient feature of the loadtxt function is that it recognizes
any white space as a column separator: spaces, tabs, etc.
Finally you should remember that loadtxt is a NumPy function. So if
you are using it in a Python module, you must be sure to include an
"import numpy as np" statement before calling "np.loadtxt".
Reading data from a CSV file
Sometimes you have data stored in a spreadsheet program like Excel that
you would like to read into a Python program. The fig-ExcelWindow
shown here contains the same data set we saw above in a text file.
<figure>
<img src="attachment:ExcelDataFile.png" class="align-center" alt="" /><figcaption>Excel data sheet</figcaption>
</figure>
While there are a number of different approaches one can use to reading
such files, one of the simplest of most robust is to save the
spreadsheet as a CSV ("comma separated value") file, a format which all
common spreadsheet programs can create and read. So, if your Excel
spreadsheet was called MyData.xlsx, the CSV file saved using Excel's
Save As command would by default be MyData.csv. It would look like
this
Data for falling mass experiment,,,
Date: 16-Aug-2013,,,
Data taken by Lauren and John,,,
,,,
data point,time (sec),height (mm),uncertainty (mm)
0,0,180,3.5
1,0.5,182,4.5
2,1,178,4
3,1.5,165,5.5
4,2,160,2.5
5,2.5,148,3
6,3,136,2.5
7,3.5,120,3
8,4,99,4
9,4.5,83,2.5
10,5,55,3.6
11,5.5,35,1.75
12,6,5,0.75
As its name suggests, the CSV file is simply a text file with the data
that was formerly in spreadsheet columns now separated by commas. We can
read the data in this file into a Python program using the loadtxt
NumPy function once again. Here is the code
End of explanation
"""
import numpy as np
dataPt, time, height, error = np.loadtxt("MyData.txt", skiprows=5, unpack=True)
np.savetxt('MyDataOut.txt', list(zip(dataPt, time, height, error)), fmt="%12.1f")
# Unlike in Python 2, `zip` in Python 3 returns an iterator. For the sake of
# this exercise, I have exhausted the iterator with `list` -- Loren.
"""
Explanation: The form of the function is exactly the same as before except we have
added the argument delimiter=',' that tells loadtxt that the columns
are separated by commas instead of white space (spaces or tabs), which
is the default. Once again, we set the skiprows argument to skip the
header at the beginning of the file and to start reading at the first
row of data. The data are output to the arrays to the right of the
assignment operator = exactly as in the previous example.
File output
Writing data to a text file
There is a plethora of ways to write data to a data file in Python. We
will stick to one very simple one that's suitable for writing data files
in text format. It uses the NumPy savetxt routine, which is the
counterpart of the loadtxt routine introduced in the previous section.
The general form of the routine is
np.savetxt(filename, array, fmt="%0.18e", delimiter=" ", newline="\n", header="", footer="", comments="# ")
We illustrate savetxt below with a script that first creates four
arrays by reading in the data file MyData.txt, as discussed in the
previous section, and then writes that same data set to another file
MyDataOut.txt.
End of explanation
"""
import numpy as np
dataPt, time, height, error = np.loadtxt("MyData.txt", skiprows=5, unpack=True)
info = 'Data for falling mass experiment'
info += '\nDate: 16-Aug-2013'
info += '\nData taken by Lauren and John'
info += '\n\n data point time (sec) height (mm) '
info += 'uncertainty (mm)'
np.savetxt('MyDataOut.txt', list(zip(dataPt, time, height, error)), header=info, fmt="%12.1f")
"""
Explanation: The first argument of of savetxt is a string, the name of the data
file to be created. Here we have chosen the name MyDataOut.txt,
inserted with quotes, which designates it as a string literal. Beware,
if there is already a file of that name on your computer, it will be
overwritten---the old file will be destroyed and a new one will be
created.
The second argument is the data array the is to be written to the data
file. Because we want to write not one but four data arrays to the file,
we have to package the four data arrays as one, which we do using the
zip function, a Python function that combines returns a list of
tuples, where the $i^\mathrm{th}$ tuple contains the $i^\mathrm{th}$
element from each of the arrays (or lists, or tuples) listed as its
arguments. Since there are four arrays, each row will be a tuple with
four entries, producing a table with four columns. Note that the first
two arguments, the filename and data array, are regular arguments and
thus must appear as the first and second arguments in the correct order.
The remaining arguments are all keyword arguments, meaning that they are
optional and can appear in any order, provided you use the keyword.
The next argument is a format string that determines how the elements of
the array are displayed in the data file. The argument is optional and,
if left out, is the format 0.18e, which displays numbers as 18 digit
floats in exponential (scientific) notation. Here we choose a different
format, 12.1f, which is a float displayed with 1 digit to the right of
the decimal point and a minimum width of 12. By choosing 12, which is
more digits than any of the numbers in the various arrays have, we
ensure that all the columns will have the same width. It also ensures
that the decimal points in column of numbers are aligned. This is
evident in the data file below, <span
class="title-ref">MyDataOut.txt</span>, which was produced by the above
script.
0.0 0.0 180.0 3.5
1.0 0.5 182.0 4.5
2.0 1.0 178.0 4.0
3.0 1.5 165.0 5.5
4.0 2.0 160.0 2.5
5.0 2.5 148.0 3.0
6.0 3.0 136.0 2.5
7.0 3.5 120.0 3.0
8.0 4.0 99.0 4.0
9.0 4.5 83.0 2.5
We omitted the optional delimiter keyword argument, which leaves the
delimiter as the default space.
We also omitted the optional header keyword argument, which is a
string variable that allows you to write header text above the data. For
example, you might want to label the data columns and also include the
information that was in the header of the original data file. To do so,
you just need to create a string with the information you want to
include and then use the header keyword argument. The code below
illustrates how to do this.
End of explanation
"""
np.savetxt('MyDataOut.csv', list(zip(dataPt, time, height, error)), fmt="%0.1f", delimiter=",")
"""
Explanation: Now the data file produces has a header preceding the data. Notice that
the header rows all start with a # comment character, which is the
default setting for the savetxt function. This can be changed using
the keyword argument comments. You can find more information about
savetxt using the IPython help function or from the online NumPy
documentation.
```
Data for falling mass experiment
Date: 16-Aug-2013
Data taken by Lauren and John
data point time (sec) height (mm) uncertainty (mm)
0.0 0.0 180.0 3.5
1.0 0.5 182.0 4.5
2.0 1.0 178.0 4.0
3.0 1.5 165.0 5.5
4.0 2.0 160.0 2.5
5.0 2.5 148.0 3.0
6.0 3.0 136.0 2.5
7.0 3.5 120.0 3.0
8.0 4.0 99.0 4.0
9.0 4.5 83.0 2.5
10.0 5.0 55.0 3.6
11.0 5.5 35.0 1.8
12.0 6.0 5.0 0.8
```
Writing data to a CSV file
To produce a CSV file, you would specify a comma as the delimiter. You
might use the 0.1f format specifier, which leaves no extra spaces
between the comma data separators, as the file is to be read by a
spreadsheet program, which will determine how the numbers are displayed.
The code, which could be substituted for the savetxt line in the above
code reads
End of explanation
"""
a = np.array([1, 3, 5, 7])
b = np.array([8, 7, 5, 4])
c = np.array([0, 9,-6,-8])
"""
Explanation: and produces the following data file
0.0,0.0,180.0,3.5
1.0,0.5,182.0,4.5
2.0,1.0,178.0,4.0
3.0,1.5,165.0,5.5
4.0,2.0,160.0,2.5
5.0,2.5,148.0,3.0
6.0,3.0,136.0,2.5
7.0,3.5,120.0,3.0
8.0,4.0,99.0,4.0
9.0,4.5,83.0,2.5
10.0,5.0,55.0,3.6
11.0,5.5,35.0,1.8
12.0,6.0,5.0,0.8
This data file, with a csv extension, can be directly read into a
spreadsheet program like Excel.
Exercises
Write a Python program that calculates how much money you can spend each day for lunch for the rest of the month based on today's date and how much money you currently have in your lunch account. The program should ask you: (1) how much money you have in your account, (2) what today's date is, and (3) how many days there are in month. The program should return your daily allowance. The results of running your program should look like this:
```
How much money (in dollars) in your lunch account? 118.39
What day of the month is today? 17
How many days in this month? 30
You can spend $8.46 each day for the rest of the month.
```
Extra: Create a dictionary (see chap3dictionaries) that stores the number of days in each month (forget about leap years) and have your program ask what month it is rather than the number of days in the month.
From the IPython terminal, create the following three NumPy arrays:
End of explanation
"""
d = zip(a, b, c)
"""
Explanation: Now use the zip function to create the object d defined as
End of explanation
"""
|
feffenberger/StatisticalMethods | examples/SDSScatalog/GalaxySizes.ipynb | gpl-2.0 | %load_ext autoreload
%autoreload 2
from __future__ import print_function
import numpy as np
import SDSS
import pandas as pd
import matplotlib
%matplotlib inline
galaxies = "SELECT top 1000 \
petroR50_i AS size, \
petroR50Err_i AS err \
FROM PhotoObjAll \
WHERE \
(type = '3' AND petroR50Err_i > 0)"
print (galaxies)
# Download data. This can take a few moments...
data = SDSS.select(galaxies)
data.head()
!mkdir -p downloads
data.to_csv("downloads/SDSSgalaxysizes.csv")
"""
Explanation: Illustrating Observed and Intrinsic Object Properties:
SDSS "Galaxy" Sizes
In a catalog, each galaxy's measurements come with "error bars" providing information about how uncertain we should be about each property of each galaxy.
This means that the distribution of "observed" galaxy properties (as reported in the catalog) is not the same as the underlying or "intrinsic" distribution.
Let's look at the distribution of observed sizes in the SDSS photometric object catalog.
End of explanation
"""
data = pd.read_csv("downloads/SDSSgalaxysizes.csv",usecols=["size","err"])
data['size'].hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7))
matplotlib.pyplot.xlabel('Size / arcsec',fontsize=16)
matplotlib.pyplot.title('SDSS Observed Size',fontsize=20)
"""
Explanation: The Distribution of Observed SDSS "Galaxy" Sizes
Let's look at a histogram of galaxy sizes, for 1000 objects classified as "galaxies".
End of explanation
"""
data.plot(kind='scatter', x='size', y='err',s=100,figsize=(12,7));
"""
Explanation: Things to notice:
No small objects (why not?)
A "tail" to large size
Some very large sizes that look a little odd
Are these large galaxies actually large, or have they just been measured that way?
Let's look at the reported uncertainties on these sizes:
End of explanation
"""
def generate_galaxies(mu=np.log10(1.5),S=0.3,N=1000):
return pd.DataFrame({'size' : 10.0**(mu + S*np.random.randn(N))})
mu = np.log10(1.5)
S = 0.05
intrinsic = generate_galaxies(mu=mu,S=S,N=1000)
intrinsic.hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7),color='green')
matplotlib.pyplot.xlabel('Size / arcsec',fontsize=16)
matplotlib.pyplot.title('Intrinsic Size',fontsize=20)
"""
Explanation: Generating Mock Data
Let's look at how distributions like this one can come about, by making a generative model for this dataset.
First, let's imagine a set of perfectly measured galaxies. They won't all have the same size, because the Universe isn't like that. Let's suppose the logarithm of their intrinsic sizes are drawn from a Gaussian distribution of width $S$ and mean $\mu$.
To model one mock galaxy, we draw a sample from this distribution. To model the whole dataset, we draw 1000 samples.
Note that this is a similar activity to making random catalogs for use in correlation function summaries; here, though, we want to start comparing real data with mock data to begin understanding it.
End of explanation
"""
def make_noise(sigma=0.3,N=1000):
return pd.DataFrame({'size' : sigma*np.random.randn(N)})
sigma = 0.3
errors = make_noise(sigma=sigma,N=1000)
observed = intrinsic + errors
observed.hist(bins=np.linspace(0.0,5.0,100),figsize=(12,7),color='red')
matplotlib.pyplot.xlabel('Size / arcsec',fontsize=16)
matplotlib.pyplot.title('Observed Size',fontsize=20)
both = pd.DataFrame({'SDSS': data['size'], 'Model': observed['size']}, columns=['SDSS', 'Model'])
both.hist(alpha=0.5,bins=np.linspace(0.0,5.0,100),figsize=(12,7))
"""
Explanation: Now let's add some observational uncertainty. We can model this by drawing random Gaussian offsets $\epsilon$ and adding one to each intrinsic size.
End of explanation
"""
V_data = np.var(data['size'])
print ("Variance of the SDSS distribution = ",V_data)
V_int = np.var(intrinsic['size'])
V_noise = np.var(errors['size'])
V_obs = np.var(observed['size'])
print ("Variance of the intrinsic distribution = ", V_int)
print ("Variance of the noise = ", V_noise)
print ("Variance of the observed distribution = ", V_int + V_noise, \
"cf", V_obs)
"""
Explanation: Q: How did we do? Is this a good model for our data?
Play around with the parameters $\mu$, $S$ and $\sigma$ and see if you can get a better match to the observed distribution of sizes.
<br>
One last thing: let's look at the variances of these distributions.
Recall:
$V(x) = \frac{1}{N} \sum_{i=1}^N (x_i - \nu)^2$
If $\nu$, the population mean of $x$, is not known, an estimator for $V$ is
$\hat{V}(x) = \frac{1}{N} \sum_{i=1}^N (x_i - \bar{x})^2$
where $\bar{x} = \frac{1}{N} \sum_{i=1}^N x_i$, the sample mean.
End of explanation
"""
from IPython.display import Image
Image(filename="samplingdistributions.png",width=300)
"""
Explanation: You may recall this last result from previous statistics courses.
Why is the variance of our mock dataset's galaxy sizes so much smaller than that of the SDSS sample?
Sampling Distributions
In the above example we drew 1000 samples from two probability distributions:
The intrinsic size distribution, ${\rm Pr}(R_{\rm true}|\mu,S)$
The "error" distribution, ${\rm Pr}(R_{\rm obs}|R_{\rm true},\sigma)$
The procedure of drawing numbers from the first, and then adding numbers from the second, produced mock data - which then appeared to have been drawn from:
${\rm Pr}(R_{\rm obs}|\mu,S,\sigma)$
which is broader than either the intrinsic distribution or the error distribution.
Q: What would we do differently if we wanted to simulate 1 Galaxy?
The three distributions are related by an integral:
${\rm Pr}(R_{\rm obs}|\mu,S,\sigma) = \int {\rm Pr}(R_{\rm obs}|R_{\rm true},\sigma) \; {\rm Pr}(R_{\rm true}|\mu,S) \; dR_{\rm true}$
Note that this is not a convolution, in general - but it's similar to one.
When we only plot the 1D histogram of observed sizes, we are summing over or "marginalizing out" the intrinsic ones.
Probabilistic Graphical Models
We can draw a diagram representing the above combination of probability distributions, that:
Shows the dependencies between variables
Gives you a recipe for generating mock data
We can do this in python, using the daft package.:
End of explanation
"""
|
UltronAI/Deep-Learning | CS231n/assignment2/Dropout.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
"""
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
"""
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
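If you get stuck, the snippet below is a rough sketch of the standard "inverted dropout" formulation, treating `p` as the keep probability (check the docstring in `cs231n/layers.py` for the exact convention your version of the assignment uses):
```python
def dropout_forward_sketch(x, p, mode):
    # Inverted dropout: drop units and rescale at train time, so the
    # test-time pass is just the identity (no extra scaling needed).
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p  # keep each unit with prob p
        out = x * mask
    else:
        mask = None
        out = x
    return out, mask
```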
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
"""
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
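For reference, the backward pass is very simple once the forward pass caches the mask — a sketch, again assuming the inverted-dropout formulation:
```python
def dropout_backward_sketch(dout, mask, mode):
    # Gradient flows only through the units that were kept, scaled by the
    # same mask used in the forward pass; at test time dropout is the identity.
    return dout * mask if mode == 'train' else dout
```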
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
"""
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
"""
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/08_image/labs/flowers_fromscratch.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
import pathlib
from PIL import Image
import IPython.display as display
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
import tensorflow_hub as hub
"""
Explanation: Flowers Image Classification with TensorFlow
This notebook demonstrates how to do image classification from scratch on a flowers dataset.
Learning Objectives
Know how to apply image augmentation
Know how to download and use a TensorFlow Hub module as a layer in Keras.
End of explanation
"""
data_dir = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
# Print data path
print("cd", data_dir)
"""
Explanation: Exploring the data
As usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'.
The below tf.keras.utils.get_file command downloads a dataset to the local Keras cache. To see the files through a terminal, copy the output of the cell below.
End of explanation
"""
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))
print("There are", image_count, "images.")
CLASS_NAMES = np.array(
[item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"])
print("These are the available classes:", CLASS_NAMES)
"""
Explanation: We can use python's built in pathlib tool to get a sense of this unstructured data.
End of explanation
"""
roses = list(data_dir.glob('roses/*'))
for image_path in roses[:3]:
display.display(Image.open(str(image_path)))
"""
Explanation: Let's display the images so we can see what our model will be trying to learn.
End of explanation
"""
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv \
| head -5 > /tmp/input.csv
!cat /tmp/input.csv
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | \
sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
!cat /tmp/labels.txt
"""
Explanation: Building the dataset
Keras has some convenient methods to read in image data. For instance tf.keras.preprocessing.image.ImageDataGenerator is great for small local datasets. A tutorial on how to use it can be found here, but what if we have so many images that they don't fit on a local machine? We can use tf.data.Dataset to build a generator based on files in a Google Cloud Storage bucket.
We have already prepared these images to be stored on the cloud in gs://cloud-ml-data/img/flower_photos/. The images are randomly split into a training set containing 90% of the data and an evaluation set containing the remaining 10%, listed in CSV files:
Training set: train_set.csv
Evaluation set: eval_set.csv
Explore the format and contents of train_set.csv by running:
End of explanation
"""
IMG_HEIGHT = 224
IMG_WIDTH = 224
IMG_CHANNELS = 3
BATCH_SIZE = 32
# 10 is a magic number tuned for local training of this dataset.
SHUFFLE_BUFFER = 10 * BATCH_SIZE
AUTOTUNE = tf.data.experimental.AUTOTUNE
VALIDATION_IMAGES = 370
VALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE
def decode_img(img, reshape_dims):
# Convert the compressed string to a 3D uint8 tensor.
img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
img = tf.image.convert_image_dtype(img, tf.float32)
# Resize the image to the desired size.
return tf.image.resize(img, reshape_dims)
"""
Explanation: Let's figure out how to read one of these images from the cloud. TensorFlow's tf.io.read_file can help us read the file contents, but the result will be an encoded image byte string. Hmm... not very readable for humans or TensorFlow.
Thankfully, TensorFlow's tf.image.decode_jpeg function can decode this string into an integer array, and tf.image.convert_image_dtype can cast it into a 0 - 1 range float. Finally, we'll use tf.image.resize to force image dimensions to be consistent for our neural network.
We'll wrap these into a function as we'll be calling these repeatedly. While we're at it, let's also define our constants for our neural network.
End of explanation
"""
img = tf.io.read_file(
"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg")
# Uncomment to see the image string.
#print(img)
# TODO: decode image and plot it
"""
Explanation: Is it working? Let's see!
TODO 1.a: Run the decode_img function and plot it to see a happy looking daisy.
End of explanation
"""
def decode_csv(csv_row):
record_defaults = ["path", "flower"]
filename, label_string = tf.io.decode_csv(csv_row, record_defaults)
image_bytes = tf.io.read_file(filename=filename)
label = tf.math.equal(CLASS_NAMES, label_string)
return image_bytes, label
"""
Explanation: One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. tf.io.decode_csv reads in csv rows (or each line in a csv file), while tf.math.equal will help us format our label such that it's a boolean array with a truth value corresponding to the class in CLASS_NAMES, much like the labels for the MNIST Lab.
End of explanation
"""
MAX_DELTA = 63.0 / 255.0 # Change brightness by at most ~24.7% (63/255)
CONTRAST_LOWER = 0.2
CONTRAST_UPPER = 1.8
def read_and_preprocess(image_bytes, label, random_augment=False):
if random_augment:
img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10])
# TODO: augment the image.
else:
img = decode_img(image_bytes, [IMG_WIDTH, IMG_HEIGHT])
return img, label
def read_and_preprocess_with_augment(image_bytes, label):
return read_and_preprocess(image_bytes, label, random_augment=True)
"""
Explanation: Next, we'll transform the images to give our network more variety to train on. There are a number of image manipulation functions. We'll cover just a few:
tf.image.random_crop - Randomly deletes the top/bottom rows and left/right columns down to the dimensions specified.
tf.image.random_flip_left_right - Randomly flips the image horizontally
tf.image.random_brightness - Randomly adjusts how dark or light the image is.
tf.image.random_contrast - Randomly adjusts image contrast.
TODO 1.b: Add the missing parameters from the random augment functions.
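A possible augmentation chain for the TODO (a sketch; the exact parameter choices are up to you):
```python
# Inside the random_augment branch: crop back down to the target size,
# then randomly flip and jitter brightness/contrast.
img = tf.image.random_crop(img, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
img = tf.image.random_flip_left_right(img)
img = tf.image.random_brightness(img, MAX_DELTA)
img = tf.image.random_contrast(img, CONTRAST_LOWER, CONTRAST_UPPER)
```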
End of explanation
"""
def load_dataset(csv_of_filenames, batch_size, training=True):
dataset = tf.data.TextLineDataset(filenames=csv_of_filenames) \
.map(decode_csv).cache()
if training:
dataset = dataset \
.map(read_and_preprocess_with_augment) \
.shuffle(SHUFFLE_BUFFER) \
.repeat(count=None) # Indefinitely.
else:
dataset = dataset \
.map(read_and_preprocess) \
.repeat(count=1) # Each photo used once.
# Prefetch prepares the next set of batches while current batch is in use.
return dataset.batch(batch_size=batch_size).prefetch(buffer_size=AUTOTUNE)
"""
Explanation: Finally, we'll make a function to craft our full dataset using tf.data.Dataset. The tf.data.TextLineDataset will read each line of our train/eval CSV files and pass it to our decode_csv function.
.cache is key here: it stores the decoded examples in memory after the first pass, so later epochs can reuse them without re-reading the CSV and image files.
End of explanation
"""
train_path = "gs://cloud-ml-data/img/flower_photos/train_set.csv"
train_data = load_dataset(train_path, 1)
itr = iter(train_data)
"""
Explanation: We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.
End of explanation
"""
image_batch, label_batch = next(itr)
img = image_batch[0]
plt.imshow(img)
print(label_batch[0])
"""
Explanation: Run the below cell repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on?
End of explanation
"""
|
alexandrnikitin/algorithm-sandbox | courses/DAT256x/Module02/02 - 02 - Limits.ipynb | mit | %matplotlib inline
# Here's the function
def f(x):
return x**2 + x
from matplotlib import pyplot as plt
# Create an array of x values from 0 to 10 to plot
x = list(range(0, 11))
# Get the corresponding y values from the function
y = [f(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
# Plot the function
plt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')
plt.show()
"""
Explanation: Limits
You can use algebraic methods to calculate the rate of change over a function interval by joining two points on the function with a secant line and measuring its slope. For example, a function might return the distance travelled by a cyclist in a period of time, and you can use a secant line to measure the average velocity between two points in time. However, this doesn't tell you the cyclist's velocity at any single point in time - just the average speed over an interval.
To find the cyclist's velocity at a specific point in time, you need the ability to find the slope of a curve at a given point. Differential Calculus enables us to do this through the use of derivatives. We can use derivatives to find the slope at a specific x value by calculating a delta for x<sub>1</sub> and x<sub>2</sub> values that are infinitesimally close together - so you can think of it as measuring the slope of a tiny straight line that comprises part of the curve.
Introduction to Limits
However, before we can jump straight into derivatives, we need to examine another aspect of differential calculus - the limit of a function; which helps us measure how a function's value changes as the x<sub>2</sub> value approaches x<sub>1</sub>
To better understand limits, let's take a closer look at our function, and note that although we graph the function as a line, it is in fact made up of individual points. Run the following cell to show the points that we've plotted for integer values of x - the line is created by interpolating the points in between:
End of explanation
"""
%matplotlib inline
# Here's the function
def f(x):
return x**2 + x
from matplotlib import pyplot as plt
# Create an array of x values from 0 to 10 to plot
x = list(range(0,5))
x.append(4.25)
x.append(4.5)
x.append(4.75)
x.append(5)
x.append(5.25)
x.append(5.5)
x.append(5.75)
x = x + list(range(6,11))
# Get the corresponding y values from the function
y = [f(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
# Plot the function
plt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')
plt.show()
"""
Explanation: We know from the function that the f(x) values are calculated by squaring the x value and adding x, so we can easily calculate points in between and show them - run the following code to see this:
End of explanation
"""
%matplotlib inline
# Here's the function
def f(x):
return x**2 + x
from matplotlib import pyplot as plt
# Create an array of x values from 0 to 10 to plot
x = list(range(0,5))
x.append(4.25)
x.append(4.5)
x.append(4.75)
x.append(5)
x.append(5.25)
x.append(5.5)
x.append(5.75)
x = x + list(range(6,11))
# Get the corresponding y values from the function
y = [f(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
# Plot the function
plt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')
zx = 5
zy = f(zx)
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate('x=' + str(zx),(zx, zy), xytext=(zx - 0.5, zy + 5))
# Plot f(x) when x = 5.1
posx = 5.25
posy = f(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate('x=' + str(posx),(posx, posy), xytext=(posx + 0.5, posy - 1))
# Plot f(x) when x = 4.9
negx = 4.75
negy = f(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate('x=' + str(negx),(negx, negy), xytext=(negx - 1.5, negy - 1))
plt.show()
"""
Explanation: Now we can see more clearly that this function line is formed of a continuous series of points, so theoretically for any given value of x there is a point on the line, and there is an adjacent point on either side with a value that is as close to x as possible, but not actually x.
Run the following code to visualize a specific point for x = 5, and try to identify the closest point either side of it:
End of explanation
"""
%matplotlib inline
# Define function g
def g(x):
if x != 0:
return -(12/(2*x))**2
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-20, 21)
# Get the corresponding y values from the function
y = [g(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='green')
plt.show()
"""
Explanation: You can see the point where x is 5, and you can see that there are points shown on the graph that appear to be right next to this point (at x=4.75 and x=5.25). However, if we zoomed in we'd see that there are still gaps that could be filled by other values of x that are even closer to 5; for example, 4.9 and 5.1, or 4.999 and 5.001. If we could zoom infinitely close to the line we'd see that no matter how close a value you use (for example, 4.999999999999), there is always a value that's fractionally closer (for example, 4.9999999999999).
So what we can say is that there is a hypothetical number that's as close as possible to our desired value of x without actually being x, but we can't express it as a real number. Instead, we express its symbolically as a limit, like this:
\begin{equation}\lim_{x \to 5} f(x)\end{equation}
This is interpreted as the limit of function f(x) as x approaches 5.
Limits and Continuity
The function f(x) is continuous for all real numbered values of x. Put simply, this means that you can draw the line created by the function without lifting your pen (we'll look at a more formal definition later in this course).
However, this isn't necessarily true of all functions. Consider function g(x) below:
\begin{equation}g(x) = -(\frac{12}{2x})^{2}\end{equation}
This function is a little more complex than the previous one, but the key thing to note is that it requires a division by 2x. Now, ask yourself; what would happen if you applied this function to an x value of 0?
Well, 2 x 0 is 0, and anything divided by 0 is undefined. So the domain of this function does not include 0; in other words, the function is defined when x is any real number such that x is not equal to 0. The function should therefore be written like this:
\begin{equation}g(x) = -(\frac{12}{2x})^{2},\;\; x \ne 0\end{equation}
So why is this important? Let's investigate by running the following Python code to define the function and plot it for a set of arbitrary of values:
End of explanation
"""
%matplotlib inline
# Define function g
def g(x):
if x != 0:
return -(12/(2*x))**2
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-20, 21)
# Get the corresponding y values from the function
y = [g(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='green')
# plot a circle at the gap (or close enough anyway!)
xy = (0,g(1))
plt.annotate('O',xy, xytext=(-0.7, -37),fontsize=14,color='green')
plt.show()
"""
Explanation: Look closely at the plot, and note the gap in the line where x = 0. This indicates that the function is not defined here. The domain of the function (its set of possible input values) does not include 0, and its range (the set of possible output values) does not include a value for x=0.
This is a non-continuous function - in other words, it includes at least one gap when plotted (so you couldn't plot it by hand without lifting your pen). Specifically, the function is non-continuous at x=0.
By convention, when a non-continuous function is plotted, the points that form a continuous line (or interval) are shown as a line, and the end of each line where there is a discontinuity is shown as a circle, which is filled if the value at that point is included in the line and empty if the value is not included in the line.
In this case, the function produces two intervals with a gap between them where the function is not defined, so we can show the discontinuous point as an unfilled circle - run the following code to visualize this with Python:
End of explanation
"""
%matplotlib inline
def h(x):
if x >= 0:
import numpy as np
return 2 * np.sqrt(x)
# Plot output from function h
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-20, 21)
# Get the corresponding y values from the function
y = [h(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
# Plot x against h(x)
plt.plot(x,y, color='green')
# plot a circle close enough to the h(-x) limit for our purposes!
plt.plot(0, h(0), color='green', marker='o', markerfacecolor='green', markersize=10)
plt.show()
"""
Explanation: There are a number of reasons a function might be non-continuous. For example, consider the following function:
\begin{equation}h(x) = 2\sqrt{x},\;\; x \ge 0\end{equation}
Applying this function to a non-negative x value returns a valid output; but for any value where x is negative, the output is undefined, because the square root of a negative value is not a real number.
Here's the Python to plot function h:
End of explanation
"""
%matplotlib inline
def k(x):
import numpy as np
if x <= 0:
return x + 20
else:
return x - 100
# Plot output from function k
from matplotlib import pyplot as plt
# Create an array of x values for each non-contonuous interval
x1 = range(-20, 1)
x2 = range(1, 20)
# Get the corresponding y values from the function
y1 = [k(i) for i in x1]
y2 = [k(i) for i in x2]
# Set up the graph
plt.xlabel('x')
plt.ylabel('k(x)')
plt.grid()
# Plot x against k(x)
plt.plot(x1,y1, color='green')
plt.plot(x2,y2, color='green')
# plot a circle at the interval ends
plt.plot(0, k(0), color='green', marker='o', markerfacecolor='green', markersize=10)
plt.plot(0, k(0.0001), color='green', marker='o', markerfacecolor='w', markersize=10)
plt.show()
"""
Explanation: Now, suppose we have a function like this:
\begin{equation}
k(x) = \begin{cases}
x + 20, & \text{if } x \le 0, \\
x - 100, & \text{otherwise }
\end{cases}
\end{equation}
In this case, the function's domain includes all real numbers, but its output is still non-continuous because of the way different values are returned depending on the value of x. The range of possible outputs for k(x ≤ 0) is ≤ 20, and the range of output values for k(x > 0) is k(x) > -100.
Let's use Python to plot function k:
End of explanation
"""
%matplotlib inline
# Define function a
def a(x):
return x**2 + 1
# Plot output from function a
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [a(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('a(x)')
plt.grid()
# Plot x against a(x)
plt.plot(x,y, color='purple')
plt.show()
"""
Explanation: Finding Limits of Functions Graphically
So the question arises, how do we find a value for the limit of a function at a specific point?
Let's explore this function, a:
\begin{equation}a(x) = x^{2} + 1\end{equation}
We can start by plotting it:
End of explanation
"""
%matplotlib inline
# Define function a
def a(x):
return x**2 + 1
# Plot output from function a
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [a(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('a(x)')
plt.grid()
# Plot x against a(x)
plt.plot(x,y, color='purple')
# Plot a(x) when x = 0
zx = 0
zy = a(zx)
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx, zy + 5))
plt.show()
"""
Explanation: Note that this function is continuous at all points, there are no gaps in its range. However, the range of the function is {a(x) ≥ 1} (in other words, all real numbers that are greater than or equal to 1). For negative values of x, the function appears to return ever-decreasing values as x gets closer to 0, and for positive values of x, the function appears to return ever-increasing values as x gets further from 0; but it never returns 0.
Let's plot the function for an x value of 0 and find out what the a(0) value is returned:
End of explanation
"""
%matplotlib inline
# Define function a
def a(x):
return x**2 + 1
# Plot output from function a
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [a(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('a(x)')
plt.grid()
# Plot x against a(x)
plt.plot(x,y, color='purple')
# Plot a(x) when x = 0.1
posx = 0.1
posy = a(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))
# Plot a(x) when x = -0.1
negx = -0.1
negy = a(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate(str(negy),(negx, negy), xytext=(negx - 2, negy))
plt.show()
"""
Explanation: OK, so a(0) returns 1.
What happens if we use x values that are very slightly higher or lower than 0?
End of explanation
"""
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function b
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
plt.show()
"""
Explanation: These x values return a(x) values that are just slightly above 1, and if we were to keep plotting numbers that are increasingly close to 0, for example 0.0000000001 or -0.0000000001, the function would still return a value that is just slightly greater than 1. The limit of function a(x) as x approaches 0, is 1; and the notation to indicate this is:
\begin{equation}\lim_{x \to 0} a(x) = 1 \end{equation}
This reflects a more formal definition of function continuity. Previously, we stated that a function is continuous at a point if you can draw it at that point without lifting your pen. The more mathematical definition is that a function is continuous at a point if the limit of the function as it approaches that point from both directions is equal to the function's value at that point. In this case, as we approach x = 0 from both sides, the limit is 1; and the value of a(0) is also 1; so the function is continuous at x = 0.
Limits at Non-Continuous Points
Let's try another function, which we'll call b:
\begin{equation}b(x) = -2x^{2} \cdot \frac{1}{x},\;\;x\ne0\end{equation}
Note that this function has a domain that includes all real number values of x such that x does not equal 0. In other words, the function will return a valid output for any number other than 0.
Let's create it and plot it with Python:
End of explanation
"""
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function b
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = -0.1
negx = -0.1
negy = b(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate(str(negy),(negx, negy), xytext=(negx + 1, negy))
plt.show()
"""
Explanation: The output from this function contains a gap in the line where x = 0. It seems that not only does the domain of the function (the values that can be passed in as x) exclude 0; but the range of the function (the set of values that can be returned from it) also excludes 0.
We can't evaluate the function for an x value of 0, but we can see what it returns for a value that is just very slightly less than 0:
End of explanation
"""
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function b
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = -0.0001
negx = -0.0001
negy = b(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate(str(negy),(negx, negy), xytext=(negx + 1, negy))
plt.show()
"""
Explanation: We can even try a negative x value that's a little closer to 0.
End of explanation
"""
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = 0.1
posx = 0.1
posy = b(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))
plt.show()
"""
Explanation: So as the value of x gets closer to 0 from the left (negative), the value of b(x) is decreasing towards 0. We can show this with the following notation:
\begin{equation}\lim_{x \to 0^{-}} b(x) = 0 \end{equation}
Note that the arrow points to 0<sup>-</sup> (with a minus sign) to indicate that we're describing the limit as we approach 0 from the negative side.
So what about the positive side?
Let's see what the function value is when x is 0.1:
End of explanation
"""
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function b
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = 0.0001
posx = 0.0001
posy = b(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))
plt.show()
"""
Explanation: What happens if we decrease the value of x so that it's even closer to 0?
End of explanation
"""
%matplotlib inline
def c(x):
import numpy as np
if x <= 5:
return x + 20
else:
return x - 100
# Plot output from function c
from matplotlib import pyplot as plt
# Create arrays of x values
x1 = range(-20, 6)
x2 = range(6, 21)
# Get the corresponding y values from the function
y1 = [c(i) for i in x1]
y2 = [c(i) for i in x2]
# Set up the graph
plt.xlabel('x')
plt.ylabel('c(x)')
plt.grid()
# Plot x against c(x)
plt.plot(x1,y1, color='purple')
plt.plot(x2,y2, color='purple')
# plot a circle close enough to the c limits for our purposes!
plt.plot(5, c(5), color='purple', marker='o', markerfacecolor='purple', markersize=10)
plt.plot(5, c(5.001), color='purple', marker='o', markerfacecolor='w', markersize=10)
# plot some points from the +ve direction
posx = [20, 15, 10, 6]
posy = [c(i) for i in posx]
plt.scatter(posx, posy, color='blue', marker='<', s=70)
for p in posx:
plt.annotate(str(c(p)),(p, c(p)),xytext=(p, c(p) + 5))
# plot some points from the -ve direction
negx = [-15, -10, -5, 0, 4]
negy = [c(i) for i in negx]
plt.scatter(negx, negy, color='orange', marker='>', s=70)
for n in negx:
plt.annotate(str(c(n)),(n, c(n)),xytext=(n, c(n) + 5))
plt.show()
"""
Explanation: As with the negative side, as x approaches 0 from the positive side, the value of b(x) gets closer to 0; and we can show that like this:
\begin{equation}\lim_{x \to 0^{+}} b(x) = 0 \end{equation}
Now, even though the function is not defined at x = 0; since the limit as we approach x = 0 from the negative side is 0, and the limit as we approach x = 0 from the positive side is also 0; we can say that the overall, or two-sided, limit for the function at x = 0 is 0:
\begin{equation}\lim_{x \to 0} b(x) = 0 \end{equation}
So can we therefore just ignore the gap and say that the function is continuous at x = 0? Well, recall that the formal definition for continuity is that to be continuous at a point, the function's limit as we approach the point in both directions must be equal to the function's value at that point. In this case, the two-sided limit as we approach x = 0 is 0, but b(0) is not defined; so the function is non-continuous at x = 0.
One-Sided Limits
Let's take a look at a different function. We'll call this one c:
\begin{equation}
c(x) = \begin{cases}
x + 20, & \text{if } x \le 5, \\
x - 100, & \text{otherwise }
\end{cases}
\end{equation}
In this case, the function's domain includes all real numbers, but its range is still non-continuous because of the way different values are returned depending on the value of x. The range of possible outputs for c(x ≤ 5) is ≤ 25, and the range of output values for c(x > 5) is c(x) > -95.
Let's use Python to plot function c with some values for c(x) marked on the line
End of explanation
"""
%matplotlib inline
# Define function d
def d(x):
if x != 25:
return 4 / (x - 25)
# Plot output from function d
from matplotlib import pyplot as plt
# Create an array of x values
x = list(range(-100, 24))
x.append(24.9) # Add some fractional x
x.append(25) # values around
x.append(25.1) # 25 for finer-grain results
x = x + list(range(26, 101))
# Get the corresponding y values from the function
y = [d(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('d(x)')
plt.grid()
# Plot x against d(x)
plt.plot(x,y, color='purple')
plt.show()
"""
Explanation: The plot of the function shows a line in which the c(x) value increases towards 25 as x approaches 5 from the negative side:
\begin{equation}\lim_{x \to 5^{-}} c(x) = 25 \end{equation}
However, the c(x) value decreases towards -95 as x approaches 5 from the positive side:
\begin{equation}\lim_{x \to 5^{+}} c(x) = -95 \end{equation}
So what can we say about the two-sided limit of this function at x = 5?
The limit as we approach x = 5 from the negative side is not equal to the limit as we approach x = 5 from the positive side, so no two-sided limit exists for this function at that point:
\begin{equation}\lim_{x \to 5} c(x) \text{ does not exist} \end{equation}
Asymptotes and Infinity
OK, time to look at another function:
\begin{equation}d(x) = \frac{4}{x - 25},\;\; x \ne 25\end{equation}
End of explanation
"""
%matplotlib inline
# Define function d
def d(x):
if x != 25:
return 4 / (x - 25)
# Plot output from function d
from matplotlib import pyplot as plt
# Create an array of x values
x = list(range(-100, 24))
x.append(24.9) # Add some fractional x
x.append(25) # values around
x.append(25.1) # 25 for finer-grain results
x = x + list(range(26, 101))
# Get the corresponding y values from the function
y = [d(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('d(x)')
plt.grid()
# Plot x against d(x)
plt.plot(x,y, color='purple')
# plot some points from the +ve direction
posx = [75, 50, 30, 25.5, 25.2, 25.1]
posy = [d(i) for i in posx]
plt.scatter(posx, posy, color='blue', marker='<')
for p in posx:
plt.annotate(str(d(p)),(p, d(p)))
# plot some points from the -ve direction
negx = [-55, 0, 23, 24.5, 24.8, 24.9]
negy = [d(i) for i in negx]
plt.scatter(negx, negy, color='orange', marker='>')
for n in negx:
plt.annotate(str(d(n)),(n, d(n)))
plt.show()
"""
Explanation: What's the limit of d as x approaches 25?
We can plot a few points to help us:
End of explanation
"""
# Define function a
def a(x):
return x**2 + 1
import pandas as pd
# Create a dataframe with an x column containing values either side of 0
df = pd.DataFrame ({'x': [-1, -0.5, -0.2, -0.1, -0.01, 0, 0.01, 0.1, 0.2, 0.5, 1]})
# Add an a(x) column by applying the function to x
df['a(x)'] = a(df['x'])
#Display the dataframe
df
"""
Explanation: From these plotted values, we can see that as x approaches 25 from the negative side, d(x) is decreasing, and as x approaches 25 from the positive side, d(x) is increasing. As x gets closer to 25, d(x) increases or decreases more significantly.
If we were to plot every fractional value of d(x) for x values between 24.9 and 25, we'd see a line that decreases indefinitely, getting closer and closer to the x = 25 vertical line, but never actually reaching it. Similarly, plotting every x value between 25 and 25.1 would result in a line going up indefinitely, but always staying to the right of the vertical x = 25 line.
The x = 25 line in this case is an asymptote - a line to which a curve moves ever closer but never actually reaches. The positive limit for x = 25 in this case is not a real numbered value, but infinity:
\begin{equation}\lim_{x \to 25^{+}} d(x) = \infty \end{equation}
Conversely, the negative limit for x = 25 is negative infinity:
\begin{equation}\lim_{x \to 25^{-}} d(x) = -\infty \end{equation}
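If you have SymPy available, you can confirm these one-sided limits symbolically (a quick optional check, separate from the plotting code above):
```python
from sympy import symbols, limit, oo
xs = symbols('x')
print(limit(4 / (xs - 25), xs, 25, dir='+'))  # oo
print(limit(4 / (xs - 25), xs, 25, dir='-'))  # -oo
```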
Finding Limits Numerically Using a Table
Up to now, we've estimated limits for a point graphically by examining a graph of a function. You can also approximate limits by creating a table of x values and the corresponding function values either side of the point for which you want to find the limits.
For example, let's return to our a function:
\begin{equation}a(x) = x^{2} + 1\end{equation}
If we want to find the limits as x is approaching 0, we can apply the function to some values either side of 0 and view them as a table. Here's some Python code to do that:
End of explanation
"""
# Define function e
def e(x):
if x == 0:
return 5
else:
return 1 + x**2
import pandas as pd
# Create a dataframe with an x column containing values either side of 0
x= [-1, -0.5, -0.2, -0.1, -0.01, 0, 0.01, 0.1, 0.2, 0.5, 1]
y =[e(i) for i in x]
df = pd.DataFrame ({' x':x, 'e(x)': y })
df
"""
Explanation: Looking at the output, you can see that the function values are getting closer to 1 as x approaches 0 from both sides, so:
\begin{equation}\lim_{x \to 0} a(x) = 1 \end{equation}
Additionally, you can see that the actual value of the function when x = 0 is also 1, so:
\begin{equation}\lim_{x \to 0} a(x) = a(0) \end{equation}
Which according to our earlier definition, means that the function is continuous at 0.
However, you should be careful not to assume that the limit when x is approaching 0 will always be the same as the value when x = 0; even when the function is defined for x = 0.
For example, consider the following function:
\begin{equation}
e(x) = \begin{cases}
5, & \text{if } x = 0, \\
1 + x^{2}, & \text{otherwise }
\end{cases}
\end{equation}
Let's see what the function returns for x values either side of 0 in a table:
End of explanation
"""
%matplotlib inline
# Define function e
def e(x):
if x == 0:
return 5
else:
return 1 + x**2
from matplotlib import pyplot as plt
x= [-1, -0.5, -0.2, -0.1, -0.01, 0.01, 0.1, 0.2, 0.5, 1]
y =[e(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('e(x)')
plt.grid()
# Plot x against e(x)
plt.plot(x, y, color='purple')
# (we're cheating slightly - we'll manually plot the discontinous point...)
plt.scatter(0, e(0), color='purple')
# (... and overplot the gap)
plt.plot(0, 1, color='purple', marker='o', markerfacecolor='w', markersize=10)
plt.show()
"""
Explanation: As before, you can see that as the x values approach 0 from both sides, the value of the function gets closer to 1, so:
\begin{equation}\lim_{x \to 0} e(x) = 1 \end{equation}
However the actual value of the function when x = 0 is 5, not 1; so:
\begin{equation}\lim_{x \to 0} e(x) \ne e(0) \end{equation}
Which according to our earlier definition, means that the function is non-continuous at 0.
Run the following cell to see what this looks like as a graph:
End of explanation
"""
%matplotlib inline
# Define function g
def g(x):
if x != 1:
return (x**2 - 1) / (x - 1)
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[g(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='purple')
plt.show()
"""
Explanation: Determining Limits Analytically
We've seen how to estimate limits visually on a graph, and by creating a table of x and f(x) values either side of a point. There are also some mathematical techniques we can use to calculate limits.
Direct Substitution
Recall that our definition for a function to be continuous at a point is that the two-directional limit must exist and that it must be equal to the function value at that point. It therefore follows that if we know that a function is continuous at a given point, we can determine the limit simply by evaluating the function for that point.
For example, let's consider the following function g:
\begin{equation}g(x) = \frac{x^{2} - 1}{x - 1}, x \ne 1\end{equation}
Run the following code to see this function as a graph:
End of explanation
"""
%matplotlib inline
# Define function g
def g(x):
if x != 1:
return (x**2 - 1) / (x - 1)
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[g(i) for i in x]
# Set the x point we're interested in
zx = 4
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='purple')
# Plot g(x) when x = 0
zy = g(zx)
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx - 2, zy + 1))
plt.show()
print ('Limit as x -> ' + str(zx) + ' = ' + str(zy))
"""
Explanation: Now, suppose we need to find the limit of g(x) as x approaches 4. We can try to find this by simply substituting 4 for the x values in the function:
\begin{equation}g(4) = \frac{4^{2} - 1}{4 - 1}\end{equation}
This simplifies to:
\begin{equation}g(4) = \frac{15}{3}\end{equation}
So:
\begin{equation}\lim_{x \to 4} g(x) = 5\end{equation}
Let's take a look:
End of explanation
"""
%matplotlib inline
# Define function g
def g(x):
if x != 1:
return (x**2 - 1) / (x - 1)
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[g(i) for i in x]
# Set the x point we're interested in
zx = 1
# Calculate the limit of g(x) when x->zx using the factored equation
zy = zx + 1
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='purple')
# Plot the limit of g(x)
zy = zx + 1
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx - 2, zy + 1))
plt.show()
print ('Limit as x -> ' + str(zx) + ' = ' + str(zy))
"""
Explanation: Factorization
OK, now let's try to find the limit of g(x) as x approaches 1.
We know from the function definition that the function is not defined at x = 1, but we're not trying to find the value of g(x) when x equals 1; we're trying to find the limit of g(x) as x approaches 1.
The direct substitution approach won't work in this case:
\begin{equation}g(1) = \frac{1^{2} - 1}{1 - 1}\end{equation}
Simplifies to:
\begin{equation}g(1) = \frac{0}{0}\end{equation}
Anything divided by 0 is undefined; so all we've done is to confirm that the function is not defined at this point. You might be tempted to assume that this means the limit does not exist, but <sup>0</sup>/<sub>0</sub> is a special case; it's what's known as the indeterminate form; and we may still be able to find the limit another way.
We can factor the x<sup>2</sup> - 1 numerator in the definition of g as (x - 1)(x + 1), so the limit equation can be rewritten like this:
\begin{equation}\lim_{x \to a} g(x) = \frac{(x-1)(x+1)}{x - 1}\end{equation}
The x - 1 in the numerator and the x - 1 in the denominator cancel each other out:
\begin{equation}\lim_{x \to a} g(x)= x+1\end{equation}
So we can now use substitution for x = 1 to calculate the limit as 1 + 1:
\begin{equation}\lim_{x \to 1} g(x) = 2\end{equation}
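If SymPy is installed, it can confirm this result symbolically (a quick optional check):
```python
from sympy import symbols, limit
xs = symbols('x')
print(limit((xs**2 - 1) / (xs - 1), xs, 1))  # prints 2
```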
Let's see what that looks like:
End of explanation
"""
%matplotlib inline
# Define function h
def h(x):
import math
if x >= 0 and x != 4:
return (math.sqrt(x) - 2) / (x - 4)
# Plot output from function h
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[h(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
# Plot x against h(x)
plt.plot(x,y, color='purple')
plt.show()
"""
Explanation: Rationalization
Let's look at another function:
\begin{equation}h(x) = \frac{\sqrt{x} - 2}{x - 4}, x \ne 4 \text{ and } x \ge 0\end{equation}
Run the following cell to plot this function as a graph:
End of explanation
"""
%matplotlib inline
# Define function h
def h(x):
import math
if x >= 0 and x != 4:
return (math.sqrt(x) - 2) / (x - 4)
# Plot output from function h
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[h(i) for i in x]
# Specify the point we're interested in
zx = 4
# Calculate the limit of h(x) when x->zx using the rationalized equation
import math
zy = 1 / ((math.sqrt(zx)) + 2)
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
# Plot x against h(x)
plt.plot(x,y, color='purple')
# Plot the limit of h(x) when x->zx
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx + 2, zy))
plt.show()
print ('Limit as x -> ' + str(zx) + ' = ' + str(zy))
"""
Explanation: To find the limit of h(x) as x approaches 4, we can't use the direct substitution method because the function is not defined at that point. However, we can take an alternative approach by multiplying both the numerator and denominator in the function by the conjugate of the numerator to rationalize the square root term (a conjugate is a binomial formed by reversing the sign of the second term of a binomial):
\begin{equation}\lim_{x \to a}h(x) = \frac{\sqrt{x} - 2}{x - 4}\cdot\frac{\sqrt{x} + 2}{\sqrt{x} + 2}\end{equation}
This simplifies to:
\begin{equation}\lim_{x \to a}h(x) = \frac{(\sqrt{x})^{2} - 2^{2}}{(x - 4)({\sqrt{x} + 2})}\end{equation}
The (√x)<sup>2</sup> is x, and 2<sup>2</sup> is 4, so we can simplify the numerator as follows:
\begin{equation}\lim_{x \to a}h(x) = \frac{x - 4}{(x - 4)({\sqrt{x} + 2})}\end{equation}
Now we can cancel out the x - 4 in both the numerator and denominator:
\begin{equation}\lim_{x \to a}h(x) = \frac{1}{{\sqrt{x} + 2}}\end{equation}
So for x approaching 4, this is:
\begin{equation}\lim_{x \to 4}h(x) = \frac{1}{{\sqrt{4} + 2}}\end{equation}
This simplifies to:
\begin{equation}\lim_{x \to 4}h(x) = \frac{1}{2 + 2}\end{equation}
Which is of course:
\begin{equation}\lim_{x \to 4}h(x) = \frac{1}{4}\end{equation}
So the limit of h(x) as x approaches 4 is <sup>1</sup>/<sub>4</sub> or 0.25.
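As an optional symbolic cross-check (assuming SymPy is available):
```python
from sympy import symbols, limit, sqrt
xs = symbols('x')
print(limit((sqrt(xs) - 2) / (xs - 4), xs, 4))  # prints 1/4
```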
Let's calculate and plot this with Python:
End of explanation
"""
%matplotlib inline
# Define function j
def j(x):
return x * 2 - 2
# Define function l
def l(x):
return -x * 2 + 4
# Plot output from functions j and l
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the functions
jy = [j(i) for i in x]
ly = [l(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.xticks(range(-10,11, 1))
plt.ylabel('y')
plt.yticks(range(-30,30, 2))
plt.grid()
# Plot x against j(x)
plt.plot(x,jy, color='green', label='j(x)')
# Plot x against l(x)
plt.plot(x,ly, color='magenta', label='l(x)')
plt.legend()
plt.show()
"""
Explanation: Rules for Limit Operations
When you are working with functions and limits, you may want to combine limits using arithmetic operations. There are some intuitive rules for doing this.
Let's define two simple functions, j:
\begin{equation}j(x) = 2x - 2\end{equation}
and l:
\begin{equation}l(x) = -2x + 4\end{equation}
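As a small numeric sketch of one such rule — the limit of a sum is the sum of the limits — we can evaluate j and l at points approaching, say, x = 4 from both sides:
```python
# Since j and l are continuous, their limits at x = 4 are just j(4) and l(4);
# the limit of j(x) + l(x) matches j(4) + l(4).
a = 4
for delta in [0.1, 0.01, 0.001]:
    xl, xr = a - delta, a + delta
    print(delta, (j(xl), l(xl), j(xl) + l(xl)), (j(xr), l(xr), j(xr) + l(xr)))
print("j(4) + l(4) =", j(a) + l(a))
```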
Run the cell below to plot these functions:
End of explanation
"""
|
arcyfelix/Courses | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/02-NumPy/.ipynb_checkpoints/Numpy Exercises-checkpoint.ipynb | apache-2.0 | # CODE HERE
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
<center>Copyright Pierian Data 2017</center>
<center>For more information, visit us at www.pieriandata.com</center>
NumPy Exercises
Now that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.
IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output!
Import NumPy as np
Create an array of 10 zeros
End of explanation
"""
# CODE HERE
"""
Explanation: Create an array of 10 ones
End of explanation
"""
# CODE HERE
"""
Explanation: Create an array of 10 fives
End of explanation
"""
# CODE HERE
"""
Explanation: Create an array of the integers from 10 to 50
End of explanation
"""
# CODE HERE
"""
Explanation: Create an array of all the even integers from 10 to 50
End of explanation
"""
# CODE HERE
"""
Explanation: Create a 3x3 matrix with values ranging from 0 to 8
End of explanation
"""
# CODE HERE
"""
Explanation: Create a 3x3 identity matrix
End of explanation
"""
# CODE HERE
"""
Explanation: Use NumPy to generate a random number between 0 and 1
End of explanation
"""
# CODE HERE
"""
Explanation: Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
End of explanation
"""
# HERE IS THE GIVEN MATRIX CALLED MAT
# USE IT FOR THE FOLLOWING TASKS
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
"""
Explanation: Create the following matrix:
Create an array of 20 linearly spaced points between 0 and 1:
Numpy Indexing and Selection
Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
End of explanation
"""
# CODE HERE
"""
Explanation: Now do the following
Get the sum of all the values in mat
End of explanation
"""
# CODE HERE
"""
Explanation: Get the standard deviation of the values in mat
End of explanation
"""
# CODE HERE
"""
Explanation: Get the sum of all the columns in mat
End of explanation
"""
|
ALEXKIRNAS/DataScience | CS231n/assignment2/BatchNormalization.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
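The core computation can be summarized in a few lines of NumPy — this is only a conceptual sketch (the assignment's batchnorm_forward uses a bn_param dictionary and a cache rather than this signature):
```python
def batchnorm_sketch(x, gamma, beta, running_mean, running_var,
                     momentum=0.9, eps=1e-5, mode='train'):
    # Normalize each feature using minibatch statistics at train time,
    # and the running averages at test time; then scale and shift.
    if mode == 'train':
        mu, var = x.mean(axis=0), x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + eps)
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    return gamma * x_hat + beta, running_mean, running_var
```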
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
"""
Explanation: Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
|
UltronAI/Deep-Learning | CS231n/assignment2/.ipynb_checkpoints/PyTorch-checkpoint.ipynb | mit | import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import torchvision.transforms as T
import numpy as np
import timeit
"""
Explanation: Training a ConvNet PyTorch
In this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.
End of explanation
"""
class ChunkSampler(sampler.Sampler):
"""Samples elements sequentially from some offset.
Arguments:
num_samples: # of desired datapoints
start: offset where we should start selecting from
"""
def __init__(self, num_samples, start = 0):
self.num_samples = num_samples
self.start = start
def __iter__(self):
return iter(range(self.start, self.start + self.num_samples))
def __len__(self):
return self.num_samples
NUM_TRAIN = 49000
NUM_VAL = 1000
cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=T.ToTensor())
loader_train = DataLoader(cifar10_train, batch_size=64, sampler=ChunkSampler(NUM_TRAIN, 0))
cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=T.ToTensor())
loader_val = DataLoader(cifar10_val, batch_size=64, sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))
cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,
transform=T.ToTensor())
loader_test = DataLoader(cifar10_test, batch_size=64)
"""
Explanation: What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook).
Why?
Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
How will I learn PyTorch?
If you've used Torch before, but are new to PyTorch, this tutorial might be of use: http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html
Otherwise, this notebook will walk you through much of what you need to do to train models in Torch. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
Load Datasets
We load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.
End of explanation
"""
dtype = torch.FloatTensor # the CPU datatype
# Constant to control how frequently we print train loss
print_every = 100
# This is a little utility that we'll use to reset the model
# if we want to re-initialize all our parameters
def reset(m):
if hasattr(m, 'reset_parameters'):
m.reset_parameters()
"""
Explanation: For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
End of explanation
"""
class Flatten(nn.Module):
def forward(self, x):
N, C, H, W = x.size() # read in N, C, H, W
return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
"""
Explanation: Example Model
Some assorted tidbits
Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.
We'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where:
* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the width of the intermediate feature map in pixels
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we input data into fully connected affine layers, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "Flatten" operation to collapse the C x H x W values per representation into a single long vector. The Flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
End of explanation
"""
# Here's where we define the architecture of the model...
simple_model = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=7, stride=2),
nn.ReLU(inplace=True),
Flatten(), # see above for explanation
nn.Linear(5408, 10), # affine layer
)
# Set the type of all data in this model to be FloatTensor
simple_model.type(dtype)
loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.Adam(simple_model.parameters(), lr=1e-2) # lr sets the learning rate of the optimizer
"""
Explanation: The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer
one after the other.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
End of explanation
"""
fixed_model_base = nn.Sequential( # You fill this in!
)
fixed_model = fixed_model_base.type(dtype)
"""
Explanation: PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note: what we call in the class "spatial batch norm" is called "BatchNorm2D" in PyTorch.
Layers: http://pytorch.org/docs/nn.html
Activations: http://pytorch.org/docs/nn.html#non-linear-activations
Loss functions: http://pytorch.org/docs/nn.html#loss-functions
Optimizers: http://pytorch.org/docs/optim.html#algorithms
Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the PyTorch documentation and configuring your own model.
Using the code provided above as guidance, and using the following PyTorch documentation, specify a model with the following architecture:
7x7 Convolutional Layer with 32 filters and stride of 1
ReLU Activation Layer
Spatial Batch Normalization Layer
2x2 Max Pooling layer with a stride of 2
Affine layer with 1024 output units
ReLU Activation Layer
Affine layer from 1024 input units to 10 outputs
And finally, set up a cross-entropy loss function and the RMSprop learning rule.
End of explanation
"""
## Now we're going to feed a random batch into the model you defined and make sure the output is the right size
x = torch.randn(64, 3, 32, 32).type(dtype)
x_var = Variable(x.type(dtype)) # Construct a PyTorch Variable out of your input data
ans = fixed_model(x_var) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
"""
Explanation: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
End of explanation
"""
# Verify that CUDA is properly configured and you have a GPU available
torch.cuda.is_available()
import copy
gpu_dtype = torch.cuda.FloatTensor
fixed_model_gpu = copy.deepcopy(fixed_model_base).type(gpu_dtype)
x_gpu = torch.randn(64, 3, 32, 32).type(gpu_dtype)
x_var_gpu = Variable(x.type(gpu_dtype)) # Construct a PyTorch Variable out of your input data
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
"""
Explanation: GPU!
Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.
If this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.
End of explanation
"""
%%timeit
ans = fixed_model(x_var)
"""
Explanation: Run the following cell to evaluate the performance of the forward pass running on the CPU:
End of explanation
"""
%%timeit
torch.cuda.synchronize() # Make sure there are no pending GPU computations
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
torch.cuda.synchronize() # Make sure there are no pending GPU computations
"""
Explanation: ... and now the GPU:
End of explanation
"""
# One straightforward setup matching the spec for this cell: a cross-entropy
# loss and an RMSprop optimizer with a 1e-3 learning rate.
loss_fn = nn.CrossEntropyLoss().type(gpu_dtype)
optimizer = optim.RMSprop(fixed_model_gpu.parameters(), lr=1e-3)
# This sets the model in "training" mode. This is relevant for some layers that may have different behavior
# in training mode vs testing mode, such as Dropout and BatchNorm.
fixed_model_gpu.train()
# Load one batch at a time.
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(gpu_dtype))
y_var = Variable(y.type(gpu_dtype).long())
# This is the forward pass: predict the scores for each class, for each x in the batch.
scores = fixed_model_gpu(x_var)
# Use the correct y values and the predicted y values to compute the loss.
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
# Zero out all of the gradients for the variables which the optimizer will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with respect to each
# parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients computed by the backwards pass.
optimizer.step()
"""
Explanation: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder that is torch.cuda.FloatTensor (in our notebook here as gpu_dtype)
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the fixed_model_gpu you defined above).
Make sure you understand how each PyTorch function used below corresponds to what you implemented in your custom neural network implementation.
Note that because we are not resetting the weights anywhere below, if you run the cell multiple times, you are effectively training multiple epochs (so your performance should improve).
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function:
End of explanation
"""
def train(model, loss_fn, optimizer, num_epochs = 1):
for epoch in range(num_epochs):
print('Starting epoch %d / %d' % (epoch + 1, num_epochs))
model.train()
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(gpu_dtype))
y_var = Variable(y.type(gpu_dtype).long())
scores = model(x_var)
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
def check_accuracy(model, loader):
if loader.dataset.train:
print('Checking accuracy on validation set')
else:
print('Checking accuracy on test set')
num_correct = 0
num_samples = 0
model.eval() # Put the model in test mode (the opposite of model.train(), essentially)
for x, y in loader:
x_var = Variable(x.type(gpu_dtype), volatile=True)
scores = model(x_var)
_, preds = scores.data.cpu().max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
"""
Explanation: Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:
End of explanation
"""
torch.cuda.random.manual_seed(12345)
fixed_model_gpu.apply(reset)
train(fixed_model_gpu, loss_fn, optimizer, num_epochs=1)
check_accuracy(fixed_model_gpu, loader_val)
"""
Explanation: Check the accuracy of the model.
Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.
You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.
But don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.
End of explanation
"""
# Train your model here, and make sure the output of this cell is the accuracy of your best model on the
# train, val, and test sets. Here's some code to get you started. The output of this cell should be the training
# and validation accuracy on your best model (measured by validation accuracy).
model = None
loss_fn = None
optimizer = None
train(model, loss_fn, optimizer, num_epochs=1)
check_accuracy(model, loader_val)
"""
Explanation: Don't forget the validation set!
And note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value.
Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >=70% accuracy on the CIFAR-10 validation set. You can use the check_accuracy and train functions from above.
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Pooling vs Strided Convolution: Do you use max pooling or just strided convolutions?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your feature map gets small (7x7 or so) and then apply an average pooling operation to reduce it to a 1x1 spatial map of shape (1, 1, Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (see Table 1 for their architecture).
Regularization: Add l2 weight regularization, or perhaps use Dropout.
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
Model ensembles
Data augmentation
New Architectures
ResNets where the input from the previous layer is added to the output.
DenseNets where inputs into previous layers are concatenated together.
This blog has an in-depth overview
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network.
Have fun and happy training!
End of explanation
"""
best_model = None
check_accuracy(best_model, loader_test)
"""
Explanation: Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Tell us here!
Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
End of explanation
"""
|
diego0020/va_course_2015 | text_analysis/Text_Analysis_Tutorial.ipynb | mit | %cd C:/temp/
import pandas as pd
train = pd.read_csv("labeledTrainData.tsv", header=0, delimiter="\t", quoting=3)
"""
Explanation: Bag-of-Words
The bag-of-words model is a simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model is commonly used in methods of document classification, where the (frequency of) occurrence of each word is used as a feature for training a classifier. [https://en.wikipedia.org/wiki/Bag-of-words_model]
In this tutorial [adapted from https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words] we'll create a bag-of-words representation of the dataset and use it as input to a machine learning algorithm
Reading the Data
End of explanation
"""
print(train.columns.values)
print(train.shape)
"""
Explanation: "header=0" indicates that the first line of the file contains column names, "delimiter=\t" indicates that the fields are separated by tabs, and quoting=3 ignore doubled quotes
End of explanation
"""
print train["review"][0]
"""
Explanation: We have 25,000 rows; let's check the first one
End of explanation
"""
from bs4 import BeautifulSoup
example1 = BeautifulSoup(train["review"][0])
print(example1.get_text())
"""
Explanation: Data Cleaning and Text Preprocessing
First, we need to clean the text, removing the html markup. For this purpose, we'll use the Beautiful Soup library.
End of explanation
"""
import re
# Use regular expressions to do a find-and-replace
letters_only = re.sub("[^a-zA-Z]", # The pattern to search for (everything except letters)
" ", # The pattern to replace it with
example1.get_text() ) # The text to search
print letters_only
"""
Explanation: Dealing with Punctuation, Numbers and Stopwords
When considering how to clean the text, we should think about the data problem we are trying to solve. For many problems, it makes sense to remove punctuation. On the other hand, in this case, we are tackling a sentiment analysis problem, and it is possible that "!!!" or ":-(" could carry sentiment, and should be treated as words. In this tutorial, for simplicity, we remove the punctuation altogether.
To remove punctuation and numbers, we will use a package for dealing with regular expressions, called re, that comes built-in with Python
End of explanation
"""
lower_case = letters_only.lower() # Convert to lower case
words = lower_case.split() # Split into words
print(words)
"""
Explanation: We'll also convert our reviews to lower case and split them into individual words (a process called "tokenization")
End of explanation
"""
import nltk
#nltk.download() # Download text data sets, including stop words (A new window should open)
"""
Explanation: Finally, we need to decide how to deal with frequently occurring words that don't carry much meaning. Such words are called "stop words"; in English they include words such as "a", "and", "is", and "the". Conveniently, there are Python packages that come with stop word lists built in. Let's import a stop word list from the Python Natural Language Toolkit (NLTK).
End of explanation
"""
from nltk.corpus import stopwords # Import the stop word list
print stopwords.words("english")
"""
Explanation: Now we can use nltk to get a list of stop words
End of explanation
"""
words = [w for w in words if not w in stopwords.words("english")]
print(words)
"""
Explanation: To remove stop words from our movie review
End of explanation
"""
def review_to_words( raw_review ):
# Function to convert a raw review to a string of words
# The input is a single string (a raw movie review), and
# the output is a single string (a preprocessed movie review)
#
# 1. Remove HTML
review_text = BeautifulSoup(raw_review).get_text()
#
# 2. Remove non-letters
letters_only = re.sub("[^a-zA-Z]", " ", review_text)
#
# 3. Convert to lower case, split into individual words
words = letters_only.lower().split()
#
# 4. In Python, searching a set is much faster than searching
# a list, so convert the stop words to a set
stops = set(stopwords.words("english"))
#
# 5. Remove stop words
meaningful_words = [w for w in words if not w in stops]
#
# 6. Join the words back into one string separated by space,
# and return the result.
return( " ".join( meaningful_words ))
"""
Explanation: Now we have code to clean one review - but we need to clean all 25,000 training reviews! To make our code reusable, let's create a function
End of explanation
"""
num_reviews = train["review"].size
clean_train_reviews = []
for i in xrange( 0, num_reviews ):
if( (i+1)%2500 == 0 ):
print "Review %d of %d\n" % ( i+1, num_reviews )
# Call our function for each one, and add the result to the list
clean_train_reviews.append( review_to_words( train["review"][i] ) )
"""
Explanation: At the end of the function we joined the words back into one paragraph. This is to make the output easier to use in our Bag of Words
Now let's loop through and clean all of the training set at once (this might take a few minutes)
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
max_features = 5000)
# fit_transform() does two functions: First, it fits the model
# and learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a list of
# strings.
train_data_features = vectorizer.fit_transform(clean_train_reviews)
train_data_features = train_data_features.toarray()
print(train_data_features.shape)
"""
Explanation: Creating Features from a Bag of Words
Now that we have our training reviews tidied up, how do we convert them to some kind of numeric representation for machine learning? One common approach is called a Bag of Words. The Bag of Words model learns a vocabulary from all of the documents, then models each document by counting the number of times each word appears. For example, consider the following two sentences:
Sentence 1: "The cat sat on the hat"
Sentence 2: "The dog ate the cat and the hat"
From these two sentences, our vocabulary is as follows:
{ the, cat, sat, on, hat, dog, ate, and }
To get our bags of words, we count the number of times each word occurs in each sentence. In Sentence 1, "the" appears twice, and "cat", "sat", "on", and "hat" each appear once, so the feature vector for Sentence 1 is:
{ the, cat, sat, on, hat, dog, ate, and }
Sentence 1: { 2, 1, 1, 1, 1, 0, 0, 0 }
Similarly, the features for Sentence 2 are: { 3, 1, 0, 0, 1, 1, 1, 1}
In the IMDB data, we have a very large number of reviews, which will give us a large vocabulary. To limit the size of the feature vectors, we should choose some maximum vocabulary size. Below, we use the 5000 most frequent words (remembering that stop words have already been removed).
End of explanation
"""
# Take a look at the words in the vocabulary
vocab = vectorizer.get_feature_names()
print(vocab)
import numpy as np
# Sum up the counts of each vocabulary word
dist = np.sum(train_data_features, axis=0)
# For each, print the vocabulary word and the number of times it
# appears in the training set
for tag, count in zip(vocab, dist):
print count, tag
"""
Explanation: Now that the Bag of Words model is trained, let's look at the vocabulary
End of explanation
"""
from sklearn.cross_validation import train_test_split
random_state = np.random.RandomState(0)
X, y = train_data_features, train["sentiment"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=random_state)
print("Training the random forest...")
from sklearn.ensemble import RandomForestClassifier
# Initialize a Random Forest classifier with 100 trees
forest = RandomForestClassifier(n_estimators = 100, n_jobs=2)
from time import time
t0 = time()
# Fit the forest to the training set, using the bag of words as
# features and the sentiment labels as the response variable
forest = forest.fit(X_train, y_train)
print("... took %0.3fs" % (time() - t0))
"""
Explanation: Classification
At this point, we have numeric training features from the Bag of Words and the original sentiment labels for each feature vector, so let's do some supervised learning! Here, we'll use the Random Forest classifier. The Random Forest algorithm is included in scikit-learn (Random Forest uses many tree-based classifiers to make predictions, hence the "forest"). Below, we set the number of trees to 100 as a reasonable default value. More trees may or may not perform better (why?), but will certainly take longer to run. Likewise, the more features you include for each review, the longer this will take.
First, we'll separate the dataset into a training and testing set for model evaluation.
End of explanation
"""
y_pred = forest.predict(X_test)
from sklearn import metrics
print(metrics.classification_report(y_test, y_pred, target_names=['negative review', 'positive review']))
"""
Explanation: Now we'll evaluate the performance of our classifier
End of explanation
"""
import pandas as pd
afinn = pd.read_csv("AFINN-111.txt", header=None, delimiter="\t")
sent_dict = dict(zip(afinn[0], afinn[1]))
print(sent_dict)
"""
Explanation: Sentiment Analysis
Now we'll do sentiment analysis of the reviews using a famous list of words rated for sentiments called AFINN-111.
Check the description in http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=6010
First, we'll load the data file and create a sentiment dictionary
End of explanation
"""
#Calculate the sentiment in the provided text
def sentiment_in_text(text, sent_dict):
sentiment = 0.0
words = text.split()
for w in words:
if not w in sent_dict: continue
sentiment += float(sent_dict[w])
return sentiment
"""
Explanation: Next, we create a function to sum the sentiment associated with each word in a paragraph
End of explanation
"""
print(clean_train_reviews[0])
print(sentiment_in_text(clean_train_reviews[0], sent_dict))
print(clean_train_reviews[1])
print(sentiment_in_text(clean_train_reviews[1], sent_dict))
"""
Explanation: We'll use our cleaned review dataset in clean_train_reviews. Let's check the results on the first two.
Remember that negative values represent negative sentiments
End of explanation
"""
sentiment_values = [sentiment_in_text(x, sent_dict) for x in clean_train_reviews] #This is a list comprehension expression
sentiment_values = np.array(sentiment_values) #We convert the list to a numpy array for easier manipulation
print(sentiment_values)
"""
Explanation: Why this approach to sentiment analysis in movie reviews can be problematic?
Remember that we always need to think about the context when doing data analysis
Now we'll apply the function to the whole clean dataset
End of explanation
"""
y_pred_sent = [1 if x>0 else 0 for x in sentiment_values]
"""
Explanation: Then we'll convert this sentiment values to positive (1) and negative (0) reviews as we have in our dataset
End of explanation
"""
print(metrics.classification_report(y, y_pred_sent, target_names=['negative review', 'positive review']))
"""
Explanation: And we'll compare our results with the entire target vector (because we are not doing training at this point)
End of explanation
"""
#The bag-of-words is in the variable train_data_features
print(train_data_features.shape)
sentiment_values_matrix = np.matrix(sentiment_values).T
print(sentiment_values_matrix.shape)
#numpy.hstack() Stack arrays in sequence horizontally (column wise). The number of rows must match
X2 = np.hstack((sentiment_values_matrix, train_data_features))
print(X2.shape)
"""
Explanation: Not bad for such a simple method.
What can we say about the performance of our method? How can we improve the precision or recall?
Feature Engineering
The sentiment_values that we just created could be used as an additional feature of the dataset for our classification task. Let's combine the bag of words with the sentiment values in an extended feature set. This could improve the classification performance, but could also be detrimental (why?), so let's check.
End of explanation
"""
random_state = np.random.RandomState(0)
y = train["sentiment"]
X_train2, X_test2, y_train2, y_test2 = train_test_split(X2, y, test_size=.25, random_state=random_state)
print("Training again the random forest...")
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators = 100, n_jobs=2)
from time import time
t0 = time()
forest = forest.fit(X_train2, y_train2)
print("... took %0.3fs" % (time() - t0))
y_pred2 = forest.predict(X_test2)
from sklearn import metrics
print(metrics.classification_report(y_test2, y_pred2, target_names=['negative review', 'positive review']))
"""
Explanation: Now we can do classification again with our new feature set
End of explanation
"""
# Read the test data
test = pd.read_csv("testData.tsv", header=0, delimiter="\t", \
quoting=3 )
# Verify that there are 25,000 rows and 2 columns
print test.shape
# Create an empty list and append the clean reviews one by one
num_reviews = len(test["review"])
clean_test_reviews = []
print "Cleaning and parsing the test set movie reviews...\n"
for i in xrange(0,num_reviews):
if( (i+1) % 2500 == 0 ):
print "Review %d of %d\n" % (i+1, num_reviews)
clean_review = review_to_words( test["review"][i] )
clean_test_reviews.append( clean_review )
# Get a bag of words for the test set, and convert to a numpy array
test_data_features = vectorizer.transform(clean_test_reviews)
test_data_features = test_data_features.toarray()
# Use the random forest to make sentiment label predictions
result = forest.predict(test_data_features)
# Copy the results to a pandas dataframe with an "id" column and
# a "sentiment" column
output = pd.DataFrame( data={"id":test["id"], "sentiment":result} )
# Use pandas to write the comma-separated output file
output.to_csv( "Bag_of_Words_model.csv", index=False, quoting=3 )
"""
Explanation: Was the new feature set useful or not?
An important note about Random Forests: every time you train a Random Forest you will obtain a somewhat different forest (because it's random), so performance between forests can differ slightly just because of the method, although the difference shouldn't be too big.
Optional: Creating a Submission for Kaggle
Note that when we use the Bag of Words for the test set, we only call "transform", not "fit_transform" as we did for the training set. In machine learning, you shouldn't use the test set to fit your model, otherwise you run the risk of overfitting. For this reason, we keep the test set off-limits until we are ready to make predictions.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cccma/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-3', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CCCMA
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:47
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
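# Example (hypothetical selections drawn from the valid choices listed above; illustration only):
# DOC.set_value("Energy")
# DOC.set_value("Mass")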
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
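# Example of the expected "Conserved property, variable1, variable2" format (hypothetical names, illustration only):
# DOC.set_value("Mass, variable1, variable2")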
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
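# Example (hypothetical value, in seconds; illustration only):
# DOC.set_value(3600)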
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology: what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
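# Example (hypothetical value, in PSU; illustration only):
# DOC.set_value(4.0)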
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
atlury/deep-opencl | DL0110EN/3.3.1_softmax_in_one_dimension_v2.ipynb | lgpl-3.0 | # Import the libraries we need for this lab
import torch.nn as nn
import torch
import matplotlib.pyplot as plt
import numpy as np
from torch.utils.data import Dataset, DataLoader
"""
Explanation: <a href="http://cocl.us/pytorch_link_top">
<img src="https://cocl.us/Pytorch_top" width="750" alt="IBM 10TB Storage" />
</a>
<img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center">
<h1>Softmax Classifier 1D</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will use Softmax to classify three linearly separable classes; the features are in one dimension.</p>
<ul>
<li><a href="#Makeup_Data">Make Some Data</a></li>
<li><a href="#Softmax">Build Softmax Classifier</a></li>
<li><a href="#Model_Cost">Train the Model</a></li>
<li><a href="#Result">Analyze Results</a></li>
</ul>
<p>Estimated Time Needed: <strong>25 min</strong></p>
<hr>
<h2>Preparation</h2>
We'll need the following libraries:
End of explanation
"""
# Create class for plotting
def plot_data(data_set, model = None, n = 1, color = False):
X = data_set[:][0]
Y = data_set[:][1]
plt.plot(X[Y == 0, 0].numpy(), Y[Y == 0].numpy(), 'bo', label = 'y = 0')
plt.plot(X[Y == 1, 0].numpy(), 0 * Y[Y == 1].numpy(), 'ro', label = 'y = 1')
plt.plot(X[Y == 2, 0].numpy(), 0 * Y[Y == 2].numpy(), 'go', label = 'y = 2')
plt.ylim((-0.1, 3))
plt.legend()
if model != None:
w = list(model.parameters())[0][0].detach()
b = list(model.parameters())[1][0].detach()
y_label = ['yhat=0', 'yhat=1', 'yhat=2']
y_color = ['b', 'r', 'g']
Y = []
for w, b, y_l, y_c in zip(model.state_dict()['0.weight'], model.state_dict()['0.bias'], y_label, y_color):
Y.append((w * X + b).numpy())
plt.plot(X.numpy(), (w * X + b).numpy(), y_c, label = y_l)
if color == True:
x = X.numpy()
x = x.reshape(-1)
top = np.ones(x.shape)
y0 = Y[0].reshape(-1)
y1 = Y[1].reshape(-1)
y2 = Y[2].reshape(-1)
            plt.fill_between(x, y0, where = y0 > y1, interpolate = True, color = 'blue')
plt.fill_between(x, y0, where = y1 > y2, interpolate = True, color = 'blue')
plt.fill_between(x, y1, where = y1 > y0, interpolate = True, color = 'red')
plt.fill_between(x, y1, where = ((y1 > y2) * (y1 > y0)),interpolate = True, color = 'red')
plt.fill_between(x, y2, where = (y2 > y0) * (y0 > 0),interpolate = True, color = 'green')
plt.fill_between(x, y2, where = (y2 > y1), interpolate = True, color = 'green')
plt.legend()
plt.show()
"""
Explanation: Use the helper function to plot labeled data points:
End of explanation
"""
#Set the random seed
torch.manual_seed(0)
"""
Explanation: Set the random seed:
End of explanation
"""
# Create the data class
class Data(Dataset):
# Constructor
def __init__(self):
self.x = torch.arange(-2, 2, 0.1).view(-1, 1)
self.y = torch.zeros(self.x.shape[0])
self.y[(self.x > -1.0)[:, 0] * (self.x < 1.0)[:, 0]] = 1
self.y[(self.x >= 1.0)[:, 0]] = 2
self.y = self.y.type(torch.LongTensor)
self.len = self.x.shape[0]
# Getter
def __getitem__(self,index):
return self.x[index], self.y[index]
# Get Length
def __len__(self):
return self.len
"""
Explanation: <!--Empty Space for separating topics-->
<h2 id="Makeup_Data">Make Some Data</h2>
Create some linearly separable data with three classes:
End of explanation
"""
# Create the dataset object and plot the dataset object
data_set = Data()
data_set.x
plot_data(data_set)
"""
Explanation: Create the dataset object:
End of explanation
"""
# Build Softmax Classifier; technically you only need a linear layer
model = nn.Sequential(nn.Linear(1, 3))
model.state_dict()
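# Added illustration (not part of the original lab): CrossEntropyLoss applies log-softmax
# internally, so the model only needs nn.Linear. To inspect explicit class probabilities
# for a few samples, softmax can be applied to the raw outputs directly:
print(torch.softmax(model(data_set.x[:3]), dim=1))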
"""
Explanation: <!--Empty Space for separating topics-->
<h2 id="Softmax">Build a Softmax Classifier </h2>
Build a Softmax classifier by using the Sequential module:
End of explanation
"""
# Create criterion function, optimizer, and dataloader
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)
trainloader = DataLoader(dataset = data_set, batch_size = 5)
"""
Explanation: <!--Empty Space for separating topics-->
<h2 id="Model">Train the Model</h2>
Create the criterion function, the optimizer and the dataloader
End of explanation
"""
# Train the model
LOSS = []
def train_model(epochs):
for epoch in range(epochs):
if epoch % 50 == 0:
pass
plot_data(data_set, model)
for x, y in trainloader:
optimizer.zero_grad()
yhat = model(x)
loss = criterion(yhat, y)
            LOSS.append(loss.item())  # store the loss as a plain Python float
loss.backward()
optimizer.step()
train_model(300)
"""
Explanation: Train the model; every 50 epochs, plot the lines generated for each class.
End of explanation
"""
# Make the prediction
z = model(data_set.x)
_, yhat = z.max(1)
print("The prediction:", yhat)
"""
Explanation: <!--Empty Space for separating topics-->
<h2 id="Result">Analyze Results</h2>
Find the predicted class on the test data:
End of explanation
"""
# Print the accuracy
correct = (data_set.y == yhat).sum().item()
accuracy = correct / len(data_set)
print("The accuracy: ", accuracy)
"""
Explanation: Calculate the accuracy on the test data:
End of explanation
"""
|
Kaggle/learntools | notebooks/data_viz_to_coder/raw/ex4.ipynb | apache-2.0 | import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
"""
Explanation: In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate scatter plots to understand patterns in the data.
Scenario
You work for a major candy producer, and your goal is to write a report that your company can use to guide the design of its next product. Soon after starting your research, you stumble across this very interesting dataset containing results from a fun survey to crowdsource favorite candies.
Setup
Run the next cell to import and configure the Python libraries that you need to complete the exercise.
End of explanation
"""
# Set up code checking
import os
if not os.path.exists("../input/candy.csv"):
os.symlink("../input/data-for-datavis/candy.csv", "../input/candy.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex4 import *
print("Setup Complete")
"""
Explanation: The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
End of explanation
"""
# Path of the file to read
candy_filepath = "../input/candy.csv"
# Fill in the line below to read the file into a variable candy_data
candy_data = ____
# Run the line below with no changes to check that you've loaded the data correctly
step_1.check()
#%%RM_IF(PROD)%%
candy_data = pd.read_csv(candy_filepath, index_col="id")
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
"""
Explanation: Step 1: Load the Data
Read the candy data file into candy_data. Use the "id" column to label the rows.
End of explanation
"""
# Print the first five rows of the data
____ # Your code here
"""
Explanation: Step 2: Review the data
Use a Python command to print the first five rows of the data.
End of explanation
"""
# Fill in the line below: Which candy was more popular with survey respondents:
# '3 Musketeers' or 'Almond Joy'? (Please enclose your answer in single quotes.)
more_popular = ____
# Fill in the line below: Which candy has higher sugar content: 'Air Heads'
# or 'Baby Ruth'? (Please enclose your answer in single quotes.)
more_sugar = ____
# Check your answers
step_2.check()
#%%RM_IF(PROD)%%
more_popular = '3 Musketeers'
more_sugar = 'Air Heads'
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
"""
Explanation: The dataset contains 83 rows, where each corresponds to a different candy bar. There are 13 columns:
- 'competitorname' contains the name of the candy bar.
- the next 9 columns (from 'chocolate' to 'pluribus') describe the candy. For instance, rows with chocolate candies have "Yes" in the 'chocolate' column (and candies without chocolate have "No" in the same column).
- 'sugarpercent' provides some indication of the amount of sugar, where higher values signify higher sugar content.
- 'pricepercent' shows the price per unit, relative to the other candies in the dataset.
- 'winpercent' is calculated from the survey results; higher values indicate that the candy was more popular with survey respondents.
Use the first five rows of the data to answer the questions below.
End of explanation
"""
# Scatter plot showing the relationship between 'sugarpercent' and 'winpercent'
____ # Your code here
# Check your answer
step_3.a.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_3.a.assert_check_passed()
#%%RM_IF(PROD)%%
sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_3.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.a.hint()
#_COMMENT_IF(PROD)_
step_3.a.solution_plot()
"""
Explanation: Step 3: The role of sugar
Do people tend to prefer candies with higher sugar content?
Part A
Create a scatter plot that shows the relationship between 'sugarpercent' (on the horizontal x-axis) and 'winpercent' (on the vertical y-axis). Don't add a regression line just yet -- you'll do that in the next step!
End of explanation
"""
#_COMMENT_IF(PROD)_
step_3.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_3.b.solution()
"""
Explanation: Part B
Does the scatter plot show a strong correlation between the two variables? If so, are candies with more sugar relatively more or less popular with the survey respondents?
End of explanation
"""
# Scatter plot w/ regression line showing the relationship between 'sugarpercent' and 'winpercent'
____ # Your code here
# Check your answer
step_4.a.check()
#%%RM_IF(PROD)%%
sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_4.a.assert_check_passed()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_4.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.a.hint()
#_COMMENT_IF(PROD)_
step_4.a.solution_plot()
"""
Explanation: Step 4: Take a closer look
Part A
Create the same scatter plot you created in Step 3, but now with a regression line!
End of explanation
"""
#_COMMENT_IF(PROD)_
step_4.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_4.b.solution()
"""
Explanation: Part B
According to the plot above, is there a slight correlation between 'winpercent' and 'sugarpercent'? What does this tell you about the candy that people tend to prefer?
End of explanation
"""
# Scatter plot showing the relationship between 'pricepercent', 'winpercent', and 'chocolate'
____ # Your code here
# Check your answer
step_5.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'], hue=candy_data['chocolate'])
step_5.assert_check_passed()
#%%RM_IF(PROD)%%
#sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'])
#step_5.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_5.hint()
#_COMMENT_IF(PROD)_
step_5.solution_plot()
"""
Explanation: Step 5: Chocolate!
In the code cell below, create a scatter plot to show the relationship between 'pricepercent' (on the horizontal x-axis) and 'winpercent' (on the vertical y-axis). Use the 'chocolate' column to color-code the points. Don't add any regression lines just yet -- you'll do that in the next step!
End of explanation
"""
# Color-coded scatter plot w/ regression lines
____ # Your code here
# Check your answer
step_6.a.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'])
step_6.a.assert_check_failed()
#%%RM_IF(PROD)%%
sns.lmplot(x="pricepercent", y="winpercent", hue="chocolate", data=candy_data)
step_6.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_6.a.hint()
#_COMMENT_IF(PROD)_
step_6.a.solution_plot()
"""
Explanation: Can you see any interesting patterns in the scatter plot? We'll investigate this plot further by adding regression lines in the next step!
Step 6: Investigate chocolate
Part A
Create the same scatter plot you created in Step 5, but now with two regression lines, corresponding to (1) chocolate candies and (2) candies without chocolate.
End of explanation
"""
#_COMMENT_IF(PROD)_
step_6.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_6.b.solution()
"""
Explanation: Part B
Using the regression lines, what conclusions can you draw about the effects of chocolate and price on candy popularity?
End of explanation
"""
# Scatter plot showing the relationship between 'chocolate' and 'winpercent'
____ # Your code here
# Check your answer
step_7.a.check()
#%%RM_IF(PROD)%%
sns.swarmplot(x=candy_data['chocolate'], y=candy_data['winpercent'])
step_7.a.assert_check_passed()
#%%RM_IF(PROD)%%
#sns.swarmplot(x=candy_data['chocolate'], y=candy_data['sugarpercent'])
#step_7.a.assert_check_failed()
#%%RM_IF(PROD)%%
#sns.swarmplot(x=candy_data['fruity'], y=candy_data['winpercent'])
#step_7.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_7.a.hint()
#_COMMENT_IF(PROD)_
step_7.a.solution_plot()
"""
Explanation: Step 7: Everybody loves chocolate.
Part A
Create a categorical scatter plot to highlight the relationship between 'chocolate' and 'winpercent'. Put 'chocolate' on the (horizontal) x-axis, and 'winpercent' on the (vertical) y-axis.
End of explanation
"""
#_COMMENT_IF(PROD)_
step_7.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_7.b.solution()
"""
Explanation: Part B
You decide to dedicate a section of your report to the fact that chocolate candies tend to be more popular than candies without chocolate. Which plot is more appropriate to tell this story: the plot from Step 6, or the plot from Step 7?
End of explanation
"""
|
PrincetonACM/princetonacm.github.io | events/code-at-night/archive/python_talk/intro_to_python.ipynb | mit | # When a line begins with a '#' character, it designates a comment. This means that it's not actually a line of code
# Can you print 'hello world', as is customary for those new to a language?
# Can you make Python print the staircase below:
#
# ========
# | |
# ===============
# | | |
# ======================
"""
Explanation: Python 3 Tutorial Notebook
We'll be using this notebook to follow the slides from the workshop. You can also use it to experiment with Python yourself! Simply add a cell wherever you want, type in some Python code, and see what happens!
Topic 1: Printing
1.1 Printing Basics
End of explanation
"""
# The print(...) function can accept as many arguments as you'd like. It prints the arguments
# in order, separated by a space. For example...
print('hello', 'world', 'i', 'go', 'to', 'princeton')
# You can change the delimiter that separates the comma-separated arguments by changing the 'sep' parameter:
print('hello', 'world', 'i', 'go', 'to', 'princeton', sep=' --> ')
# By default, python adds a newline to the end of any statement you print. However, you can change this
# by changing the 'end' parameter. For example...
# Here we've told Python to print 'hello world' and then an empty string. This effectively
# removes the newline that Python adds by default.
print('hello prince', end='')
# The next line that we print then begins immediately after the previous thing we printed.
print('ton')
# We can also end our lines with something more exotic
print('ACM is so cool', end=' :)')
"""
Explanation: 1.2 Some other useful printing tips/tricks
End of explanation
"""
# What does the following snippet of code do?
day = 24
month = 'September'
year = '2021'
dotw = 'Friday'
print(month, day - 7, year, 'was a', dotw)
age_in_weeks = 1057
# What's the difference between the two statements below? Comment one of them out to check yourself!
age_in_years = 1057 / 52
age_in_years = 1057 // 52
print(age_in_years)
# Can you explain to the person next to you why there's a difference?
# Try to calculate the following in your head and see if your answer matches what Python says.
# The order of operations in Python is similar to what you learned in grade school: evaluate parentheses,
# then exponents, then multiplication/division/modulo from left to right, then addition and subtraction
# from left to right.
mystery = 2 ** 4 * 3 ** 2 % 7 * (2 + 7)
print(mystery)
# Why is the answer what Python says it is? Explain the steps to the person next to you.
# Write a snippet that converts a given temperature in Fahrenheit to Celsius and Kelvin
# The relevant formulas are (degrees Celsius) = (degrees Fahrenheit - 32) * 5 / 9
# and (Kelvin) = (degrees Celsius + 273)
fahrenheit = 86 # Change the value here to test your solution
celsius = ... # CHANGE THIS LINE OF CODE
kelvin = ... # CHANGE THIS LINE OF CODE
print(fahrenheit, 'degrees fahrenheit =', celsius, 'degrees celsius =', kelvin, 'kelvin')
"""
Explanation: Topic 2: Variables, Basic Operators, and Data Types
2.1 Playing with Numbers
For a description of all the operators that exist in Python, you can visit https://www.tutorialspoint.com/python/python_basic_operators.htm.
End of explanation
"""
# You are given the following string:
a = 'Thomas Cruise'
# Your job is to put the phrase 'Tom Cruise is 9 outta 10' into variable b using ONLY operations on string a.
# You may not concatenate letters or strings of your own. HINT: You can use the str(...) function to convert
# numerical values into strings so that you can concatenate it with another string
b = ... # CHANGE THIS LINE OF CODE
print(b)
# Practice with string formatting with mad libs! For this, you'll need to know
# how to receive input. It's really easy in Python:
word_1 = input('Input first word:\n') # This prompts the user with the phrase 'Input first word'
# and stores the result in the variable word_1
word_2 = input('Input second word:\n')
word_3 = input('Input third word:\n')
word_4 = input('Input fourth word:\n')
# You want to print the following mad libs:
#
# Hi, my name is [first phrase].
# One thing that I love about Princeton is [second phrase].
# One pet peeve I have about Princeton is [third phrase], but I can get over it because I have [fourth phrase].
#
# For the last sentence, use one print statement to print it!
print('\nYour mad libs is: ')
# CHANGE THE FOLLOWING THREE LINES OF CODE. HINT: Use the format() function!
print(...)
print(...)
print(...)
"""
Explanation: 2.2 Playing with Strings
End of explanation
"""
# Your objective is to write a boolean formula in Python that takes three boolean variables (a, b, c)
# and returns True if and only if exactly one of them is True. This is called the xor of the variables
# Toggle these to test your formula
a = False
b = True
c = False
# Write your formula here
xor = ... # CHANGE THIS LINE OF CODE
print(xor)
"""
Explanation: 2.3 Playing with Booleans
End of explanation
"""
# In Python, data types are divided into two categories: truthy and falsy. Falsy values include anything
# (strings, lists, etc.) that is empty, the special value None, any zero number, and the boolean False.
# You can use the bool(...) function to check whether a value is truthy or falsy:
print('bool(3) =', bool(3))
print('bool(0) =', bool(0))
print('bool("") =', bool(''))
print('bool(" ") =', bool(' '))
print('bool(False) =', bool(False))
print('bool(True) =', bool(True))
"""
Explanation: 2.4 Mixing Types
End of explanation
"""
x = 5
# What is the difference between this snippet of code:
if x % 2 == 0:
print(x, 'is even')
if x % 5 == 0:
print(x, 'is divisible by 5')
if x > 0:
print(x, 'is positive')
print()
# And this one:
if x % 2 == 0:
print(x, 'is even')
elif x % 5 == 0:
print(x, 'is divisible by 5')
elif x > 0:
print(x, 'is positive')
# Explain to the person next to you what the difference is.
# FizzBuzz is a very well-known programming challenge. It's quite easy, but it can trip up people
# who are trying to look for shortcuts to solving the problem. The problem is as follows:
#
# For every number k in order from 1 to 50, print
# - 'FizzBuzz' if the number is divisible by 3 and 5
# - 'Fizz' if the number is only divisible by 3
# - 'Buzz' if the number is only divisble by 5
# - the value of k if none of the above options hold
#
# Your task is to write a snippet of code that solves FizzBuzz.
"""
Explanation: Topic 3: If Statements, Ranges, and Loops
3.1 Practice with the Basics
End of explanation
"""
# The following if statement construct is so common it has a name ('ternary statement'):
#
# if (condition):
# x = something1
# elif (condition2):
# x = something2
# else:
# x = something3
#
# In python, this can be shortened into a one-liner:
#
# x = something1 if (condition) else something2 if (condition2) else something3
#
# And this works for an arbitrary number of elif statements in between the initial if and final else.
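# Tiny illustration (added example, separate from the exercise below):
parity = 'even' if 10 % 2 == 0 else 'odd'
print(parity)   # even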
# Can you convert the following block into a one-liner?
budget = 3
if budget > 50:
restaurant = 'Agricola'
elif budget > 30:
restaurant = 'Mediterra'
elif budget > 15:
restaurant = 'Thai Village'
else:
    restaurant = 'Wawa'
# Write your solution below:
restaurant = ... # CHANGE THIS LINE OF CODE
print(restaurant)
"""
Explanation: 3.2 Ternary Statements in Python
End of explanation
"""
# Your job is to create a 'guessing game' where the program thinks of an integer from 1 to 50
# and will keep prompting you for a guess. It'll tell you each time whether your guess is
# too high or too low until you find the number.
# Don't touch these two lines of code; they choose a random number between 1 and 50
# and store it in mystery_num
from random import randint
mystery_num = randint(1, 50)
# Write your guessing game below:
guess = int(input('Guess a number:\n')) # First guess; don't forget to convert it to an int!
# WRITE THE REST OF THE CODE FOR THE GUESSING GAME HERE
# Follow-up: Using the best strategy, what's the worst-case number of guesses you should need?
"""
Explanation: 3.3 Practice with While Loops
End of explanation
"""
# When at the top of a loop, the 'in' keyword in Python will iterate through all of the sequence's
# members in order. For strings, members are individual characters; for lists and tuples, they're
# the items contained.
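# Tiny illustration (added example): looping over a string and testing membership
for ch in 'dog':
    print(ch, ch in 'aeiou')   # d False, then o True, then g False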
# Task: Given a list of lowercase words, print whether the word has a vowel. Example: if the input is
# ['rhythm', 'owl', 'hymn', 'aardvark'], you should output the following:
# rhythm has no vowels
# owl has a vowel
# hymn has no vowels
# aardvark has a vowel
# HINT: The 'in' keyword can also test whether something is a member of another object.
# Also, don't forget about break and continue!
vowels = ['a', 'e', 'i', 'o', 'u']
words = ['rhythm', 'owl', 'hymn', 'aardvark']
# WRITE YOUR CODE HERE
# Given a tuple, write a program to check if the value at index i is equal to the square of i.
# Example: If the input is nums = (0, 2, 4, 6, 8), then the desired output is
#
# True
# False
# True
# False
# False
#
# Because nums[0] = 0^2 and nums[2] = 4 = 2^2. HINT: Use enumerate!
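# enumerate refresher (illustrative example): pairs each index with its element
for i, val in enumerate(('a', 'b', 'c')):
    print(i, val)   # prints 0 a, then 1 b, then 2 c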
nums = (0, 2, 4, 6, 8)
# WRITE YOUR CODE HERE
"""
Explanation: Topic 4: Data Structures in Python
4.1 Sequences
Strings, tuples, and lists are all considered sequences in Python, which is why there are many operations that work on all three of them.
4.1.1 Iterating
End of explanation
"""
# Slicing is one of the operations that work on all of them.
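# Quick slicing refresher (illustrative examples): seq[start:stop:step]
print('abcdef'[1:4])      # 'bcd'
print((0, 1, 2, 3)[::2])  # (0, 2)
print([5, 6, 7][::-1])    # [7, 6, 5]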
# Task 1: Given a string s whose length is odd and at least 5, can you print
# the middle three characters of it? Try to do it in one line.
# Example: if the input is 'PrInCeToN', the the output should be 'nCe'
s = 'PrInCeToN'
# WRITE YOUR CODE HERE
# Task 2: Given a tuple, return a tuple that includes only every other element, starting
# from the first. Example: if the input is (4, 5, 'cow', True, 9.4), then the output should
# be (4, 'cow', 9.4). Again, try to do it in one line — there's an easy way to do it with slicing.
t = (4, 5, 'cow', True, 9.4)
# WRITE YOUR CODE HERE
# Task 3: Do the same as task 2, except start from the last element and alternate backwards.
# Example: if the input is (3, 9, 1, 0, True, 'Tiger'), output should be ('Tiger', 0, 9)
t = (3, 9, 1, 0, True, 'Tiger')
# WRITE YOUR CODE HERE
"""
Explanation: 4.1.2 Slicing
End of explanation
"""
# Task 1: Given a list of names, return a new list where all the names which are more than 15
# characters long are removed.
names = ['Nalin Ranjan', 'Howard Yen', 'Sacheth Sathyanarayanan', 'Henry Tang', \
'Austen Mazenko', 'Michael Tang', 'Dangely Canabal', 'Vicky Feng']
# WRITE YOUR CODE HERE
# Task 2: Given a list of strings, return a list which is the reverse of the original, with
# all the strings reversed. Example: if the input is ['Its', 'nine', 'o-clock', 'on a', 'Saturday'],
# then the output should be ['yadrutaS', 'a no', 'kcolc-o', 'enin', 'stI']. Try to do it in one line!
# HINT: Use list comprehension and negative indices!
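# List comprehension refresher (illustrative example): build a new list in one line
squares = [i * i for i in range(5)]               # [0, 1, 4, 9, 16]
evens = [n for n in [5, 2, 6, 1] if n % 2 == 0]   # [2, 6]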
l = ['Its', 'nine', 'o-clock', 'on a', 'Saturday']
# WRITE YOUR CODE HERE
l1 = [5, 2, 6, 1, 8, 2, 4]
l2 = [6, 1, 2, 4]
# Python has a bunch of useful built-in list functions. Some of them are
l1.append(3) # adds the element 3 to the end of the the list
print(l1)
l1.insert(1, 7) # adds the element 7 as the second element of the list
print(l1)
l1.remove(2) # Removes the first occurrence of 2 in the list (DOES NOT REMOVE ALL)
print(l1)
l1.pop(4) # Remove the fifth item of the list (since everything is zero-indexed)
print(l1)
l1.sort() # Sorts the list in increasing order
print(l1)
l1.sort(reverse=True) # Sorts the list in decreasing order
print(l1)
print(l1.count(2)) # Counts the number of occurrences of the number 2 in the list
l1.extend(l2) # Appends all elements in l2 to the end of l1
print(l1)
# If the list is numeric, we can find the min, max, and sum easily:
print('Sum:', sum(l1))
print('Minimum:', min(l1))
print('Maximum:', max(l1))
# You can see all the list methods at https://www.w3schools.com/python/python_ref_list.asp
"""
Explanation: 4.2 List Comprehension and Other Useful List Functions
End of explanation
"""
# Task 1: In a dictionary, keys must be unique, but values need not be. Given a dictionary, write a script
# that prints the set of all unique values in a dictionary. Example: if the dictionary is
# {'Cap': 'bicker', 'Quad': 'sign-in', 'Colonial': 'sign-in', 'Tower': 'bicker', 'Charter': '???'}
# The program should print {'sign-in', 'bicker', '???'}
d = {'Cap': 'bicker', 'Quad': 'sign-in', 'Colonial': 'sign-in', 'Tower': 'bicker', 'Charter': '???'}
# WRITE YOUR CODE HERE
# Task 2: Given a passage of text (a string), analyze the frequency of each individual letter. Sort
# the letters by their frequency in the passage. Does your distribution look reasonable for English?
passage = """it was the best of times, it was the worst of times, it was the age of wisdom, it was
the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was
the season of Light, it was the season of Darkness, it was the spring of hope, it was the
winter of despair, we had everything before us, we had nothing before us, we were all going
direct to Heaven, we were all going direct the other way -- in short, the period was so far
like the present period that some of its noisiest authorities insisted on its being received,
for good or for evil, in the superlative degree of comparison only"""
# Here's the alphabet to help you out; it'll help you ignore other characters
alphabet = "abcdefghijklmnopqrstuvwxyz"
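# Generic counting pattern for reference (toy string, separate from the passage task):
toy_counts = {}
for ch in 'banana':
    toy_counts[ch] = toy_counts.get(ch, 0) + 1
print(toy_counts)   # {'b': 1, 'a': 3, 'n': 2}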
# WRITE YOUR CODE TO COLLECT THE DICTIONARY OF LETTER FREQUENCIES HERE (store it in a dictionary named d)
# Don't change the code below: it'll take your dictionary of frequencies and sort it from most frequent to least
freqs = [(letter, d[letter]) for letter in d]
freqs.sort(key = lambda x: x[1], reverse=True)
print(freqs)
"""
Explanation: 4.3 Sets and Dictionaries
End of explanation
"""
# Task 1: Write a function that returns the minimum of three numbers. Don't use the built-in min function
def my_min(a, b, c):
# FILL IN THE FUNCTION DETAILS HERE (also delete the 'pass' keyword)
pass
# Test Cases
print('Minimum of 6, 3, 7 is', my_min(6, 3, 7))
print('Minimum of 0, 3.333, -52 is', my_min(0, -52, 3.333))
print('Minimum of -3, -1, 3.14159 is', my_min(-3, -1, 3.14159))
# Task 2: Write a function that checks if a given tuple of numbers is increasing (that is, each number
# is at least the number before it)
def my_increasing(t):
# FILL IN THE FUNCTION DETAILS HERE (also delete the 'pass' keyword)
pass
# Test Cases
print('(1, 2, 3, 4, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 4, 5, 7, 8)))
print('(1, 2, 3, 2, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 2, 5, 7, 8)))
print('(-1, 2, 3, 2.99, 5, 7, 8) is increasing:', my_increasing((-1, 2, 3, 2.99, 5, 7, 8)))
# Task 3: Given a list of numbers that is guaranteed to contain all but one of the consecutive integers
# 1 to N (for some N), find the one that is missing. For example, if the input is [2, 1, 5, 4], your function
# should return 3, because that's the number missing from 1-5.
def my_missing(l):
# FILL IN THE FUNCTION DETAILS HERE (also delete the 'pass' keyword)
pass
print(my_missing([2, 1, 5, 4]))
print(my_missing([3, 4, 6, 2, 5, 7, 9, 8]))
"""
Explanation: Topic 5: Functions in Python
5.1 Practice with Basic Functions
End of explanation
"""
# Task: The sequence of Fibonacci numbers starts with the numbers 1 and 1 and every subsequent term
# is the sum of the previous two terms. So the sequence is 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
# Can you write a simple recursive function that calculates the nth Fibonacci number?
# WARNING: Don't call your function for anything more than 35 or pass a non-integer parameter.
# Your notebook might crash if you do.
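# A different recursive function for reference (factorial), showing the base-case/recursive-case shape:
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)
print(factorial(5))   # 120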
# WRITE YOUR CODE HERE
print(fib(35)) # Should be 9227465
"""
Explanation: 5.2 Recursion
End of explanation
"""
# Part of the reason that we told you not to run your answer for 5.2 for large n is because the number of
# function calls generated grows exponentially: for n = 35, the naive recursion makes roughly 18 million
# calls, which is a lot, even for a computer! If you did n = 75, the number of calls you would make is on
# the order of 4 quadrillion, which would take years to finish even on a fast machine.
# You can avert this issue, however, if you **memoize** your function, which is just a fancy way of saying
# that you can remember values of your function instead of having to re-evaluate your function again. Python
# has a handy memoization tool:
from functools import lru_cache
@lru_cache
def fib(n):
    # COPY THE CODE FROM THE PREVIOUS PROBLEM HERE (also delete the 'pass' keyword)
    pass
print(fib(100)) # Works no problem! Should be 354224848179261915075
# All we had to do was add the import statement and 'decorate' the function we wanted to remember
# values from with the line @lru_cache
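# For illustration only, here is a hand-rolled version of the same idea (a sketch, separate from
# the exercise): cache already-computed values in a dictionary so each fib(k) is computed once.
def fib_memo(n, cache={1: 1, 2: 1}):
    # The mutable default dict is used only to keep this sketch short.
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]
print(fib_memo(100))  # Also prints 354224848179261915075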
"""
Explanation: 5.3 Memoization
End of explanation
"""
# Write a PrincetonStudent class, where a PrincetonStudent has a name, major, PUID, year,
# set of clubs, and a preference ordering of dining halls. We want to have
#
# - a default constructor that initializes the PrincetonStudent with a name, major, PUID, year, no clubs,
# and a random preference ordering of dining halls
# - a special constructor (class method) called detailed_student that initializes a PrincetonStudent
# with a name, major, PUID, year,
# a specific set of clubs, and a particular preference ordering of dining halls
# - a __str__() method that prints all the data of the student
# - a move_dhall_to_top() function that takes a dhall and moves it to the top
# of one's dining hall preference list
# - a __lt__() method that returns true if and only if this student has a name that comes before
# the other's alphabetically
# - an __eq__() method that returns true if and only if the PUIDs of students are equal
# HINT: To generate a random dining hall preference order, you can take a particular preference order
# and shuffle it using the random.shuffle(list) function
from random import shuffle
class PrincetonStudent():
    # WRITE THE DETAILS OF YOUR CLASS HERE (also delete the 'pass' keyword)
pass
# Test your PrincetonStudent class using this test suite. Feel free to write your own too!
nalin = PrincetonStudent('Nalin Ranjan', 'COS', '123456789', 2022)
print(nalin, end="\n\n")
nalin.clubs.extend(['ACM', 'Taekwondo', 'Princeton Legal Journal', 'Badminton'])
print(nalin, end="\n\n")
sacheth_clubs = ['ACM', 'Table Tennis']
sacheth_prefs = ['WuCox', 'Whitman', 'Forbes', 'RoMa', 'CJL']
sacheth = PrincetonStudent.detailed_student('Sacheth Sathyanarayanan', 'COS', \
'24681012', 2022, sacheth_clubs, sacheth_prefs)
print(sacheth)
print('Sacheth had a great meal at Whitman! It is now his favorite.\n')
sacheth.move_dhall_to_top('Whitman')
print(sacheth)
print('Sacheth is the same student as Nalin:', sacheth == nalin)
print('Sacheth\'s name comes before Nalin\'s:', sacheth < nalin)
"""
Explanation: Topic 6: Classes in Python
6.1 Practice Writing Basic Classes
End of explanation
"""
# Write an ACMOfficer class that inherits the PrincetonStudent class. An ACMOfficer has every attribute
# a PrincetonStudent has, and also a position and term expiration date. You'll only need to overwrite
# the constructors to accommodate these two additions. Remember that you can still call the parent's
# functions as subroutines.
class ACMOfficer(PrincetonStudent):
    # WRITE THE DETAILS OF YOUR CLASS HERE (also delete the 'pass' keyword)
pass
# Test your ACMOfficer class using this test suite. Feel free to write your own too!
nalin = ACMOfficer('Nalin Ranjan', 'COS', '123456789', 2022, 'Chair', 2022)
print(nalin, end="\n\n")
nalin.clubs.extend(['ACM', 'Taekwondo', 'Princeton Legal Journal', 'Badminton'])
print(nalin, end="\n\n")
sacheth_clubs = ['ACM', 'Table Tennis']
sacheth_prefs = ['WuCox', 'Whitman', 'Forbes', 'RoMa', 'CJL']
sacheth = ACMOfficer.detailed_officer('Sacheth Sathyanarayanan', 'COS', '24681012',
2022, sacheth_clubs, sacheth_prefs, 'Treasurer', 2022)
print(sacheth)
print('Sacheth had a great meal at Whitman! It is now his favorite.\n')
sacheth.move_dhall_to_top('Whitman')
print(sacheth)
print('Sacheth is the same student as Nalin:', sacheth == nalin)
print('Sacheth\'s name comes before Nalin\'s:', sacheth < nalin)
"""
Explanation: 6.2 Inheritance
End of explanation
"""
# Numpy contains many mathematical functions/data analysis tools you might want to use
import numpy as np
# First: Write a function that returns the Gudermannian function evaluated at x for a given gamma.
def gudermannian(x, gamma):
    # PUT YOUR GUDERMANNIAN FUNCTION IMPLEMENTATION HERE (also delete the 'pass' keyword)
pass
# Next: use matplotlib to plot the function. HINT: Use matplotlib.pyplot
from matplotlib import pyplot as plt # You'll refer to pyplot as plt from now on
# HINT: pyplot requires that you have a set of x-values and a corresponding set of y-values.
# To make your plot look like a continuous curve, just make your x-values close enough (say in
# increments of 0.01).
x_vals = ... # CHANGE THIS LINE OF CODE. You'll have to use numpy's arange function (Google it!)
# Then, you'll have to make a set of y values for each gamma. HINT: If f(x) is a function
# defined on a single number, then running it on x_vals evaluates the function at every x value
# in x_vals.
# EDIT THE FOLLOWING THREE LINES OF CODE (You may or may not want to add some of your own too.)
plt.plot(...)
plt.plot(...)
plt.plot(...)
# If you're done early, can you add a legend to your graph?
"""
Explanation: Topic 7: Using Existing Python Libraries
Sigmoid activation functions are ubiquitous in machine learning. They all look somewhat like an S shape, starting
out flat, and then somewhere in the middle jumping pretty quickly before leveling off. One example is the Gudermannian Function, which takes the form
$$f(x, \gamma) = \gamma \arctan \left(\tanh \left( \frac x \gamma \right) \right)$$
for some value $\gamma$. You can think of $\gamma$ as a parameter that specifies "which" Gudermannian function we're talking about. Can you plot the Gudermannian Function in the range $[-5, 5]$ with $\gamma = {2, 4, 6}$? You will need access to numpy to find implementations of the arctan and tanh functions, and you will need matplotlib to create the actual plot.
HINT: Since we have three different values of $\gamma$, we'll have three different curves on the same graph.
End of explanation
"""
|
Nixonite/Handwritten-Digit-Classification-Using-SVD | SVD Classification of Handwritten Digits.ipynb | gpl-2.0 | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv("train.csv")
data.head()
"""
Explanation: For this project, I will attempt to classify handwritten digits using SVD's left-singular vectors as fundamental subspaces for each digit.
The approach will be to generate one distinctive subspace of orthogonal vectors - one U matrix for each class/digit - using Singular Value Decomposition. After creating these matrices, a classifier can be constructed to assign any new unknown image into a class (0, 1, 2, ... ,9).
In the first code block, I'm just importing some data for images. The dataset contains one column for each pixel (28x28 images, so 784 column vectors), plus one column for the label (as we can see in the front of the dataframe below).
End of explanation
"""
print data.iloc[1][0]
plt.imshow(data.iloc[1][1:].reshape(28,28),cmap='Greys')
plt.show()
print data.iloc[28][0]
plt.imshow(data.iloc[28][1:].reshape(28,28),cmap='Greys')
plt.show()
np.unique(data['label'],return_counts=True)
"""
Explanation: So that's what the dataset looks like: each pixel is a column, and each row is an image in the dataset. Actually I have a corresponding 'test' dataset but I'll just separate this one into train/test later on since it's big enough.
All values are grayscale 0-255. Here are some images below. I just reshaped each image vector into 28x28 for showing.
End of explanation
"""
def getTrainTest(digit):
digit_train = digit[0:int(len(digit)*.8)]
digit_train = digit_train[digit_train.columns[1:]].T
digit_test = digit[int(len(digit)*.8):]
digit_test = digit_test[digit_test.columns[1:]]
return (digit_train,digit_test)
zero = data[data['label']==0]
zero_train,zero_test = getTrainTest(zero)
one = data[data['label']==1]
one_train,one_test = getTrainTest(one)
two = data[data['label']==2]
two_train,two_test = getTrainTest(two)
three = data[data['label']==3]
three_train,three_test = getTrainTest(three)
four = data[data['label']==4]
four_train,four_test = getTrainTest(four)
zero_u,e,v = np.linalg.svd(zero_train,full_matrices=False)
one_u,e,v = np.linalg.svd(one_train,full_matrices=False)
two_u,e,v = np.linalg.svd(two_train,full_matrices=False)
three_u,e,v = np.linalg.svd(three_train,full_matrices=False)
four_u,e,v = np.linalg.svd(four_train,full_matrices=False)
#Regarding full_matrices = False
#If True, U and Vh are of shape (M,M), (N,N).
#If False, the shapes are (M,K) and (K,N), where K = min(M,N).
print zero_u.shape
print e.shape
print v.shape
"""
Explanation: In the below bit of code, I made a function to separate the dataset into just 5 classes for demonstration purposes (only 0, 1, 2, 3, and 4). I also separated each of these digit datasets into a train and test dataset by assigning the first 80% of the dataset to train, and the rest for test. This is for each digit dataset.
I then compute the SVD for each train dataset so that I will have zero_u, one_u, ... , four_u. These will be the left singular vectors which I will use later for classification purposes.
End of explanation
"""
for i in range(4):
    plt.imshow(three_u[:,i].reshape(28,28),cmap='Greys')  # first 4 columns of U
plt.show()
for i in range(4):
    plt.imshow(zero_u[:,i].reshape(28,28),cmap='Greys')  # first 4 columns of U
plt.show()
"""
Explanation: Seeing the singular vectors reconstructed is pretty cool. Especially the '0' one. See below for '3' and '0'.
End of explanation
"""
def classifyUnknownDigit(newDigit):
classes = [zero_u,one_u,two_u,three_u,four_u]
values = []
for U in classes:
values.append(np.linalg.norm((np.identity(len(U))-np.matrix(U)*np.matrix(U.T)).dot(newDigit),ord=2)/np.linalg.norm(newDigit,ord=2))
return values.index(min(values))
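# Implementation note (a sketch under the same assumptions as classifyUnknownDigit above): forming
# the 784x784 matrix I - U*U^T for every class is expensive. Because each U has orthonormal
# columns, the same residual can be computed with only matrix-vector products as z - U(U^T z):
def residual(U, z):
    z = np.asarray(z, dtype=float)
    r = z - U.dot(U.T.dot(z))  # projection of z onto the orthogonal complement of col(U)
    return np.linalg.norm(r) / np.linalg.norm(z)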
zero_pred = []
one_pred = []
two_pred = []
three_pred = []
four_pred = []
for i in range(len(four_test)):
four_pred.append(classifyUnknownDigit(four_test.iloc[i]))
for i in range(len(zero_test)):
zero_pred.append(classifyUnknownDigit(zero_test.iloc[i]))
for i in range(len(two_test)):
two_pred.append(classifyUnknownDigit(two_test.iloc[i]))
for i in range(len(one_test)):
one_pred.append(classifyUnknownDigit(one_test.iloc[i]))
for i in range(len(three_test)):
three_pred.append(classifyUnknownDigit(three_test.iloc[i]))
print "Accuracy"
print "------------"
print "0: ", zero_pred.count(0)/1.0/len(zero_pred) #count the number of 0's, divide by length of list to get accuracy.
print "1: ", one_pred.count(1)/1.0/len(one_pred)
print "2: ", two_pred.count(2)/1.0/len(two_pred)
print "3: ", three_pred.count(3)/1.0/len(three_pred)
"""
Explanation: Below I have written a classification function and I just went through each of the test datasets to classify them as a digit. I used the U matrices of each of the digits (zero_u, one_u, etc.) as the 'classifier' for each class, and I iterated through them to get their resulting output. The minimum of the output is the class which is assigned by this function.
The function utilizes the residual given by $\lVert z - U\alpha \rVert _2$, and to minimize this we just take $\alpha = U^Tz$ (since the columns of $U$ are orthonormal). The idea behind it is that a linear combination of singular vectors approximates images of the digit they were computed from better than images of any other digit, so an unknown '0' image should have the smallest residual with respect to the '0' $U$ matrix.
End of explanation
"""
print "4: ", four_pred.count(4)/1.0/len(four_pred)
"""
Explanation: Woops forgot to print the 4 predictions. The above computation takes a few minutes and really beats up the CPU.
End of explanation
"""
np.unique(zero_pred,return_counts=True)
"""
Explanation: It's interesting to see how some digits are classified well like 1, 2, and 3, whereas the 0 and 4 are not as great. Still, 95% is pretty bad for real-world applications, especially for something as important as sending mail to the right people. The article talks about just letting humans classify the digits manually if it's not completely certain about a prediction (if there is no large distinction between values among the classifier matrices).
The np.unique call below shows the classes and the number of times each class was selected for the zero_pred list. We can see that 0 was classified correctly 374 times, and misclassified most often as a 3, then as a 2, and almost never as a 1 or 4. I guess the curves of the 2 and the 3 might have to do with this, since 0's are curvy as well.
End of explanation
"""
np.unique(four_pred,return_counts=True)
"""
Explanation: Similarly, doing this for the four classifier, we see that it's pretty good, but it confuses 4's for 3's more than for 1's, 2's, and 0's. I guess the middle line of the '3' digit might be contributing to some confusion.
End of explanation
"""
|
noppanit/machine-learning | parking-signs-nyc/.ipynb_checkpoints/Parking Signs-checkpoint.ipynb | mit | row = 'NO PARKING (SANITATION BROOM SYMBOL) 7AM-7:30AM EXCEPT SUNDAY'
assert from_time(row) == '07:00AM'
assert to_time(row) == '07:30AM'
special_case1 = 'NO PARKING (SANITATION BROOM SYMBOL) 11:30AM TO 1PM THURS'
assert from_time(special_case1) == '11:30AM'
assert to_time(special_case1) == '01:00PM'
special_case2 = 'NO PARKING (SANITATION BROOM SYMBOL) MOON & STARS (SYMBOLS) TUESDAY FRIDAY MIDNIGHT-3AM'
assert from_time(special_case2) == '12:00AM'
assert to_time(special_case2) == '03:00AM'
special_case3 = 'TRUCK (SYMBOL) TRUCK LOADING ONLY MONDAY-FRIDAY NOON-2PM'
assert from_time(special_case3) == '12:00PM'
assert to_time(special_case3) == '02:00PM'
special_case4 = 'NIGHT REGULATION (MOON & STARS SYMBOLS) NO PARKING (SANITATION BROOM SYMBOL) MIDNIGHT TO-3AM WED & SAT'
assert from_time(special_case4) == '12:00AM'
assert to_time(special_case4) == '03:00AM'
special_case5 = 'NO PARKING (SANITATION BROOM SYMBOL)8AM 11AM TUES & THURS'
assert from_time(special_case5) == '08:00AM'
assert to_time(special_case5) == '11:00AM'
special_case6 = 'NO PARKING (SANITATION BROOM SYMBOL) MONDAY THURSDAY 7AMM-7:30AM'
assert from_time(special_case6) == '07:00AM'
assert to_time(special_case6) == '07:30AM'
def filter_from_time(row):
if not pd.isnull(row['SIGNDESC1']):
        return from_time(row['SIGNDESC1'])
return np.nan
def filter_to_time(row):
if not pd.isnull(row['SIGNDESC1']):
return to_time(row['SIGNDESC1'])
return np.nan
data['FROM_TIME'] = data.apply(filter_from_time, axis=1)
data['TO_TIME'] = data.apply(filter_to_time, axis=1)
data[['SIGNDESC1', 'FROM_TIME', 'TO_TIME']].head(10)
"""
Explanation: Special Cases
assert extract_time('1 HR MUNI-METER PARKING 10AM-7PM MON THRU FRI 8AM-7PM SATURDAY W/ SINGLE ARROW') == ''
NO PARKING (SANITATION BROOM SYMBOL) 11:30AM TO 1 PM FRIW/ SINGLE ARROW
check if 2 timings is the maximum amount
End of explanation
"""
rows_with_AM_PM_but_time_NaN = data[(data['FROM_TIME'].isnull() | data['FROM_TIME'].isnull()) & (data['SIGNDESC1'].str.contains('[0-9]+(?:[AP]M)'))]
len(rows_with_AM_PM_but_time_NaN)
rows_with_AM_PM_but_time_NaN[['SIGNDESC1', 'FROM_TIME', 'TO_TIME']]
data.iloc[180670, data.columns.get_loc('SIGNDESC1')]
data.iloc[180670, data.columns.get_loc('FROM_TIME')] = '9AM'
data.iloc[180670, data.columns.get_loc('TO_TIME')] = '4AM'
data.iloc[212089, data.columns.get_loc('SIGNDESC1')]
data.iloc[212089, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[212089, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[258938, data.columns.get_loc('SIGNDESC1')]
data.iloc[258938, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[258938, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[258942, data.columns.get_loc('SIGNDESC1')]
data.iloc[258942, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[258942, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[258944, data.columns.get_loc('SIGNDESC1')]
data.iloc[258944, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[258944, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[283262, data.columns.get_loc('SIGNDESC1')]
data.iloc[283262, data.columns.get_loc('FROM_TIME')] = '6AM'
data.iloc[283262, data.columns.get_loc('TO_TIME')] = '7:30AM'
"""
Explanation: Find out if any rows have NaN
We want to find out if any rows have NaN in FROM_TIME or TO_TIME but still have a timing in SIGNDESC1.
End of explanation
"""
rows_with_AM_PM_but_time_NaN = data[(data['FROM_TIME'].isnull() | data['FROM_TIME'].isnull()) & (data['SIGNDESC1'].str.contains('[0-9]+(?:[AP]M)'))]
len(rows_with_AM_PM_but_time_NaN)
data[['SIGNDESC1', 'FROM_TIME', 'TO_TIME']]
"""
Explanation: Confirm that every row has from_time and to_time
End of explanation
"""
data['SIGNDESC1'].head(20)
#https://regex101.com/r/fO4zL8/3
regex_to_extract_days_idv_days = r'\b((?:(?:MON|MONDAY|TUES|TUESDAY|WED|WEDNESDAY|THURS|THURSDAY|FRI|FRIDAY|SAT|SATURDAY|SUN|SUNDAY)\s*)+)(?=\s|$)'
regex_to_extract_days_with_range = r'(MON|TUES|WED|THURS|FRI|SAT|SUN)\s(THRU|\&)\s(MON|TUES|WED|THURS|FRI|SAT|SUN)'
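# Quick illustration of what the range regex captures (a throwaway example, assuming `re` was
# imported in an earlier cell, as extract_day below also relies on):
# group(1) is the start day, group(2) the connector, group(3) the end day.
_m = re.search(regex_to_extract_days_with_range, 'NO PARKING MON THRU FRI 8AM-6PM')
print(_m.groups())  # ('MON', 'THRU', 'FRI')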
def extract_day(signdesc):
days = ['MON', 'TUES', 'WED', 'THURS', 'FRI', 'SAT', 'SUN']
p_idv_days = re.compile(regex_to_extract_days_idv_days)
m_idv_days = p_idv_days.search(signdesc)
p_range_days = re.compile(regex_to_extract_days_with_range)
m_range_days = p_range_days.search(signdesc)
if 'EXCEPT SUN' in signdesc:
return ', '.join(days[:6])
if 'INCLUDING SUNDAY' in signdesc:
return ', '.join(days)
if 'FRIW/' in signdesc:
return ', '.join(['FRI'])
if ('THRU' in signdesc) and m_range_days:
from_day = m_range_days.group(1)
to_day = m_range_days.group(3)
idx_frm_d = days.index(from_day)
idx_to_d = days.index(to_day)
return ', '.join([days[n] for n in range(idx_frm_d, idx_to_d + 1)])
if ('&' in signdesc) and m_range_days:
from_day = m_range_days.group(1)
to_day = m_range_days.group(3)
return ', '.join([from_day, to_day])
if m_idv_days:
days = m_idv_days.group(1)
d = []
for day in days.split(' '):
if len(day) > 3:
                if day in ['MONDAY', 'WEDNESDAY', 'FRIDAY', 'SATURDAY', 'SUNDAY']:
                    d.append(day[:3])
                elif day == 'TUESDAY':
                    d.append(day[:4])
                elif day == 'THURSDAY':
                    d.append(day[:5])
            else:
                d.append(day)
return ', '.join(d)
return np.nan
def filter_days(row):
if not pd.isnull(row['SIGNDESC1']):
return extract_day(row['SIGNDESC1'])
return np.nan
assert extract_day('NO STANDING 11AM-7AM MON SAT') == "MON, SAT"
assert extract_day('NO STANDING MON FRI 7AM-9AM') == "MON, FRI"
assert extract_day('2 HOUR PARKING 9AM-5PM MON THRU SAT') == "MON, TUES, WED, THURS, FRI, SAT"
assert extract_day('1 HOUR PARKING 8AM-7PM EXCEPT SUNDAY') == "MON, TUES, WED, THURS, FRI, SAT"
assert extract_day('NO PARKING 10PM-8AM INCLUDING SUNDAY') == "MON, TUES, WED, THURS, FRI, SAT, SUN"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) MONDAY THURSDAY 9:30AM-11AM') == "MON, THURS"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) 11:30AM TO 1 PM FRIW/ SINGLE ARROW') == "FRI"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) 8-9:30AM TUES & FRI') == "TUES, FRI"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) TUESDAY FRIDAY 11AM-12:30PM') == "TUES, FRI"
data['DAYS'] = data.apply(filter_days, axis=1)
rows_with_days_but_DAYS_NAN = data[data['DAYS'].isnull() & data['SIGNDESC1'].str.contains('\sMON|\sTUES|\sWED|\sTHURS|\sFRI|\sSAT|\sSUN')]
rows_with_days_but_DAYS_NAN[['SIGNDESC1', 'DAYS']]
data.iloc[308838, data.columns.get_loc('SIGNDESC1')]
data.head()
"""
Explanation: Day of the week
End of explanation
"""
data.to_csv('Processed_Signs.csv', index=False)
"""
Explanation: Save to CSV
End of explanation
"""
|
nafitzgerald/allennlp | tutorials/notebooks/data_pipeline.ipynb | apache-2.0 | # This cell just makes sure the library paths are correct.
# You need to run this cell before you run the rest of this
# tutorial, but you can ignore the contents!
import os
import sys
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
"""
Explanation: Allennlp uses a hierarchical system of data structures to represent a Dataset, which allows easy padding, batching and iteration. This tutorial will cover some of the basic concepts.
At a high level, we use DatasetReaders to read a particular dataset into a Dataset of self-contained individual Instances,
which are made up of a dictionary of named Fields. There are many types of Fields which are useful for different types of data, such as TextField, for sentences, or LabelField for representing a categorical class label. Users who are familiar with the torchtext library from Pytorch will find a similar abstraction here.
End of explanation
"""
from allennlp.data import Token
from allennlp.data.fields import TextField, LabelField
from allennlp.data.token_indexers import SingleIdTokenIndexer
review = TextField(list(map(Token, ["This", "movie", "was", "awful", "!"])), token_indexers={"tokens": SingleIdTokenIndexer()})
review_sentiment = LabelField("negative", label_namespace="tags")
# Access the original strings and labels using the methods on the Fields.
print("Tokens in TextField: ", review.tokens)
print("Label of LabelField", review_sentiment.label)
"""
Explanation: Let's create two of the most common Fields, imagining we are preparing some data for a sentiment analysis model.
End of explanation
"""
from allennlp.data import Instance
instance1 = Instance({"review": review, "label": review_sentiment})
print("Fields in instance: ", instance1.fields)
"""
Explanation: Once we've made our Fields, we need to pair them together to form an Instance.
End of explanation
"""
from allennlp.data import Dataset
# Create another
review2 = TextField(list(map(Token, ["This", "movie", "was", "quite", "slow", "but", "good", "."])), token_indexers={"tokens": SingleIdTokenIndexer()})
review_sentiment2 = LabelField("positive", label_namespace="tags")
instance2 = Instance({"review": review2, "label": review_sentiment2})
review_dataset = Dataset([instance1, instance2])
"""
Explanation: ... and once we've made our Instance, we can group several of these into a Dataset.
End of explanation
"""
from allennlp.data import Vocabulary
# This will automatically create a vocab from our dataset.
# It will have "namespaces" which correspond to two things:
# 1. Namespaces passed to fields (e.g. the "tags" namespace we passed to our LabelField)
# 2. The keys of the 'Token Indexer' dictionary in 'TextFields'.
# passed to Fields (so it will have a 'tags' namespace).
vocab = Vocabulary.from_dataset(review_dataset)
print("This is the id -> word mapping for the 'tokens' namespace: ")
print(vocab.get_index_to_token_vocabulary("tokens"), "\n")
print("This is the id -> word mapping for the 'tags' namespace: ")
print(vocab.get_index_to_token_vocabulary("tags"), "\n")
print("Vocab Token to Index dictionary: ", vocab._token_to_index, "\n")
# Note that the "tags" namespace doesn't contain padding or unknown tokens.
# Next, we index our dataset using our newly generated vocabulary.
# This modifies the current object. You must perform this step before
# trying to generate arrays.
review_dataset.index_instances(vocab)
# Finally, we return the dataset as arrays, padded using padding lengths
# extracted from the dataset itself, which will be the max sentence length
# from our two instances.
padding_lengths = review_dataset.get_padding_lengths()
print("Lengths used for padding: ", padding_lengths, "\n")
tensor_dict = review_dataset.as_tensor_dict(padding_lengths)
print(tensor_dict)
"""
Explanation: In order to get our tiny sentiment analysis dataset ready for use in a model, we need to be able to do a few things:
- Create a vocabulary from the Dataset (using Vocabulary.from_dataset)
- Index the words and labels in the Fields to use the integer indices specified by the Vocabulary
- Pad the instances to the same length
- Convert them into tensors.
The Dataset, Instance and Fields have some similar parts of their API.
End of explanation
"""
from allennlp.data.token_indexers import TokenCharactersIndexer
word_and_character_text_field = TextField(list(map(Token, ["Here", "are", "some", "longer", "words", "."])),
token_indexers={"tokens": SingleIdTokenIndexer(), "chars": TokenCharactersIndexer()})
mini_dataset = Dataset([Instance({"sentence": word_and_character_text_field})])
# Fit a new vocabulary to this Field and index it:
word_and_char_vocab = Vocabulary.from_dataset(mini_dataset)
mini_dataset.index_instances(word_and_char_vocab)
print("This is the id -> word mapping for the 'tokens' namespace: ")
print(vocab.get_index_to_token_vocabulary("tokens"), "\n")
print("This is the id -> word mapping for the 'chars' namespace: ")
print(vocab.get_index_to_token_vocabulary("chars"), "\n")
# Now, the padding lengths method will find the max sentence length
# _and_ max word length in the batch and pad all sentences to the max
# sentence length and all words to the max word length.
padding_lengths = mini_dataset.get_padding_lengths()
print("Lengths used for padding (Note that we now have a new "
"padding key num_token_characters from the TokenCharactersIndexer): ")
print(padding_lengths, "\n")
tensor_dict = mini_dataset.as_tensor_dict(padding_lengths)
print(tensor_dict)
"""
Explanation: Here, we've seen how to transform a dataset of 2 instances into arrays for feeding into an allennlp Model. One nice thing about the Dataset API is that we don't require the concept of a Batch - it's just a small dataset! If you are iterating over a large number of Instances, such as during training, you may want to look into allennlp.data.Iterators, which specify several different ways of iterating over a Dataset in batches, such as fixed batch sizes, bucketing and stochastic sorting.
There's been one thing we've left out of this tutorial so far - explaining the role of the TokenIndexer in TextField. We decided to introduce a new step into the typical tokenisation -> indexing -> embedding pipeline, because for more complicated encodings of words, such as those including character embeddings, this pipeline becomes difficult. Our pipeline contains the following steps: tokenisation -> TokenIndexers -> TokenEmbedders -> TextFieldEmbedders.
The token indexer we used above is the most basic one - it assigns a single ID to each word in the TextField. This is classically what you might think of when indexing words.
However, let's take a look at using a TokenCharactersIndexer as well - this takes the words in a TextField and generates indices for the characters in the words.
End of explanation
"""
|
tata-antares/jet_tagging_LHCb | jet-tagging-stacking.ipynb | apache-2.0 | treename = 'tag'
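# Assumed setup for this notebook: the original import cell is not part of this extract, so the
# lines below are inferred from how the names are used later and may differ from the author's cell.
%pylab inline
import numpy, pandas, root_numpy
from sklearn.metrics import roc_auc_score
from rep.data import LabeledDataStorage
from rep.metaml import FoldingClassifier
PROFILE = None  # parallel profile passed to FoldingClassifier; None runs locally
# NOTE: a helper generate_result(...) is used near the end and is assumed to be defined elsewhere.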
data_b = pandas.DataFrame(root_numpy.root2array('datasets/type=5.root', treename=treename)).dropna()
data_b = data_b[::40]
data_c = pandas.DataFrame(root_numpy.root2array('datasets/type=4.root', treename=treename)).dropna()
data_light = pandas.DataFrame(root_numpy.root2array('datasets/type=0.root', treename=treename)).dropna()
data = {'b': data_b, 'c': data_c, 'light': data_light}
jet_features = [column for column in data_b.columns if "Jet" in column]
sv_features = [column for column in data_b.columns if "SV" in column]
print "Jet features", ", ".join(jet_features)
print "SV features", ", ".join(sv_features)
"""
Explanation: Read data
End of explanation
"""
for d in data.values():
d['log_SVFDChi2'] = numpy.log(d['SVFDChi2'].values)
d['log_SVSumIPChi2'] = numpy.log(d['SVSumIPChi2'].values)
d['SVM_diff'] = numpy.log(d['SVMC'] ** 2 - d['SVM']**2)
d['SVM_rel'] = numpy.tanh(d['SVM'] / d['SVMC'])
d['SVM_rel2'] = (d['SVM'] / d['SVMC'])**2
d['SVR_rel'] = d['SVDR'] / (d['SVR'] + 1e-5)
d['R_FD_rel'] = numpy.tanh(d['SVR'] / d['SVFDChi2'])
d['jetP'] = numpy.sqrt(d['JetPx'] ** 2 + d['JetPy'] ** 2 + d['JetPz'] ** 2)
d['jetPt'] = numpy.sqrt(d['JetPx'] ** 2 + d['JetPy'] ** 2)
d['jetM'] = numpy.sqrt(d['JetE'] ** 2 - d['jetP'] ** 2 )
d['SV_jet_M_rel'] = d['SVM'] / d['jetM']
d['SV_jet_MC_rel'] = d['SVMC'] / d['jetM']
# full_data['P_Sin'] = 0.5 * d['SVMC'].values - (d['SVM'].values)**2 / (2. * d['SVMC'].values)
# full_data['Psv'] = d['SVPT'].values * d['P_Sin'].values
# full_data['Psv2'] = d['P_Sin'].values / d['SVPT'].values
# full_data['Mt'] = d['SVMC'].values - d['P_Sin'].values
# full_data['QtoN'] = 1. * d['SVQ'].values / d['SVN'].values
data_b = data_b.drop(['JetParton', 'JetFlavor', 'JetPx', 'JetPy'], axis=1)
data_c = data_c.drop(['JetParton', 'JetFlavor', 'JetPx', 'JetPy'], axis=1)
data_light = data_light.drop(['JetParton', 'JetFlavor', 'JetPx', 'JetPy'], axis=1)
jet_features = [column for column in data_b.columns if "Jet" in column]
additional_features = ['log_SVFDChi2', 'log_SVSumIPChi2',
'SVM_diff', 'SVM_rel', 'SVR_rel', 'SVM_rel2', 'SVR_rel', 'R_FD_rel',
'jetP', 'jetPt', 'jetM', 'SV_jet_M_rel', 'SV_jet_MC_rel']
"""
Explanation: Add features
End of explanation
"""
figsize(18, 60)
for i, feature in enumerate(data_b.columns):
subplot(len(data_b.columns) / 3, 3, i)
hist(data_b[feature].values, label='b', alpha=0.2, bins=60, normed=True)
hist(data_c[feature].values, label='c', alpha=0.2, bins=60, normed=True)
# hist(data_light[feature].values, label='light', alpha=0.2, bins=60, normed=True)
xlabel(feature); legend(loc='best');
title(roc_auc_score([0] * len(data_b) + [1]*len(data_c),
numpy.hstack([data_b[feature].values, data_c[feature].values])))
len(data_b), len(data_c), len(data_light)
jet_features = jet_features[2:]
"""
Explanation: Feature pdfs
End of explanation
"""
data_b_c_lds = LabeledDataStorage(pandas.concat([data_b, data_c]), [1] * len(data_b) + [0] * len(data_c))
data_c_light_lds = LabeledDataStorage(pandas.concat([data_c, data_light]), [1] * len(data_c) + [0] * len(data_light))
data_b_light_lds = LabeledDataStorage(pandas.concat([data_b, data_light]), [1] * len(data_b) + [0] * len(data_light))
def one_vs_one_training(base_estimators, data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data,
prefix='bdt', folding=True, features=None):
if folding:
tt_folding_b_c = FoldingClassifier(base_estimators[0], n_folds=2, random_state=11, parallel_profile=PROFILE,
features=features)
tt_folding_c_light = FoldingClassifier(base_estimators[1], n_folds=2, random_state=11, parallel_profile=PROFILE,
features=features)
tt_folding_b_light = FoldingClassifier(base_estimators[2], n_folds=2, random_state=11, parallel_profile=PROFILE,
features=features)
else:
tt_folding_b_c = base_estimators[0]
tt_folding_b_c.features = features
tt_folding_c_light = base_estimators[1]
tt_folding_c_light.features = features
tt_folding_b_light = base_estimators[2]
tt_folding_b_light.features = features
%time tt_folding_b_c.fit_lds(data_b_c_lds)
%time tt_folding_c_light.fit_lds(data_c_light_lds)
%time tt_folding_b_light.fit_lds(data_b_light_lds)
bdt_b_c = numpy.concatenate([tt_folding_b_c.predict_proba(pandas.concat([data_b, data_c])),
tt_folding_b_c.predict_proba(data_light)])[:, 1]
bdt_c_light = numpy.concatenate([tt_folding_c_light.predict_proba(data_b),
tt_folding_c_light.predict_proba(pandas.concat([data_c, data_light]))])[:, 1]
p_b_light = tt_folding_b_light.predict_proba(pandas.concat([data_b, data_light]))[:, 1]
bdt_b_light = numpy.concatenate([p_b_light[:len(data_b)], tt_folding_b_light.predict_proba(data_c)[:, 1],
p_b_light[len(data_b):]])
full_data[prefix + '_b_c'] = bdt_b_c
full_data[prefix + '_b_light'] = bdt_b_light
full_data[prefix + '_c_light'] = bdt_c_light
"""
Explanation: One versus One
Prepare datasets:
b vs c
b vs light
c vs light
End of explanation
"""
full_data = pandas.concat([data_b, data_c, data_light])
full_data['label'] = [0] * len(data_b) + [1] * len(data_c) + [2] * len(data_light)
from hep_ml.nnet import MLPClassifier
from rep.estimators import SklearnClassifier
one_vs_one_training([SklearnClassifier(MLPClassifier(layers=(30, 10), epochs=700, random_state=11))]*3,
data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data, 'mlp', folding=True,
features=sv_features + additional_features + jet_features)
from sklearn.linear_model import LogisticRegression
one_vs_one_training([LogisticRegression()]*3,
data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data,
'logistic', folding=True, features=sv_features + additional_features + jet_features)
# from sklearn.svm import SVC
# from sklearn.pipeline import make_pipeline
# from sklearn.preprocessing import StandardScaler
# svm_feat = SklearnClassifier(make_pipeline(StandardScaler(), SVC(probability=True)), features=sv_features)
# %time svm_feat.fit(data_b_c_lds.data, data_b_c_lds.target)
# from sklearn.neighbors import KNeighborsClassifier
# one_vs_one_training([KNeighborsClassifier(metric='canberra')]*3,
# data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data,
# 'knn', folding=True, features=sv_features)
# from rep.estimators import TheanetsClassifier
# theanets_base = TheanetsClassifier(layers=(20, 10), trainers=[{'algo': 'adadelta', 'learining_rate': 0.1}, ])
# nn = FoldingClassifier(theanets_base, features=sv_features, random_state=11, parallel_profile='ssh-py2')
# nn.fit(full_data, full_data.label)
# multi_probs = nn.predict_proba(full_data)
# full_data['th_0'] = multi_probs[:, 0] / multi_probs[:, 1]
# full_data['th_1'] = multi_probs[:, 0] / multi_probs[:, 2]
# full_data['th_2'] = multi_probs[:, 1] / multi_probs[:, 2]
mlp_features = ['mlp_b_c', 'mlp_b_light', 'mlp_c_light']
# knn_features = ['knn_b_c', 'knn_b_light', 'knn_c_light']
# th_features = ['th_0', 'th_1', 'th_2']
logistic_features = ['logistic_b_c', 'logistic_b_light', 'logistic_c_light']
"""
Explanation: Prepare stacking variables
End of explanation
"""
data_multi_lds = LabeledDataStorage(full_data, 'label')
variables_final = set(sv_features + additional_features + jet_features + mlp_features)
# variables_final = list(variables_final - {'SVN', 'SVQ', 'log_SVFDChi2', 'log_SVSumIPChi2', 'SVM_rel2', 'JetE', 'JetNDis'})
from rep.estimators import XGBoostClassifier
xgb_base = XGBoostClassifier(n_estimators=3000, colsample=0.7, eta=0.005, nthreads=8,
subsample=0.7, max_depth=6)
multi_folding_rbf = FoldingClassifier(xgb_base, n_folds=2, random_state=11,
features=variables_final)
%time multi_folding_rbf.fit_lds(data_multi_lds)
multi_probs = multi_folding_rbf.predict_proba(full_data)
'log loss', -numpy.log(multi_probs[numpy.arange(len(multi_probs)), full_data['label']]).sum() / len(full_data)
multi_folding_rbf.get_feature_importances()
labels = full_data['label'].values.astype(int)
multiclass_result = generate_result(1 - roc_auc_score(labels > 0, multi_probs[:, 0] / multi_probs[:, 1],
sample_weight=(labels != 2) * 1),
1 - roc_auc_score(labels > 1, multi_probs[:, 0] / multi_probs[:, 2],
sample_weight=(labels != 1) * 1),
1 - roc_auc_score(labels > 1, multi_probs[:, 1] / multi_probs[:, 2],
sample_weight=(labels != 0) * 1),
label='multiclass')
result = pandas.concat([multiclass_result])
result.index = result['name']
result = result.drop('name', axis=1)
result
"""
Explanation: Multiclassification
End of explanation
"""
|
googlecodelabs/odml-pathways | object-detection/codelab2/python/Train_a_salad_detector_with_TFLite_Model_Maker.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
!pip install -q tflite-model-maker
!pip install -q pycocotools
!pip install -q tflite-support
"""
Explanation: Train a salad detector with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/googlecodelabs/odml-pathways/blob/main/object-detection/codelab2/python/Train_a_salad_detector_with_TFLite_Model_Maker.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/googlecodelabs/odml-pathways/blob/main/object-detection/codelab2/python/Train_a_salad_detector_with_TFLite_Model_Maker.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://raw.githubusercontent.com/googlecodelabs/odml-pathways/main/object-detection/codelab2/python/Train_a_salad_detector_with_TFLite_Model_Maker.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker library to train a custom object detection model capable of detecting salads within images on a mobile device.
The Model Maker library uses transfer learning to simplify the process of training a TensorFlow Lite model using a custom dataset. Retraining a TensorFlow Lite model with your own custom dataset reduces the amount of training data required and will shorten the training time.
You'll use the publicly available Salads dataset, which was created from the Open Images Dataset V4.
Each image in the dataset contains objects labeled as one of the following classes:
* Baked Good
* Cheese
* Salad
* Seafood
* Tomato
The dataset contains the bounding-boxes specifying where each object locates, together with the object's label.
Here is an example image from the dataset:
<br/>
<img src="https://cloud.google.com/vision/automl/object-detection/docs/images/quickstart-preparing_a_dataset.png" width="400" hspace="0">
Prerequisites
Install the required packages
Start by installing the required packages, including the Model Maker package from the GitHub repo and the pycocotools library you'll use for evaluation.
End of explanation
"""
import numpy as np
import os
from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)
"""
Explanation: Import the required packages.
End of explanation
"""
spec = model_spec.get('efficientdet_lite2')
"""
Explanation: Prepare the dataset
Here you'll use the same dataset as the AutoML quickstart.
The Salads dataset is available at:
gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv.
It contains 175 images for training, 25 images for validation, and 25 images for testing. The dataset has five classes: Salad, Seafood, Tomato, Baked goods, Cheese.
<br/>
The dataset is provided in CSV format:
TRAINING,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Salad,0.0,0.0954,,,0.977,0.957,,
VALIDATION,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Seafood,0.0154,0.1538,,,1.0,0.802,,
TEST,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Tomato,0.0,0.655,,,0.231,0.839,,
Each row corresponds to an object localized inside a larger image, with each object specifically designated as test, train, or validation data. You'll learn more about what that means in a later stage in this notebook.
The three lines included here indicate three distinct objects located inside the same image available at gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg.
Each row has a different label: Salad, Seafood, Tomato, etc.
Bounding boxes are specified for each image using the top left and bottom right vertices.
Here is a visualization of these three lines:
<br>
<img src="https://cloud.google.com/vision/automl/object-detection/docs/images/quickstart-preparing_a_dataset.png" width="400" hspace="100">
If you want to know more about how to prepare your own CSV file and the minimum requirements for creating a valid dataset, see the Preparing your training data guide for more details.
If you are new to Google Cloud, you may wonder what the gs:// URL means. They are URLs of files stored on Google Cloud Storage (GCS). If you make your files on GCS public or authenticate your client, Model Maker can read those files similarly to your local files.
However, you don't need to keep your images on Google Cloud to use Model Maker. You can use a local path in your CSV file and Model Maker will just work.
Train your salad detection model
There are six steps to training an object detection model:
Step 1. Choose an object detection model architecture.
This tutorial uses the EfficientDet-Lite2 model. EfficientDet-Lite[0-4] are a family of mobile/IoT-friendly object detection models derived from the EfficientDet architecture.
Here is the performance of each EfficientDet-Lite model compared to the others.
| Model architecture | Size(MB)* | Latency(ms)** | Average Precision*** |
|--------------------|-----------|---------------|----------------------|
| EfficientDet-Lite0 | 4.4 | 37 | 25.69% |
| EfficientDet-Lite1 | 5.8 | 49 | 30.55% |
| EfficientDet-Lite2 | 7.2 | 69 | 33.97% |
| EfficientDet-Lite3 | 11.4 | 116 | 37.70% |
| EfficientDet-Lite4 | 19.9 | 260 | 41.96% |
<i> * Size of the integer quantized models. <br/>
** Latency measured on Pixel 4 using 4 threads on CPU. <br/>
*** Average Precision is the mAP (mean Average Precision) on the COCO 2017 validation dataset.
</i>
End of explanation
"""
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')
"""
Explanation: Step 2. Load the dataset.
Model Maker will take input data in the CSV format. Use the ObjectDetectorDataloader.from_csv method to load the dataset and split them into the training, validation and test images.
Training images: These images are used to train the object detection model to recognize salad ingredients.
Validation images: These are images that the model didn't see during the training process. You'll use them to decide when you should stop the training, to avoid overfitting.
Test images: These images are used to evaluate the final model performance.
You can load the CSV file directly from Google Cloud Storage, but you don't need to keep your images on Google Cloud to use Model Maker. You can specify a local CSV file on your computer, and Model Maker will work just fine.
End of explanation
"""
model = object_detector.create(train_data, model_spec=spec, batch_size=8, train_whole_model=True, validation_data=validation_data)
"""
Explanation: Step 3. Train the TensorFlow model with the training data.
The EfficientDet-Lite2 model uses epochs = 50 by default, which means it will go through the training dataset 50 times. You can look at the validation accuracy during training and stop early to avoid overfitting.
Set batch_size = 8 here so you will see that it takes 21 steps to go through the 175 images in the training dataset.
Set train_whole_model=True to fine-tune the whole model instead of just training the head layer to improve accuracy. The trade-off is that it may take longer to train the model.
End of explanation
"""
model.evaluate(test_data)
"""
Explanation: Step 4. Evaluate the model with the test data.
After training the object detection model using the images in the training dataset, use the remaining 25 images in the test dataset to evaluate how the model performs against new data it has never seen before.
As the default batch size is 64, it will take 1 step to go through the 25 images in the test dataset.
End of explanation
"""
model.export(export_dir='.')
"""
Explanation: Step 5. Export as a TensorFlow Lite model.
Export the trained object detection model to the TensorFlow Lite format by specifying which folder you want to export the quantized model to. The default post-training quantization technique is full integer quantization.
End of explanation
"""
model.evaluate_tflite('model.tflite', test_data)
"""
Explanation: Step 6. Evaluate the TensorFlow Lite model.
Several factors can affect the model accuracy when exporting to TFLite:
* Quantization helps shrinking the model size by 4 times at the expense of some accuracy drop.
* The original TensorFlow model uses per-class non-max supression (NMS) for post-processing, while the TFLite model uses global NMS that's much faster but less accurate.
Keras outputs maximum 100 detections while tflite outputs maximum 25 detections.
Therefore you'll have to evaluate the exported TFLite model and compare its accuracy with the original TensorFlow model.
End of explanation
"""
#@title Load the trained TFLite model and define some visualization functions
#@markdown This code comes from the TFLite Object Detection [Raspberry Pi sample](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/raspberry_pi).
import platform
import json
import cv2
from typing import List, NamedTuple
from tflite_support import metadata
Interpreter = tf.lite.Interpreter
load_delegate = tf.lite.experimental.load_delegate
# pylint: enable=g-import-not-at-top
class ObjectDetectorOptions(NamedTuple):
"""A config to initialize an object detector."""
enable_edgetpu: bool = False
"""Enable the model to run on EdgeTPU."""
label_allow_list: List[str] = None
"""The optional allow list of labels."""
label_deny_list: List[str] = None
"""The optional deny list of labels."""
max_results: int = -1
"""The maximum number of top-scored detection results to return."""
num_threads: int = 1
"""The number of CPU threads to be used."""
score_threshold: float = 0.0
"""The score threshold of detection results to return."""
class Rect(NamedTuple):
"""A rectangle in 2D space."""
left: float
top: float
right: float
bottom: float
class Category(NamedTuple):
"""A result of a classification task."""
label: str
score: float
index: int
class Detection(NamedTuple):
"""A detected object as the result of an ObjectDetector."""
bounding_box: Rect
categories: List[Category]
def edgetpu_lib_name():
"""Returns the library name of EdgeTPU in the current platform."""
return {
'Darwin': 'libedgetpu.1.dylib',
'Linux': 'libedgetpu.so.1',
'Windows': 'edgetpu.dll',
}.get(platform.system(), None)
class ObjectDetector:
"""A wrapper class for a TFLite object detection model."""
_OUTPUT_LOCATION_NAME = 'location'
_OUTPUT_CATEGORY_NAME = 'category'
_OUTPUT_SCORE_NAME = 'score'
_OUTPUT_NUMBER_NAME = 'number of detections'
def __init__(
self,
model_path: str,
options: ObjectDetectorOptions = ObjectDetectorOptions()
) -> None:
"""Initialize a TFLite object detection model.
Args:
model_path: Path to the TFLite model.
options: The config to initialize an object detector. (Optional)
Raises:
ValueError: If the TFLite model is invalid.
OSError: If the current OS isn't supported by EdgeTPU.
"""
# Load metadata from model.
displayer = metadata.MetadataDisplayer.with_model_file(model_path)
# Save model metadata for preprocessing later.
model_metadata = json.loads(displayer.get_metadata_json())
process_units = model_metadata['subgraph_metadata'][0]['input_tensor_metadata'][0]['process_units']
mean = 0.0
std = 1.0
for option in process_units:
if option['options_type'] == 'NormalizationOptions':
mean = option['options']['mean'][0]
std = option['options']['std'][0]
self._mean = mean
self._std = std
# Load label list from metadata.
file_name = displayer.get_packed_associated_file_list()[0]
label_map_file = displayer.get_associated_file_buffer(file_name).decode()
label_list = list(filter(lambda x: len(x) > 0, label_map_file.splitlines()))
self._label_list = label_list
# Initialize TFLite model.
if options.enable_edgetpu:
if edgetpu_lib_name() is None:
raise OSError("The current OS isn't supported by Coral EdgeTPU.")
interpreter = Interpreter(
model_path=model_path,
experimental_delegates=[load_delegate(edgetpu_lib_name())],
num_threads=options.num_threads)
else:
interpreter = Interpreter(
model_path=model_path, num_threads=options.num_threads)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
# From TensorFlow 2.6, the order of the outputs become undefined.
# Therefore we need to sort the tensor indices of TFLite outputs and to know
# exactly the meaning of each output tensor. For example, if
# output indices are [601, 599, 598, 600], tensor names and indices aligned
# are:
# - location: 598
# - category: 599
# - score: 600
# - detection_count: 601
# because of the op's ports of TFLITE_DETECTION_POST_PROCESS
# (https://github.com/tensorflow/tensorflow/blob/a4fe268ea084e7d323133ed7b986e0ae259a2bc7/tensorflow/lite/kernels/detection_postprocess.cc#L47-L50).
sorted_output_indices = sorted(
[output['index'] for output in interpreter.get_output_details()])
self._output_indices = {
self._OUTPUT_LOCATION_NAME: sorted_output_indices[0],
self._OUTPUT_CATEGORY_NAME: sorted_output_indices[1],
self._OUTPUT_SCORE_NAME: sorted_output_indices[2],
self._OUTPUT_NUMBER_NAME: sorted_output_indices[3],
}
self._input_size = input_detail['shape'][2], input_detail['shape'][1]
self._is_quantized_input = input_detail['dtype'] == np.uint8
self._interpreter = interpreter
self._options = options
def detect(self, input_image: np.ndarray) -> List[Detection]:
"""Run detection on an input image.
Args:
input_image: A [height, width, 3] RGB image. Note that height and width
can be anything since the image will be immediately resized according
to the needs of the model within this function.
Returns:
      A list of Detection objects.
"""
image_height, image_width, _ = input_image.shape
input_tensor = self._preprocess(input_image)
self._set_input_tensor(input_tensor)
self._interpreter.invoke()
# Get all output details
boxes = self._get_output_tensor(self._OUTPUT_LOCATION_NAME)
classes = self._get_output_tensor(self._OUTPUT_CATEGORY_NAME)
scores = self._get_output_tensor(self._OUTPUT_SCORE_NAME)
count = int(self._get_output_tensor(self._OUTPUT_NUMBER_NAME))
return self._postprocess(boxes, classes, scores, count, image_width,
image_height)
def _preprocess(self, input_image: np.ndarray) -> np.ndarray:
"""Preprocess the input image as required by the TFLite model."""
# Resize the input
input_tensor = cv2.resize(input_image, self._input_size)
# Normalize the input if it's a float model (aka. not quantized)
if not self._is_quantized_input:
input_tensor = (np.float32(input_tensor) - self._mean) / self._std
# Add batch dimension
input_tensor = np.expand_dims(input_tensor, axis=0)
return input_tensor
def _set_input_tensor(self, image):
"""Sets the input tensor."""
tensor_index = self._interpreter.get_input_details()[0]['index']
input_tensor = self._interpreter.tensor(tensor_index)()[0]
input_tensor[:, :] = image
def _get_output_tensor(self, name):
"""Returns the output tensor at the given index."""
output_index = self._output_indices[name]
tensor = np.squeeze(self._interpreter.get_tensor(output_index))
return tensor
def _postprocess(self, boxes: np.ndarray, classes: np.ndarray,
scores: np.ndarray, count: int, image_width: int,
image_height: int) -> List[Detection]:
"""Post-process the output of TFLite model into a list of Detection objects.
Args:
boxes: Bounding boxes of detected objects from the TFLite model.
classes: Class index of the detected objects from the TFLite model.
scores: Confidence scores of the detected objects from the TFLite model.
count: Number of detected objects from the TFLite model.
image_width: Width of the input image.
image_height: Height of the input image.
Returns:
A list of Detection objects detected by the TFLite model.
"""
results = []
# Parse the model output into a list of Detection entities.
for i in range(count):
if scores[i] >= self._options.score_threshold:
y_min, x_min, y_max, x_max = boxes[i]
bounding_box = Rect(
top=int(y_min * image_height),
left=int(x_min * image_width),
bottom=int(y_max * image_height),
right=int(x_max * image_width))
class_id = int(classes[i])
category = Category(
score=scores[i],
label=self._label_list[class_id], # 0 is reserved for background
index=class_id)
result = Detection(bounding_box=bounding_box, categories=[category])
results.append(result)
    # Sort detection results by score, descending
sorted_results = sorted(
results,
key=lambda detection: detection.categories[0].score,
reverse=True)
# Filter out detections in deny list
filtered_results = sorted_results
if self._options.label_deny_list is not None:
filtered_results = list(
filter(
lambda detection: detection.categories[0].label not in self.
_options.label_deny_list, filtered_results))
# Keep only detections in allow list
if self._options.label_allow_list is not None:
filtered_results = list(
filter(
lambda detection: detection.categories[0].label in self._options.
label_allow_list, filtered_results))
# Only return maximum of max_results detection.
if self._options.max_results > 0:
result_count = min(len(filtered_results), self._options.max_results)
filtered_results = filtered_results[:result_count]
return filtered_results
_MARGIN = 10 # pixels
_ROW_SIZE = 10 # pixels
_FONT_SIZE = 1
_FONT_THICKNESS = 1
_TEXT_COLOR = (0, 0, 255) # red
def visualize(
image: np.ndarray,
detections: List[Detection],
) -> np.ndarray:
"""Draws bounding boxes on the input image and return it.
Args:
image: The input RGB image.
detections: The list of all "Detection" entities to be visualize.
Returns:
Image with bounding boxes.
"""
for detection in detections:
# Draw bounding_box
start_point = detection.bounding_box.left, detection.bounding_box.top
end_point = detection.bounding_box.right, detection.bounding_box.bottom
cv2.rectangle(image, start_point, end_point, _TEXT_COLOR, 3)
# Draw label and score
category = detection.categories[0]
class_name = category.label
probability = round(category.score, 2)
result_text = class_name + ' (' + str(probability) + ')'
text_location = (_MARGIN + detection.bounding_box.left,
_MARGIN + _ROW_SIZE + detection.bounding_box.top)
cv2.putText(image, result_text, text_location, cv2.FONT_HERSHEY_PLAIN,
_FONT_SIZE, _TEXT_COLOR, _FONT_THICKNESS)
return image
#@title Run object detection and show the detection results
INPUT_IMAGE_URL = "https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg" #@param {type:"string"}
DETECTION_THRESHOLD = 0.3 #@param {type:"number"}
TEMP_FILE = '/tmp/image.png'
!wget -q -O $TEMP_FILE $INPUT_IMAGE_URL
from PIL import Image

im = Image.open(TEMP_FILE)
im.thumbnail((512, 512), Image.ANTIALIAS)
image_np = np.asarray(im)
# Load the TFLite model
options = ObjectDetectorOptions(
num_threads=4,
score_threshold=DETECTION_THRESHOLD,
)
detector = ObjectDetector(model_path='model.tflite', options=options)
# Run object detection estimation using the model.
detections = detector.detect(image_np)
# Draw keypoints and edges on input image
image_np = visualize(image_np, detections)
# Show the detection result
Image.fromarray(image_np)
"""
Explanation: You can download the TensorFlow Lite model file using the left sidebar of Colab. Right-click the model.tflite file and choose Download to download it to your local computer.
In the next step of the codelab, you'll use the ObjectDetector API of the TensorFlow Lite Task Library to integrate the model into the Android app.
(Optional) Test the TFLite model on your image
You can test the trained TFLite model using images from the internet.
* Replace the INPUT_IMAGE_URL below with your desired input image.
* Adjust the DETECTION_THRESHOLD to change the sensitivity of the model. A lower threshold means the model will pick up more objects, but there will also be more false detections. Meanwhile, a higher threshold means the model will only pick up objects that it has confidently detected.
Although it requires some boilerplate code to run the model in Python at this moment, integrating the model into a mobile app only requires a few lines of code.
End of explanation
"""
|
psi4/DatenQM | docs/qcfractal/source/quickstart.ipynb | bsd-3-clause | from qcfractal import FractalSnowflakeHandler
import qcfractal.interface as ptl
"""
Explanation: Example
This tutorial will go over general QCFractal usage to give a feel for the ecosystem.
In this tutorial, we employ Snowflake, a simple QCFractal stack which runs on a local machine
for demonstration and exploration purposes.
Installation
To begin this quickstart tutorial, first install the QCArchive Snowflake environment from conda:
conda env create qcarchive/qcfractal-snowflake -n snowflake
conda activate snowflake
If you have a pre-existing environment with qcfractal, ensure that rdkit and geometric are installed from the conda-forge channel and psi4 and dftd3 from the psi4 channel.
Importing QCFractal
First let us import two items from the ecosystem:
FractalSnowflakeHandler - This is a FractalServer that is temporary and is used for trying out new things.
qcfractal.interface is the QCPortal module, but if using QCFractal it is best to import it locally.
Typically we alias qcportal as ptl. We will do the same for qcfractal.interface so that the code can be used anywhere.
End of explanation
"""
server = FractalSnowflakeHandler()
server
"""
Explanation: We can now build a temporary server which acts just like a normal server, but we have a bit more direct control of it.
Warning! All data is lost when this notebook shuts down! This is for demonstration purposes only!
For information about how to setup a permanent QCFractal server, see the Setup Quickstart Guide.
End of explanation
"""
client = server.client()
client
"""
Explanation: We can then build a typical FractalClient
to automatically connect to this server using the client() helper command.
Note that the server names and addresses are identical in both the server and client.
End of explanation
"""
mol = ptl.Molecule.from_data("""
O 0 0 0
H 0 0 2
H 0 2 0
units bohr
""")
mol
"""
Explanation: Adding and Querying data
A server starts with no data, so let's add some! We can do this by adding a water molecule at a poor geometry from XYZ coordinates.
Note that all internal QCFractal values are stored and used in atomic units;
whereas, the standard Molecule.from_data() assumes an input of Angstroms.
We can switch this back to Bohr by adding a units command in the text string.
End of explanation
"""
print(mol.measure([0, 1]))
print(mol.measure([1, 0, 2]))
"""
Explanation: We can then measure various aspects of this molecule to determine its shape. Note that the measure command will provide a distance, angle, or dihedral depending on whether 2, 3, or 4 indices are passed in.
This molecule is quite far from optimal, so let's run a geometry optimization!
End of explanation
"""
spec = {
"keywords": None,
"qc_spec": {
"driver": "gradient",
"method": "b3lyp",
"basis": "6-31g",
"program": "psi4"
},
}
# Ask the server to compute a new computation
r = client.add_procedure("optimization", "geometric", spec, [mol])
print(r)
print(r.ids)
"""
Explanation: Evaluating a Geometry Optimization
We originally installed psi4 and geometric, so we can use these programs to perform a geometry optimization. In QCFractal, we call a geometry optimization a procedure, where procedure is a generic term for a higher level operation that will run multiple individual quantum chemistry energy, gradient, or Hessian evaluations. Other procedure examples are finite-difference computations, n-body computations, and torsiondrives.
We provide a JSON-like input to the client.add_procedure()
command to specify the method, basis, and program to be used.
The qc_spec field is used in all procedures to determine the underlying quantum chemistry method behind the individual procedure.
In this way, we can use any program or method that returns an energy or gradient quantity to run our geometry optimization!
(See also add_compute().)
End of explanation
"""
r2 = client.add_procedure("optimization", "geometric", spec, [mol])
print(r)
print(r.ids)
"""
Explanation: We can see that we submitted a single task to be evaluated and the server has not seen this particular procedure before.
The ids field returns the unique id of the procedure. Different procedures will always have a unique id, while identical procedures will always return the same id.
We can submit the same procedure again to see this effect:
End of explanation
"""
proc = client.query_procedures(id=r.ids)[0]
proc
"""
Explanation: Querying Procedures
Once a task is submitted, it will be placed in the compute queue and evaluated. In this particular case the FractalSnowflakeHandler uses your local hardware to evaluate these jobs. We recommend avoiding large tasks!
In general, the server can handle anywhere from laptop-scale resources to many hundreds of thousands of concurrent cores at many physical locations. The amount of resources to connect is up to you and the amount of compute that you require.
Since we did submit a very small job it is likely complete by now. Let us query this procedure from the server using its id like so:
End of explanation
"""
final_mol = proc.get_final_molecule()
print(final_mol.measure([0, 1]))
print(final_mol.measure([1, 0, 2]))
final_mol
"""
Explanation: This OptimizationRecord object has many different fields attached to it so that all quantities involved in the computation can be explored. For this example, let us pull the final molecule (optimized structure) and inspect the physical dimensions.
Note: if the status does not say COMPLETE, these fields will not be available. Try querying the procedure again in a few seconds to see if the task completed in the background.
End of explanation
"""
proc.show_history()
"""
Explanation: This water molecule has bond length and angle dimensions much closer to expected values. We can also plot the optimization history to see how each step in the geometry optimization affected the results. Though the chart is not too impressive for this simple molecule, it is hopefully illuminating and is available for any geometry optimization ever completed.
End of explanation
"""
ds = ptl.collections.ReactionDataset("My IE Dataset", ds_type="ie", client=client, default_program="psi4")
"""
Explanation: Collections
Submitting individual procedures or single quantum chemistry tasks is not typically done as it becomes hard to track individual tasks. To help resolve this, Collections are different ways of organizing standard computations so that many tasks can be referenced in a more human-friendly way. In this particular case, we will be exploring an intermolecular potential dataset.
To begin, we will create a new dataset and add a few intermolecular interactions to it.
End of explanation
"""
water_dimer = ptl.Molecule.from_data("""
O 0.000000 0.000000 0.000000
H 0.758602 0.000000 0.504284
H 0.260455 0.000000 -0.872893
--
O 3.000000 0.500000 0.000000
H 3.758602 0.500000 0.504284
H 3.260455 0.500000 -0.872893
""")
water_dimer.get_fragment(0, 1)
"""
Explanation: We can construct a water dimer that has fragments used in the intermolecular computation with the -- divider. A single water molecule with ghost atoms can be extracted like so:
End of explanation
"""
ds.add_ie_rxn("water dimer", water_dimer)
ds.add_ie_rxn("helium dimer", """
He 0 0 -3
--
He 0 0 3
""")
"""
Explanation: Many molecular entries can be added to this dataset where each is entry is a given intermolecular complex that is given a unique name. In addition, the add_ie_rxn method to can automatically fragment molecules.
End of explanation
"""
ds.save()
"""
Explanation: Once the Collection is created, it can be saved to the server so that it can always be retrieved at a future date.
End of explanation
"""
client.list_collections()
ds = client.get_collection("ReactionDataset", "My IE Dataset")
ds
"""
Explanation: The client can list all Collections currently on the server and retrieve collections to be manipulated:
End of explanation
"""
ds.compute("B3LYP-D3", "def2-SVP")
"""
Explanation: Computing with collections
Computational methods can be applied to all of the reactions in the dataset with just a few simple lines:
End of explanation
"""
ds.compute("PBE-D3", "def2-SVP")
"""
Explanation: By default this collection evaluates the non-counterpoise corrected interaction energy which typically requires three computations per entry (the complex and each monomer). In this case we compute the B3LYP and -D3 additive correction separately, nominally 12 total computations. However the collection is smart enough to understand that each Helium monomer is identical and does not need to be computed twice, reducing the total number of computations to 10 as shown here. We can continue to compute additional methods. Again, this is being evaluated on your computer! Be careful of the compute requirements.
End of explanation
"""
ds.list_values()
"""
Explanation: A list of all methods that have been computed for this dataset can also be shown:
End of explanation
"""
print(f"DataFrame units: {ds.units}")
ds.get_values()
"""
Explanation: The above only shows what has been computed and does not pull this data from the server to your computer. To do so, the get_values command can be used:
End of explanation
"""
ds.visualize(["B3LYP-D3", "PBE-D3"], "def2-SVP", bench="B3LYP/def2-svp", kind="violin")
"""
Explanation: You can also visualize results and more!
End of explanation
"""
|
IST256/learn-python | content/lessons/05-Functions/SmallGroup-Functions.ipynb | mit | #ORIGINAL CODE
import random
choices = ['rock', 'paper', 'scissors']
wins = 0
losses = 0
ties = 0
computer = random.choice(choices)
you = 'rock' #Always rock strategy
if (you == 'rock' and computer == 'scissors'):
outcome = "win"
elif (you == 'scissors' and computer =='rock'):
outcome = "lose"
elif (you == 'paper' and computer =='rock'):
outcome = "win"
elif (you == 'rock' and computer=='paper'):
outcome = "lose"
elif (you == 'scissors' and computer == 'paper'):
outcome = "win"
elif (you == 'paper' and computer == 'scissors'):
outcome = "lose"
else:
outcome = "tie"
print(f"You:'{you}' Computer:'{computer}' Game: {outcome} ")
"""
Explanation: Now You Code In Class : Rock Paper Scissors Experiment
In this Now You Code, we will learn to re-factor a program into a function. This is the most common way to write a function when you are a beginner. Re-factoring is the act of re-writing code without changing its functionality. We commonly re-factor to improve the performance or readability of our code. Through the process we will also demonstrate the DRY (don't repeat yourself) principle of coding.
The way you do this is rather simple. First you write a program to solve the problem, then you re-write that program as a function and finally test the function to make sure it works as expected.
This helps train you to think abstractly about problems, but leverages what you understand currently about programming.
Introducing the Write - Refactor - Test - Rewrite approach
The best way to get good at writing functions, a skill you will need to master to become a respectable programmer, is to use the Write - Refactor - Test - Rewrite approach. The basic idea is as follows:
Write the program
Identify which parts of the program can be placed in a function
Refactor the code into a function. Extract the relevant code into a function in a new, separate cell, independent of the original code.
Test the function so you are confident it works, using the expect... actual approach from the lab.
Re-Write the original program to call the function instead.
The Problem
Which Rock, Paper Scissors strategy is better? Random guessing or always choosing one of rock, paper, or scissors?
Let's write a program which plays the game of Rock, Paper, Scissors 10,000 times and compares strategies.
The Approach
Write the program once (done for you in the cell below)
refactor step 1 into its own function play_game(playera, playerb) which returns the winning player.
test the function to make sure it works. Write tests for all cases
re-write the main program to now call the function.
use the function in the final program.
End of explanation
"""
# PROMPT 4: Write function
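# One possible sketch for PROMPT 4 (hedged: not the official answer). It simply
# condenses the win/lose/tie if/elif chain from the original program above.
def rock_paper_scissors(you, computer):
    beats = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}
    if computer == beats[you]:
        return "win"
    elif you == computer:
        return "tie"
    else:
        return "lose"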
"""
Explanation: Problem Analysis
For a function rock_paper_scissors() which plays the game, what are the inputs and outputs?
Inputs:
PROMPT 1
Outputs:
PROMPT 2
Function def in python: (just def part)
PROMPT 3
End of explanation
"""
# PROMPTS 5-13 test Cases
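# Hedged examples for a few of the test cases, assuming the rock_paper_scissors sketch above:
print("When YOU=rock, COMPUTER=rock, EXPECT=tie, ACTUAL=", rock_paper_scissors('rock', 'rock'))
print("When YOU=rock, COMPUTER=scissors, EXPECT=win, ACTUAL=", rock_paper_scissors('rock', 'scissors'))
print("When YOU=rock, COMPUTER=paper, EXPECT=lose, ACTUAL=", rock_paper_scissors('rock', 'paper'))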
"""
Explanation: Test Cases
Writing a function is not helpful unless we have some assurances that it is correct. We solve this problem with test cases:
YOU COMPUTER OUTCOME
Rock Rock Tie
Rock Scissors Win
Rock Paper Lose
Scissors Rock Lose
Scissors Scissors Tie
Scissors Paper Win
Paper Rock Win
Paper Scissors Lose
Paper Paper Tie
PROMPTS 5 - 13
Write a print() statement for each test case:
When YOU=?, COMPUTER=?, EXPECT=?, ACTUAL=(call the function)
End of explanation
"""
import random
choices = ['rock', 'paper', 'scissors']
computer = random.choice(choices)
you = 'rock'
# TODO: PROMPT 14 call function
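# Hedged: assumes the rock_paper_scissors(you, computer) sketch from earlier in this notebook.
outcome = rock_paper_scissors(you, computer)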
print(f"You:'{you}' Computer:'{computer}' Game: {outcome} ")
"""
Explanation: Re-Write
With the function code tested, and assurances it is correct, we can now re-write the original program, calling the function instead.
End of explanation
"""
from IPython.display import display, HTML
from ipywidgets import interact_manual
import random
@interact_manual(strategy=["rock","paper","scissors","random", "pick2"], times=(10_000, 500_000, 10_000) )
def main(strategy, times):
display(f"Running simulation {times} times using strategy {strategy}.")
# TODO Implement algorithm as code
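    # A possible sketch (hedged: not the official solution). It assumes the
    # rock_paper_scissors(you, computer) sketch above, and interprets the "pick2"
    # strategy as choosing randomly between rock and paper only.
    choices = ['rock', 'paper', 'scissors']
    wins = losses = ties = 0
    for _ in range(times):
        computer = random.choice(choices)
        if strategy == 'random':
            you = random.choice(choices)
        elif strategy == 'pick2':
            you = random.choice(choices[:2])
        else:
            you = strategy
        outcome = rock_paper_scissors(you, computer)
        if outcome == 'win':
            wins += 1
        elif outcome == 'lose':
            losses += 1
        else:
            ties += 1
    display(f"Wins: {wins} Losses: {losses} Ties: {ties}")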
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit_now()
"""
Explanation: Back to the simulation
Now that we have function to play the game, we can build our simulation. Remember the original problem was to compare strategies by playing the game thousands of times.
There are several strategies for the user:
Rock (always choose rock)
Paper (always choose paper)
Scissors (always choose scissors)
Random (choose one at random)
INPUTS:
strategy
number of times to play the game
OUTPUTS:
Number of:
Wins
Losses
Ties
ALGORITHM:
PROMPT 15
Final Code
Let's use ipywidgets to create the user interface. Some notes:
the list of items ["rock","paper","scissors","random"] makes a drop down.
times=(10_000, 500_000, 10_000) produces a slider with min 10,000, max 500,000 and increment of 10,000
End of explanation
"""
|
anandha2017/udacity | nd101 Deep Learning Nanodegree Foundation/DockerImages/projects/03-tv-script-generation/notebooks/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
vocab_to_int = {word:integer for integer,word in enumerate(set(text))}
int_to_vocab = {integer:word for integer,word in enumerate(set(text))}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
dictionary = {
'.' : '||Period||',
',' : '||Comma||',
'"' : '||Quotation_Mark||',
';' : '||Semicolon||',
'!' : '||Exclamation_Mark||',
'?' : '||Question_Mark||',
'(' : '||Left_Parentheses||',
')' : '||Right_Parentheses||',
'--': '||Dash||',
'\n': '||Return||',
}
#print(dictionary)
return dictionary
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')
learningRate = tf.placeholder(dtype=tf.float32, shape=None, name='learning_rate')
return inputs, targets, learningRate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([cell])
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
From Skip-gram word2vec
Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
    embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
n_batch = len(int_text) // (batch_size * seq_length)
int_text_x = np.array(int_text[:batch_size * seq_length * n_batch])
int_text_y = np.roll(int_text_x, -1)
x_batches = np.split(int_text_x.reshape(batch_size, -1), n_batch, 1)
y_batches = np.split(int_text_y.reshape(batch_size, -1), n_batch, 1)
return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 13
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
input_tensor=loaded_graph.get_tensor_by_name("input:0")
initial_state_tensor=loaded_graph.get_tensor_by_name("initial_state:0")
final_state_tensor=loaded_graph.get_tensor_by_name("final_state:0")
probs_tensor=loaded_graph.get_tensor_by_name("probs:0")
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
return int_to_vocab[np.argmax(probabilities)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
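# An optional, hedged alternative: sample the next word from the probability
# distribution instead of always taking the argmax, which usually produces less
# repetitive generated text. This is an extra sketch, not required by the project tests.
def pick_word_sampled(probabilities, int_to_vocab):
    p = probabilities / np.sum(probabilities)  # renormalize to guard against rounding error
    idx = np.random.choice(len(probabilities), p=p)
    return int_to_vocab[int(idx)]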
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
isendel/machine-learning | ml-regression/week3-4/.ipynb_checkpoints/week-4-ridge-regression-assignment-1-checkpoint.ipynb | apache-2.0 | import pandas as pd
import numpy as np
from sklearn import linear_model
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
"""
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
"""
def polynomial_sframe(feature, degree):
output = pd.DataFrame()
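    # Hedged completion (based on the Week 3 assignment this mirrors, and assuming
    # `feature` behaves like a pandas Series): one column per power of the feature.
    output['power_1'] = feature
    for power in range(2, degree + 1):
        output['power_' + str(power)] = output['power_1'].apply(lambda x: x ** power)
    return output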
"""
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
"""
sales = sales.sort(['sqft_living','price'])
"""
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
"""
l2_small_penalty = 1e-5
"""
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
"""
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
"""
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use the .random_split function and make sure you set seed=0.
End of explanation
"""
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
"""
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
"""
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
"""
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
"""
train_valid_shuffled[0:10] # rows 0 to 9
"""
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
"""
print int(round(validation4['price'].mean(), 0))
"""
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
"""
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
"""
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
"""
print int(round(train4['price'].mean(), 0))
"""
Explanation: Extract the remainder of the data after excluding the fourth segment (segment 3) and assign the subset to train4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
"""
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
"""
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
"""
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
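# A possible sketch of the penalty search and plot described below (hedged: not the
# official solution). It reuses the k_fold_cross_validation sketch above and builds
# the degree-15 features inline.
poly15_all = graphlab.SFrame()
for power in range(1, 16):
    poly15_all['power_' + str(power)] = train_valid_shuffled['sqft_living'] ** power
features15_all = poly15_all.column_names()
poly15_all['price'] = train_valid_shuffled['price']
penalties = np.logspace(1, 7, num=13)
cv_errors = [k_fold_cross_validation(10, l2, poly15_all, 'price', features15_all)
             for l2 in penalties]
plt.plot(penalties, cv_errors)
plt.xscale('log')
plt.xlabel('l2_penalty')
plt.ylabel('average validation error (RSS)')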
"""
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
"""
|
freedomtan/tensorflow | tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb | apache-2.0 | !pip install tf-nightly
"""
Explanation: Overview
This CodeLab demonstrates how to build a fused TFLite LSTM model for MNIST recognition using Keras, and how to convert it to TensorFlow Lite.
The CodeLab is very similar to the Keras LSTM CodeLab. However, we're creating fused LSTM ops rather than the unfused version.
Also note: We're not trying to build the model to be a real world application, but only to demonstrate how to use TensorFlow Lite. You can build a much better model using CNNs. For a more canonical LSTM codelab, please see here.
Step 0: Prerequisites
It's recommended to try this feature with the newest TensorFlow nightly pip build.
End of explanation
"""
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(28, 28), name='input'),
tf.keras.layers.LSTM(20, time_major=False, return_sequences=True),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
"""
Explanation: Step 1: Build the MNIST LSTM model.
End of explanation
"""
# Load MNIST dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
# Change this to True if you want to test the flow rapidly.
# Train with a small dataset and only 1 epoch. The model will work poorly
# but this provides a fast way to test if the conversion works end to end.
_FAST_TRAINING = False
_EPOCHS = 5
if _FAST_TRAINING:
_EPOCHS = 1
_TRAINING_DATA_COUNT = 1000
x_train = x_train[:_TRAINING_DATA_COUNT]
y_train = y_train[:_TRAINING_DATA_COUNT]
model.fit(x_train, y_train, epochs=_EPOCHS)
model.evaluate(x_test, y_test, verbose=0)
"""
Explanation: Step 2: Train & Evaluate the model.
We will train the model using MNIST data.
End of explanation
"""
run_model = tf.function(lambda x: model(x))
# This is important, let's fix the input size.
BATCH_SIZE = 1
STEPS = 28
INPUT_SIZE = 28
concrete_func = run_model.get_concrete_function(
tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))
# model directory.
MODEL_DIR = "keras_lstm"
model.save(MODEL_DIR, save_format="tf", signatures=concrete_func)
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
tflite_model = converter.convert()
"""
Explanation: Step 3: Convert the Keras model to TensorFlow Lite model.
End of explanation
"""
# Run the model with TensorFlow to get expected results.
TEST_CASES = 10
# Run the model with TensorFlow Lite
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
for i in range(TEST_CASES):
expected = model.predict(x_test[i:i+1])
interpreter.set_tensor(input_details[0]["index"], x_test[i:i+1, :, :])
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
# Assert if the result of TFLite model is consistent with the TF model.
np.testing.assert_almost_equal(expected, result)
print("Done. The result of TensorFlow matches the result of TensorFlow Lite.")
# Please note: TfLite fused Lstm kernel is stateful, so we need to reset
# the states.
# Clean up internal states.
interpreter.reset_all_variables()
"""
Explanation: Step 4: Check the converted TensorFlow Lite model.
Now load the TensorFlow Lite model and use the TensorFlow Lite python interpreter to verify the results.
End of explanation
"""
|
QuantStack/quantstack-talks | 2018-11-14-PyParis-widgets/notebooks/1.ipywidgets.ipynb | bsd-3-clause | from ipywidgets import IntSlider
slider = IntSlider()
slider
slider.value
slider.value = 20
slider
"""
Explanation: <center><img src="src/ipywidgets.svg" width="50%"></center>
Repository: https://github.com/jupyter-widgets/ipywidgets
Installation:
conda install -c conda-forge ipywidgets
Simple slider for driving an integer value
End of explanation
"""
from ipywidgets import Checkbox
checkbox = Checkbox(description='Check me')
checkbox
checkbox.value
checkbox.value = False
"""
Explanation: Widgets protocol
<center><img src="src/widgets-arch.png" width="50%"></center>
Drive a boolean value
End of explanation
"""
from ipywidgets import IntText, IntSlider, link, HBox
text = IntText()
slider = IntSlider()
link((text, 'value'), (slider, 'value'))
HBox([text, slider])
"""
Explanation: Link two widgets
End of explanation
"""
from ipywidgets import ToggleButton
button = ToggleButton(description='Click me!', button_style='danger')
def update_style(change):
button.button_style = 'info' if change['new'] else 'danger'
button.observe(update_style, 'value')
button
"""
Explanation: Observe changes on the widget model
End of explanation
"""
from ipywidgets import ColorPicker, DatePicker, IntProgress, Play, VBox, link
progress = IntProgress()
play = Play()
link((play, 'value'), (progress, 'value'))
VBox([ColorPicker(value='red'), DatePicker(), progress, play])
"""
Explanation: Variety of widgets in the core library
End of explanation
"""
from ipywidgets import Image
import PIL.Image
import io
import numpy as np
from skimage.filters import sobel
from skimage.color.adapt_rgb import adapt_rgb, each_channel
from skimage import filters
image = Image.from_file("src/marie.png")
image
im_in = PIL.Image.open(io.BytesIO(image.value))
im_array = np.array(im_in)[...,:3]
im_array
im_array_edges = adapt_rgb(each_channel)(sobel)(im_array)
im_array_edges = ((1-im_array_edges) * 255).astype(np.uint8)
im_out = PIL.Image.fromarray(im_array_edges)
f = io.BytesIO()
im_out.save(f, format='png')
image.value = f.getvalue()
"""
Explanation: Media widgets
Image widget
End of explanation
"""
from ipywidgets import Video, Image
from IPython.display import display
import numpy as np
import cv2
import base64
video = Video.from_file('src/Big.Buck.Bunny.mp4')
video
cap = cv2.VideoCapture('src/Big.Buck.Bunny.mp4')
frames = []
while(1):
try:
_, frame = cap.read()
fgmask = cv2.Canny(frame, 100, 100)
mask = fgmask > 100
frame[mask, :] = 0
frames.append(frame)
except Exception:
break
width = int(cap.get(3))
height = int(cap.get(4))
filename = 'src/output.mp4'
fourcc = cv2.VideoWriter_fourcc(*'avc1')
writer = cv2.VideoWriter(filename, fourcc, 25, (width, height))
for frame in frames:
writer.write(frame)
cap.release()
writer.release()
with open(filename, 'rb') as f:
video.value = f.read()
"""
Explanation: Video widget
End of explanation
"""
from ipywidgets import Widget
Widget.close_all()
"""
Explanation: Clean
End of explanation
"""
|
thehackerwithin/berkeley | code_examples/python_mayavi/mayavi_basic.ipynb | bsd-3-clause | # try one example, figure is created by default
mlab.test_molecule()
"""
Explanation: Overview
Mayavi is a high level plotting library built on tvtk.
Mayavi uses mlab for its higher-level plotting functions. The list of plotting functions can be found here. Functions exist for plotting lines, surfaces, 3D contours, points, etc.
Most of the plot types have a built-in example found at mlab.test_*
End of explanation
"""
# clear the figure then load another example
mlab.clf()
mlab.test_flow_anim()
# create a new figure
mlab.figure('mesh_example', bgcolor=(0,0,0,))
mlab.test_surf()
"""
Explanation: Mayavi has some very useful interactive controls that can be accessed from the GUI. This includes the ability to record changes to parameters.
End of explanation
"""
|
geoffbacon/semrep | semrep/evaluate/koehn/koehn.ipynb | mit | %matplotlib inline
import os
import csv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import roc_curve, roc_auc_score, classification_report, confusion_matrix
from sklearn.preprocessing import LabelEncoder
data_path = '../../data'
tmp_path = '../../tmp'
"""
Explanation: Köhn
In this notebook I replicate Koehn (2015): What's in an embedding? Analyzing word embeddings through multilingual evaluation. This paper proposes to i) evaluate an embedding method on more than one language, and ii) evaluate an embedding model by how well its embeddings capture syntactic features. He uses an L2-regularized linear classifier, with an upper baseline that assigns the most frequent class. He finds that most methods perform similarly on this task, but that dependency based embeddings perform better. Dependency based embeddings particularly perform better when you decrease the dimensionality. Overall, the aim is to have an evalation method that tells you something about the structure of the learnt representations. He evaulates a range of different models on their ability to capture a number of different morphosyntactic features in a bunch of languages.
Embedding models tested:
- cbow
- skip-gram
- glove
- dep
- cca
- brown
Features tested:
- pos
- headpos (the pos of the word's head)
- label
- gender
- case
- number
- tense
Languages tested:
- Basque
- English
- French
- German
- Hungarian
- Polish
- Swedish
Word embeddings were trained on automatically PoS-tagged and dependency-parsed data using existing models. This is so the dependency-based embeddings can be trained. The evaluation is on hand-labelled data. English training data is a subset of Wikipedia; English test data comes from PTB. For all other languages, both the training and test data come from a shared task on parsing morphologically rich languages. Koehn trained embeddings with window size 5 and 11 and dimensionality 10, 100, 200.
Dependency-based embeddings perform the best on almost all tasks. They even do well when the dimensionality is reduced to 10, while other methods perform poorly in this case.
I'll need:
- models
- learnt representations
- automatically labeled data
- hand-labeled data
End of explanation
"""
size = 50
fname = 'embeddings/glove.6B.{}d.txt'.format(size)
glove_path = os.path.join(data_path, fname)
glove = pd.read_csv(glove_path, sep=' ', header=None, index_col=0, quoting=csv.QUOTE_NONE)
glove.head()
"""
Explanation: Learnt representations
GloVe
End of explanation
"""
fname = 'UD_English/features.csv'
features_path = os.path.join(data_path, os.path.join('evaluation/dependency', fname))
features = pd.read_csv(features_path).set_index('form')
features.head()
df = pd.merge(glove, features, how='inner', left_index=True, right_index=True)
df.head()
"""
Explanation: Features
End of explanation
"""
def prepare_X_and_y(feature, data):
"""Return X and y ready for predicting feature from embeddings."""
relevant_data = data[data[feature].notnull()]
columns = list(range(1, size+1))
X = relevant_data[columns]
y = relevant_data[feature]
train = relevant_data['set'] == 'train'
test = (relevant_data['set'] == 'test') | (relevant_data['set'] == 'dev')
X_train, X_test = X[train].values, X[test].values
y_train, y_test = y[train].values, y[test].values
return X_train, X_test, y_train, y_test
def predict(model, X_test):
"""Wrapper for getting predictions."""
results = model.predict_proba(X_test)
return np.array([t for f,t in results]).reshape(-1,1)
def conmat(model, X_test, y_test):
"""Wrapper for sklearn's confusion matrix."""
y_pred = model.predict(X_test)
c = confusion_matrix(y_test, y_pred)
sns.heatmap(c, annot=True, fmt='d',
xticklabels=model.classes_,
yticklabels=model.classes_,
cmap="YlGnBu", cbar=False)
plt.ylabel('Ground truth')
plt.xlabel('Prediction')
def draw_roc(model, X_test, y_test):
"""Convenience function to draw ROC curve."""
y_pred = predict(model, X_test)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
roc = roc_auc_score(y_test, y_pred)
label = r'$AUC={}$'.format(str(round(roc, 3)))
plt.plot(fpr, tpr, label=label);
plt.title('ROC')
plt.xlabel('False positive rate');
plt.ylabel('True positive rate');
plt.legend();
def cross_val_auc(model, X, y):
for _ in range(5):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
model = model.fit(X_train, y_train)
draw_roc(model, X_test, y_test)
X_train, X_test, y_train, y_test = prepare_X_and_y('Tense', df)
model = LogisticRegression(penalty='l2', solver='liblinear')
model = model.fit(X_train, y_train)
conmat(model, X_test, y_test)
sns.distplot(model.coef_[0], rug=True, kde=False);
"""
Explanation: Prediction
End of explanation
"""
|
dipanjank/ml | data_analysis/digit_recognition/feature_extractor.ipynb | gpl-3.0 | %pylab inline
pylab.style.use('ggplot')
import numpy as np
import pandas as pd
import cv2
import os
image_dir = os.path.join(os.getcwd(), 'font_images')
if not os.path.isdir(image_dir) or len(os.listdir(image_dir)) == 0:
print('no images found in {}'.format(image_dir))
"""
Explanation: In this notebook, we
load up the saved .png files.
read the image into a numpy array.
partition the image into individual arrays (recall that each image contains the characters '0123456789,'), one array per character.
resize each character image to 32*32 and binarize it.
From each character image we then extract statistical, moment-based and DCT features, so n font images produce a feature matrix with n * 11 rows (one row per character per font).
End of explanation
"""
img_mat = cv2.imread(os.path.join(image_dir, 'arial.png'))
# Convert to grayscale
gs = cv2.cvtColor(img_mat, cv2.COLOR_BGR2GRAY)
gs.shape
pylab.imshow(gs, cmap='gray')
pylab.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom='off', top='off', left='off', right='off', # don't display ticks
labelbottom='off', labeltop='off', labelleft='off', labelright='off' # don't display ticklabels
)
# Partition the columns into 10 equal parts
split_positions = np.linspace(0, gs.shape[1], num=12).astype(np.int)
split_positions = split_positions[1:-1]
# manual tweak by inspection
split_positions[0] += 10
split_positions
parts = np.array_split(gs, split_positions, axis=1)
fig, axes = pylab.subplots(1, len(parts))
for part, ax in zip(parts, axes):
ax.imshow(part, cmap='gray')
ax.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom='off', top='off', left='off', right='off', # don't display ticks
labelbottom='off', labeltop='off', labelleft='off', labelright='off' # don't display ticklabels
)
fig, axes = pylab.subplots(1, len(parts))
binarized = []
for ax, p in zip(axes, parts):
resized = cv2.resize(p, (32, 32))
_, bin_img = cv2.threshold(resized, 127, 255, cv2.THRESH_BINARY)
binarized.append(bin_img)
ax.imshow(bin_img, cmap='gray')
ax.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom='off', top='off', left='off', right='off', # don't display ticks
labelbottom='off', labeltop='off', labelleft='off', labelright='off' # don't display ticklabels
)
"""
Explanation: First, we outline the processing for a single image.
End of explanation
"""
def calc_on_pixel_fraction(part_img):
# Note that on pixel == 0, off pixel == 255
_, counts = np.unique(part_img, return_counts=True)
return counts[0] / counts[1]
on_pixel_fractions = [calc_on_pixel_fraction(p) for p in binarized]
on_pixel_fractions = pd.Series(on_pixel_fractions, index=list('0123456789,'))
on_pixel_fractions.plot(kind='bar', title='On pixel fractions for all chars')
"""
Explanation: Now we're ready to build image features. Let's take one of the images and work out the feature extraction process.
Statistical Features
Fraction of On Pixels
End of explanation
"""
# Again, note that on pixel == 0, off pixel == 255
def calc_f_on_pixel_pos(part_img, f, axis=0):
assert axis in (0, 1)
on_x, on_y = np.where(part_img==0)
on_dim = on_x if axis == 0 else on_y
return f(on_dim)
m_x = [calc_f_on_pixel_pos(p, np.mean, axis=0) for p in binarized]
m_y = [calc_f_on_pixel_pos(p, np.mean, axis=1) for p in binarized]
mean_on_pixel_xy = pd.DataFrame(np.column_stack([m_x, m_y]),
index=list('0123456789,'),
columns=['mean_x', 'mean_y'])
mean_on_pixel_xy.plot(kind='bar', subplots=True)
"""
Explanation: Mean x, y Positions of All On Pixels
End of explanation
"""
v_x = [calc_f_on_pixel_pos(p, np.var, axis=0) for p in binarized]
v_y = [calc_f_on_pixel_pos(p, np.var, axis=1) for p in binarized]
var_on_pixel_xy = pd.DataFrame(np.column_stack([v_x, v_y]),
index=list('0123456789,'),
columns=['var_x', 'var_y'])
var_on_pixel_xy.plot(kind='bar', subplots=True)
"""
Explanation: Variance of x-y Positions of All on Pixels
End of explanation
"""
def calc_on_pixel_x_y_corr(part_img):
coef = np.corrcoef(np.where(part_img == 0))
return coef[1, 0]
x_y_corrs = [calc_on_pixel_x_y_corr(p) for p in binarized]
x_y_corrs = pd.Series(x_y_corrs, index=list('0123456789,'))
x_y_corrs.plot(kind='bar')
"""
Explanation: Correlation of x-y positions of All Pixels
End of explanation
"""
def calc_moments(part_img):
moments = cv2.moments(part_img, binaryImage=True)
return moments
m_list = [calc_moments(p) for p in binarized]
m_df = pd.DataFrame.from_records(m_list)
chars = ('zero', 'one', 'two', 'three', 'four',
'five', 'six', 'seven', 'eight', 'nine', 'comma')
m_df.index = chars
m_df.head()
figure, axes = pylab.subplots(8, 3, figsize=(20, 24))
moment_cols = m_df.columns.values.reshape(8, 3)
for i, row in enumerate(moment_cols):
for j, col in enumerate(row):
m_df.loc[:, col].plot(kind='bar', title=col, ax=axes[i][j])
pylab.tight_layout()
"""
Explanation: Note: I decided to not use this feature after adding the moment based features.
Moment Based Features
Moment calculation in OpenCV is described here:
http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
End of explanation
"""
from scipy.fftpack import dct
def calc_dct2d_zigzagged_coeffs(part_img, n_diags=3):
dct_result = dct(dct(part_img, norm='ortho').T, norm='ortho')
# To make a feature vector out of the DCT results by taking the elements
# of dct_result in a zigzagged fashion.
# We can access these efficiently
# by taking the mirror image and accessing the diagonals.
mirrored = np.fliplr(dct_result)
idx_first = mirrored.shape[0] - 1
idx_last = idx_first - n_diags
zigzagged_coeffs = np.concatenate([np.diag(mirrored, k)
for k in range(idx_first, idx_last, -1)])
return zigzagged_coeffs
diag_var_dct = [calc_dct2d_zigzagged_coeffs(p, n_diags=3) for p in binarized]
dct_df = pd.DataFrame.from_records(diag_var_dct, index=chars)
dct_df.plot(kind='bar', subplots=True, figsize=(10, 20))
"""
Explanation: So, among all the moments, we choose the normalized moments: nu03, nu11 ('en-eu-one-one'), and nu12. All the other features have similar shapes across the character classes.
DCT Based Features
End of explanation
"""
def partition_image(img_file, n_chars, size=32, threshold=127):
"""
* Read the RGB image `img_file`
* Convert to grayscale
* Split into one subarray per character
* Resize to `size * size`
* Binarize with threshold `threshold`
Return a list of subarrays for each character.
"""
assert os.path.isfile(img_file)
img_mat = cv2.imread(img_file)
gs = cv2.cvtColor(img_mat, cv2.COLOR_BGR2GRAY)
split_positions = np.linspace(0, gs.shape[1], num=n_chars+1).astype(np.int)
split_positions = split_positions[1:-1]
# manual tweak by inspection
split_positions[0] += 10
parts = np.array_split(gs, split_positions, axis=1)
resized_images = []
for p in parts:
p_new = cv2.resize(p, (size, size))
_, bin_img = cv2.threshold(p_new, threshold, 255, cv2.THRESH_BINARY)
resized_images.append(bin_img)
return resized_images
from functools import partial
def calc_on_pixel_fraction(part_img):
_, counts = np.unique(part_img, return_counts=True)
return counts[0] /counts[1]
def calc_f_on_pixel_pos(part_img, f, axis=0):
assert axis in (0, 1)
on_x, on_y = np.where(part_img==0)
on_dim = on_x if axis == 0 else on_y
return f(on_dim)
def calc_on_pixel_x_y_corr(part_img):
coef = np.corrcoef(np.where(part_img == 0))
return coef[0, 1]
def calc_moments(part_img, moments_to_keep={'nu03', 'nu11', 'nu12'}):
moments = cv2.moments(part_img, binaryImage=True)
return {k: v for k, v in moments.items() if k in moments_to_keep}
from scipy.fftpack import dct
def calc_dct2d_zigzagged_coeffs(part_img, n_diags=3):
"""Return a 1D numpy array with the zigzagged 2D DCT coefficients."""
dct_result = dct(dct(part_img, norm='ortho').T, norm='ortho')
mirrored = np.fliplr(dct_result)
idx_first = mirrored.shape[0] - 1
idx_last = idx_first - n_diags
zigzagged_coeffs = np.concatenate([np.diag(mirrored, k)
for k in range(idx_first, idx_last, -1)])
return zigzagged_coeffs
# dictionary of functions
feature_calc = {
'on_pixel_frac': calc_on_pixel_fraction,
# 'on_pixel_x_mean': partial(calc_f_on_pixel_pos, f=np.mean, axis=0),
# 'on_pixel_y_mean': partial(calc_f_on_pixel_pos, f=np.mean, axis=1),
'on_pixel_x_var': partial(calc_f_on_pixel_pos, f=np.var, axis=0),
'on_pixel_y_var': partial(calc_f_on_pixel_pos, f=np.var, axis=1),
# 'on_pixel_x_y_corr': calc_on_pixel_x_y_corr,
}
def extract_features(img_file, chars):
"""
Extract_features for a combined image. Returns a DataFrame with 1 row per character.
"""
char_images = partition_image(img_file, len(chars))
font_name = os.path.basename(img_file).split('.')[0]
features = []
for char_img in char_images:
feature_vals = {fname: fgen(char_img) for fname, fgen in feature_calc.items()}
# Calculate the moment feature values separately and update feature_vals.
moment_features = calc_moments(char_img)
feature_vals.update(moment_features)
features.append(feature_vals)
features = pd.DataFrame.from_records(features, index=chars)
features.index.name = 'char_name'
features['font_name'] = font_name
# Include the DCT features
dct_features = [calc_dct2d_zigzagged_coeffs(p) for p in char_images]
    dct_features = pd.DataFrame.from_records(dct_features, index=chars)  # use the per-image DCT list computed above, not the exploratory diag_var_dct
dct_features.columns = ['dct_{}'.format(c) for c in dct_features.columns]
# Combine DCT and other features
all_features = pd.concat([features, dct_features], axis=1)
return all_features
from IPython.display import display
from ipywidgets import FloatProgress
font_files = [os.path.join(image_dir, f) for f in os.listdir(image_dir)]
prog = FloatProgress(min=1, max=len(font_files), description='Extracting features...')
display(prog)
all_features = []
chars = ('zero', 'one', 'two', 'three', 'four',
'five', 'six', 'seven', 'eight', 'nine', 'comma')
for font_file in font_files:
feature_df = extract_features(font_file, chars)
all_features.append(feature_df)
prog.value += 1
prog.bar_style = 'success'
all_features = pd.concat(all_features, axis=0)
all_features.info()
num_values = all_features.drop('font_name', axis=1)
np.isfinite(num_values).sum(axis=0)
# This is only necessary if on_pixel_x_y_corr is included.
if 'on_pixel_x_y_corr' in all_features.keys():
invalid_corr = ~np.isfinite(all_features['on_pixel_x_y_corr'])
all_features.loc[invalid_corr, 'on_pixel_x_y_corr']
comma_mean = all_features.loc['comma', 'on_pixel_x_y_corr'].mean()
four_mean = all_features.loc['four', 'on_pixel_x_y_corr'].mean()
invalid_comma_idx = (all_features.index == 'comma') & invalid_corr
all_features.loc[invalid_comma_idx, 'on_pixel_x_y_corr'] = comma_mean
invalid_four_idx = (all_features.index == 'four') & invalid_corr
all_features.loc[invalid_four_idx, 'on_pixel_x_y_corr'] = four_mean
all_features.to_csv('char_features.csv')
"""
Explanation: Putting it Together
End of explanation
"""
|
h2oai/h2o-3 | h2o-py/demos/uplift_drf_demo.ipynb | apache-2.0 | import h2o
from h2o.estimators.uplift_random_forest import H2OUpliftRandomForestEstimator
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.style as style
import pandas as pd
h2o.init(strict_version_check=False) # max_mem_size=10
"""
Explanation: H2O Uplift Distributed Random Forest
Author: Veronika Maurerova [email protected]
Modeling Uplift
Distributed Uplift Random Forest (Uplift DRF) is a classification tool for modeling uplift - the incremental impact of a treatment. This tool is very useful for example in marketing or in medicine. This machine learning approach is inspired by the A/B testing method.
To model uplift, the analyst needs to collect data specifically - before the experiment, the objects are divided usually into two groups:
treatment group: receive some kind of treatment (for example customer get some type of discount)
control group: is separated from the treatment (customers in this group get no discount).
Then the data are prepared and an analyst can gather information about the response - for example, whether customers bought a product, patients recovered from the disease, or similar.
Uplift approaches
There are several approaches to model uplift:
Meta-learner algorithms
Instrumental variables algorithms
Neural-networks-based algorithms
Tree-based algorithms
Tree Based Uplift Algorithm
In a tree-based algorithm, every tree takes the information about treatment/control group assignment and about the response directly into the decision about splitting a node. The uplift score is the criterion used to make the split decision, similar to the Gini coefficient in a standard decision tree.
Uplift metric to decide best split
The goal is to maximize the differences between the class distributions in the treatment and control sets, so the splitting criteria are based on distribution divergences. The distribution divergence is calculated based on the uplift_metric parameter. In H2O-3, three uplift_metric types are supported:
Kullback-Leibler divergence (uplift_metric="KL") - uses logarithms to calculate divergence, asymmetric, widely used, tends to infinity values (if treatment or control group distributions contain zero values).
$ KL(P, Q) = \sum_{i=0}^{N} p_i \log{\frac{p_i}{q_i}}$
Squared Euclidean distance (uplift_metric="euclidean") - symmetric and stable distribution, does not tend to infinity values.
$ E(P, Q) = \sum_{i=0}^{N} (p_i-q_i)^2$
Chi-squared divergence (uplift_metric="chi_squared") - Euclidean divergence normalized by control group distribution. Asymmetric and also tends to infinity values (if control group distribution contains zero values).
$\chi^2(P, Q) = \sum_{i=0}^{N} \frac{(p_i-q_i)^2}{q_i}$
where:
$P$ is treatment group distribution
$Q$ is control group distribution
In a tree node the result value for a split is sum: $metric(P, Q) + metric(1-P, 1-Q)$.
For the split gain value, the result within the node is normalized using a Gini coefficient (Euclidean or ChiSquared) or entropy (KL) for each distribution before and after the split.
Uplift score in each leaf is calculated as:
$TP = (TY1 + 1) / (T + 2)$
$CP = (CY1 + 1) / (C + 2)$
$uplift_score = TP - CP $
where:
- $T$ how many observations in a leaf are from the treatment group (how many data rows in a leaf have treatment_column label == 1)
- $C$ how many observations in a leaf are from the control group (how many data rows in the leaf have treatment_column label == 0)
- $TY1$ how many observations in a leaf are from the treatment group and respond to the offer (how many data rows in the leaf have treatment_column label == 1 and response_column label == 1)
- $CY1$ how many observations in a leaf are from the control group and respond to the offer (how many data rows in the leaf have treatment_column label == 0 and response_column label == 1)
Note: A higher uplift score means that more observations from the treatment group respond to the offer than from the control group, i.e. the offered treatment has a positive effect. The uplift score can be negative if more observations from the control group respond to the offer without treatment.
<br>
<br>
H2O Implementation (Major release 3.36)
The H2O-3 implementation of Uplift DRF is based on DRF because the principle of training is similar. It is a tree-based uplift algorithm. Uplift DRF generates a forest of classification uplift trees, rather than a single classification tree. Each of these trees is a weak learner built on a subset of rows and columns. More trees will reduce the variance. Classification takes the average prediction over all trees to make a final prediction.
Currently, in H2O-3 only binomial trees are supported, as well as the uplift curve metric, the Area Under Uplift Curve (AUUC) metric, normalized AUUC, and the Qini value. We are working on adding regression trees and more metrics, for example the Qini coefficient.
Start H2O-3
End of explanation
"""
control_name = "control"
treatment_column = "treatment"
response_column = "visit"
feature_cols = ["f"+str(x) for x in range(0,12)]
df = pd.read_csv("http://mr-0xd4:50070/webhdfs/v1/datasets/criteo/v2.1/criteo-research-uplift-v2.1.csv?op=OPEN")
df.head()
"""
Explanation: Load data
To demonstrate how Uplift DRF works, Criteo dataset is used.
Source:
Diemert Eustache, Betlei Artem} and Renaudin, Christophe and Massih-Reza, Amini, "A Large Scale Benchmark for Uplift Modeling", ACM, Proceedings of the AdKDD and TargetAd Workshop, KDD, London,United Kingdom, August, 20, 2018, https://ailab.criteo.com/criteo-uplift-prediction-dataset/.
Description:
The dataset was created by The Criteo AI Lab
Consists of 13M rows, each one representing a user with 12 features, a treatment indicator and 2 binary labels (visits and conversions).
Positive labels mean the user visited/converted on the advertiser website during the test period (2 weeks).
The global treatment ratio is 84.6%.
Detailed description of the columns:
f0, f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11: feature values (dense, float)
treatment: treatment group (1 = treated, 0 = control)
conversion: whether a conversion occurred for this user (binary, label)
visit: whether a visit occurred for this user (binary, label)
exposure: treatment effect, whether the user has been effectively exposed (binary)
End of explanation
"""
print('Total number of samples: {}'.format(len(df)))
print('The dataset is largely imbalanced: ')
print(df['treatment'].value_counts(normalize = True))
print('Percentage of users that visit: {}%'.format(100*round(df['visit'].mean(),4)))
print('Percentage of users that convert: {}%'.format(100*round(df['conversion'].mean(),4)))
print('Percentage of visitors that convert: {}%'.format(100*round(df[df["visit"]==1]["conversion"].mean(),4)))
# Print proportion of a binary column
# https://www.kaggle.com/code/hughhuyton/criteo-uplift-modelling/notebook
def print_proportion(df, column):
fig = plt.figure(figsize = (10,6))
target_count = df[column].value_counts()
print('Class 0:', target_count[0])
print('Class 1:', target_count[1])
print('Proportion:', int(round(target_count[1] / target_count[0])), ': 1')
target_count.plot(kind='bar', title='Treatment Class Distribution', color=['#2077B4', '#FF7F0E'], fontsize = 15)
plt.xticks(rotation=0)
print_proportion(df, treatment_column)
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42, stratify=df['treatment'])
print(train_df.shape)
print(test_df.shape)
del(df)
print_proportion(train_df, treatment_column)
# Random Undersampling (finding the majority class and undersampling it)
# https://www.kaggle.com/code/hughhuyton/criteo-uplift-modelling/notebook
def random_under(df, feature):
target = df[feature].value_counts()
if target.values[0]<target.values[1]:
under = target.index.values[1]
else:
under = target.index.values[0]
df_0 = df[df[feature] != under]
df_1 = df[df[feature] == under]
df_treatment_under = df_1.sample(len(df_0))
df_1 = pd.concat([df_treatment_under, df_0], axis=0)
return df_1
train_df = random_under(train_df, treatment_column)
print_proportion(train_df, treatment_column)
# method to transform data for the LGWUM method, explained later
def target_class_lgwum(df, treatment, target, column_name):
#CN:
df[column_name] = 0
#CR:
df.loc[(df[treatment] == 0) & (df[target] != 0), column_name] = 1
#TN:
df.loc[(df[treatment] != 0) & (df[target] == 0), column_name] = 2
#TR:
df.loc[(df[treatment] != 0) & (df[target] != 0), column_name] = 3
return df
response_column_lgwum = "lgwum_response"
train_df = target_class_lgwum(train_df, treatment_column, response_column, response_column_lgwum)
test_df = target_class_lgwum(test_df, treatment_column, response_column, response_column_lgwum)
"""
Explanation: Prepare data
Inspiration from: https://www.kaggle.com/code/hughhuyton/criteo-uplift-modelling/notebook
To model uplift, the treatment and control group data have to have a similar distribution. In the real world the control group is usually smaller than the treatment group. This is also the case for the Criteo dataset, so we have to rebalance the data so that the groups have a similar size.
End of explanation
"""
h2o_train_df = h2o.H2OFrame(train_df)
del(train_df)
h2o_train_df[treatment_column] = h2o_train_df[treatment_column].asfactor()
h2o_train_df[response_column] = h2o_train_df[response_column].asfactor()
h2o_train_df[response_column_lgwum] = h2o_train_df[response_column_lgwum].asfactor()
h2o_test_df = h2o.H2OFrame(test_df)
h2o_test_df[treatment_column] = h2o_test_df[treatment_column].asfactor()
h2o_test_df[response_column] = h2o_test_df[response_column].asfactor()
h2o_test_df[response_column_lgwum] = h2o_test_df[response_column_lgwum].asfactor()
del(test_df)
"""
Explanation: Import data to H2O
End of explanation
"""
ntree = 20
max_depth = 15
metric="Euclidean"
h2o_uplift_model = H2OUpliftRandomForestEstimator(
ntrees=ntree,
max_depth=max_depth,
min_rows=30,
nbins=1000,
sample_rate=0.80,
score_each_iteration=True,
treatment_column=treatment_column,
uplift_metric=metric,
auuc_nbins=1000,
auuc_type="gain",
seed=42)
h2o_uplift_model.train(y=response_column, x=feature_cols, training_frame=h2o_train_df)
h2o_uplift_model
"""
Explanation: Train H2O UpliftDRF model
End of explanation
"""
# Plot uplift score
# source https://www.kaggle.com/code/hughhuyton/criteo-uplift-modelling/notebook
def plot_uplift_score(uplift_score):
plt.figure(figsize = (10,6))
plt.xlim(-.05, .1)
plt.hist(uplift_score, bins=1000, color=['#2077B4'])
plt.xlabel('Uplift score')
plt.ylabel('Number of observations in validation set')
h2o_uplift_pred = h2o_uplift_model.predict(h2o_test_df)
h2o_uplift_pred
plot_uplift_score(h2o_uplift_pred['uplift_predict'].as_data_frame().uplift_predict)
"""
Explanation: Predict and plot Uplift Score
End of explanation
"""
perf_h2o = h2o_uplift_model.model_performance(h2o_test_df)
"""
Explanation: Evaluate the model
End of explanation
"""
perf_h2o.auuc_table()
"""
Explanation: Area Under Uplift Curve (AUUC) calculation
To calculate AUUC for big data, the predictions are binned into histograms. Because of this binning, the results can differ from the exact computation.
To define AUUC, the binned predictions are sorted from the largest to the smallest value. For every group, the cumulative sums of the observation statistics are calculated. The uplift is defined based on these statistics.
Types of AUUC
| AUUC type | Formula |
|:----------:|:-------------------------------------------:|
| Qini | $TY1 - CY1 * \frac{T}{C}$ |
| Lift | $\frac{TY1}{T} - \frac{CY1}{C}$ |
| Gain | $(\frac{TY1}{T} - \frac{CY1}{C}) * (T + C)$ |
Where:
T how many observations are in the treatment group (how many data rows in the bin have treatment_column label == 1)
C how many observations are in the control group (how many data rows in the bin have treatment_column label == 0)
TY1 how many observations are in the treatment group and respond to the offer (how many data rows in the bin have treatment_column label == 1 and response_column label == 1)
CY1 how many observations are in the control group and respond to the offer (how many data rows in the bin have treatment_column label == 0 and response_column label == 1)
The resulting AUUC value is:
Not normalized.
The result could be a positive or negative number.
Higher number means better model.
More information about normalization is in Normalized AUUC section.
For some observation groups the results can be NaN. In this case, the results from NaN groups are linearly interpolated to calculate AUUC and plot the uplift curve.
End of explanation
"""
perf_h2o.plot_uplift(metric="qini")
perf_h2o.plot_uplift(metric="gain")
perf_h2o.plot_uplift(metric="lift")
"""
Explanation: Cumulative Uplift curve plot
To plot the uplift curve, the plot_upliftmethod can be used. There is specific parameter metric which can be "qini", "gain", or "lift". The most popular is the Qini uplift curve which is similar to the ROC curve. The Gain and Lift curves are also known from traditional binomial models.
Depending on these curves, you can decide how many observations (for example customers) from the test dataset you send an offer to get optimal gain.
End of explanation
"""
perf_h2o.aecu_table()
"""
Explanation: Qini value and Average Excess Cumulative Uplift (AECU)
Qini value is calculated as the difference between the Qini AUUC and area under the random uplift curve (random AUUC). The random AUUC is computed as diagonal from zero to overall gain uplift.
The Qini value can be generalized for all AUUC metric types. So AECU for Qini metric is the same as Qini value, but the AECU can be also calculated for Gain and Lift metric type. These values are stored in aecu_table.
End of explanation
"""
perf_h2o.plot_uplift(metric="gain", normalize=True)
perf_h2o.auuc_normalized()
"""
Explanation: Normalized AUUC
To get normalized AUUC, you have to call auuc_normalized method. The normalized AUUC is calculated from uplift values which are normalized by uplift value from maximal treated number of observations. So if you have for example uplift values [10, 20, 30] the normalized uplift is [1/3, 2/3, 1]. If the maximal value is negative, the normalization factor is the absolute value from this number. The normalized AUUC can be again negative and positive and can be outside of (0, 1) interval. The normalized AUUC for auuc_metric="lift" is not defined, so the normalized AUUC = AUUC for this case. Also the plot_uplift with metric="lift" is the same for normalize=False and normalize=True.
End of explanation
"""
h2o_uplift_model.scoring_history()
"""
Explanation: Scoring history and the importance of the number of trees
To speed up the calculation of AUUC, the predictions are binned into quantile histograms. To calculate AUUC precisely, the more bins the better. More trees usually produce more varied predictions, so the algorithm creates histograms with more bins and needs more iterations to get meaningful AUUC results.
In the scoring history table you can see the number of bins as well as the resulting AUUC. There is also the Qini value, which accounts for the number of bins and is therefore a better indicator of model improvement. In the scoring history table below you can see that the algorithm stabilized after building 6 trees, but how many trees are necessary depends on the data and the model settings.
End of explanation
"""
from h2o.estimators.gbm import H2OGradientBoostingEstimator
h2o_gbm_lgwum = H2OGradientBoostingEstimator(ntrees=ntree,
max_depth=max_depth,
min_rows=30,
nbins=1000,
score_each_iteration=False,
seed=42)
h2o_gbm_lgwum.train(y=response_column_lgwum, x=feature_cols, training_frame=h2o_train_df)
h2o_gbm_lgwum
uplift_predict_lgwum = h2o_gbm_lgwum.predict(h2o_test_df)
result = uplift_predict_lgwum.as_data_frame()
result.columns = ['predict', 'p_cn', 'p_cr', 'p_tn', 'p_tr']
result['uplift_score'] = result.eval('\
p_cn/(p_cn + p_cr) \
+ p_tr/(p_tn + p_tr) \
- p_tn/(p_tn + p_tr) \
- p_cr/(p_cn + p_cr)')
result
plot_uplift_score(result.uplift_score)
lgwum_predict = h2o.H2OFrame(result['uplift_score'].tolist())
perf_lgwum = h2o.make_metrics(lgwum_predict, h2o_test_df[response_column], treatment=h2o_test_df[treatment_column], auuc_type="gain", auuc_nbins=81)
perf_lgwum
perf_h2o.plot_uplift(metric="qini")
perf_lgwum.plot_uplift(metric="qini")
"""
Explanation: Comparasion Tree-based approach and Generalized Weighed Uplift (LGWUM)
LGWUM (Kane et al., 2014) is one of several methods available for Uplift Modeling, and uses an approach to Uplift Modelling better known as Class Variable Transformation. LGWUM assumes that positive uplift lies in treating treatment-group responders (TR) and control-group non-responders (CN), whilst avoiding treatment-group non-responders (TN) and control-group responders (CR). This is visually shown as:
𝑈𝑝𝑙𝑖𝑓𝑡 𝐿𝐺𝑊𝑈𝑀 = P(TR)/P(T) + P(CN)/P(C) - P(TN)/P(T) - P(CR)/P(C)
source: https://www.kaggle.com/code/hughhuyton/criteo-uplift-modelling/notebook
End of explanation
"""
|
DOV-Vlaanderen/pydov | docs/notebooks/search_grondwatermonsters.ipynb | mit | %matplotlib inline
import inspect, sys
# check pydov path
import pydov
"""
Explanation: Example of DOV search methods for groundwater samples (grondwatermonsters)
Use cases:
Get groundwater samples in a bounding box
Get groundwater samples with specific properties
Get the coordinates of all groundwater samples in Ghent
Get groundwater samples based on a combination of specific properties
Get groundwater samples based on a selection of screens (filters)
End of explanation
"""
from pydov.search.grondwatermonster import GrondwaterMonsterSearch
gwmonster = GrondwaterMonsterSearch()
"""
Explanation: Get information about the datatype 'GrondwaterMonster'
End of explanation
"""
print(gwmonster.get_description())
"""
Explanation: A description is provided for the 'GrondwaterMonster' datatype:
End of explanation
"""
fields = gwmonster.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
"""
Explanation: The different fields that are available for objects of the 'GrondwaterMonster' datatype can be requested with the get_fields() method:
End of explanation
"""
# print information for a certain field
fields['waarde']
"""
Explanation: You can get more information of a field by requesting it from the fields dictionary:
* name: name of the field
* definition: definition of this field
* cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.
* notnull: whether the field is mandatory or not
* type: datatype of the values of this field
End of explanation
"""
# if an attribute can have several values, these are listed under 'values', e.g. for 'parameter':
list(fields['parameter']['values'].items())[0:10]
fields['parameter']['values']['NH4']
"""
Explanation: Optionally, if the values of the field have a specific domain the possible values are listed as values:
End of explanation
"""
from pydov.util.location import Within, Box
df = gwmonster.search(location=Within(Box(93378, 168009, 94246, 169873)))
df.head()
"""
Explanation: Example use cases
Get groundwater samples in a bounding box
Get data for all the groundwater samples that are geographically located within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.
End of explanation
"""
for pkey_grondwatermonster in df.pkey_grondwatermonster.unique()[0:5]:
print(pkey_grondwatermonster)
"""
Explanation: Using the pkey attributes one can request the details of the corresponding grondwatermonster in a webbrowser (only showing the first unique records):
End of explanation
"""
[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]
"""
Explanation: Get groundwater samples with specific properties
Next to querying groundwater samples based on their geographic location within a bounding box, we can also search for groundwater samples matching a specific set of properties. For this we can build a query using a combination of the 'GrondwaterMonster' fields and operators provided by the WFS protocol.
A list of possible operators can be found below:
End of explanation
"""
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(
propertyname='gemeente',
literal='Leuven')
df = gwmonster.search(query=query)
df.head()
"""
Explanation: In this example we build a query using the PropertyIsEqualTo operator to find all groundwater samples that are within the community (gemeente) of 'Leuven':
End of explanation
"""
for pkey_grondwatermonster in df.pkey_grondwatermonster.unique()[0:5]:
print(pkey_grondwatermonster)
"""
Explanation: Once again we can use the pkey_grondwatermonster as a permanent link to the information of the groundwater samples:
End of explanation
"""
df['parameter_label'] = df['parameter'].map(fields['parameter']['values'])
df[['pkey_grondwatermonster', 'datum_monstername', 'parameter', 'parameter_label', 'waarde', 'eenheid']].head()
"""
Explanation: We can add the descriptions of the parameter values as an extra column 'parameter_label':
End of explanation
"""
from owslib.fes import Or, Not, PropertyIsNull, PropertyIsLessThanOrEqualTo, And, PropertyIsLike
query = And([PropertyIsEqualTo(propertyname='gemeente',
literal='Hamme'),
PropertyIsEqualTo(propertyname='kationen',
literal='true')
])
df_hamme = gwmonster.search(query=query,
return_fields=('pkey_grondwatermonster', 'parameter', 'parametergroep', 'waarde', 'eenheid','datum_monstername'))
df_hamme.head()
"""
Explanation: Get groundwater samples based on a combination of specific properties
Get all groundwater samples in Hamme that have measurements for cations (kationen), then filter to keep only the sodium values after fetching all records.
End of explanation
"""
df_hamme = df_hamme[df_hamme.parameter=='Na']
df_hamme.head()
"""
Explanation: You should note that this initial dataframe contains all parameters (not just the cations). The filter will only make sure that only samples where any cation was analysed are in the list. If we want to filter more, we should do so in the resulting dataframe.
End of explanation
"""
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Working with water samples
For further analysis and visualisation of the time series data, we can use the data analysis library pandas and visualisation library matplotlib.
End of explanation
"""
query = PropertyIsEqualTo(
propertyname='pkey_filter',
literal='https://www.dov.vlaanderen.be/data/filter/1991-001040')
df = gwmonster.search(query=query)
df.head()
"""
Explanation: Query the data of a specific filter using its pkey:
End of explanation
"""
df['datum_monstername'] = pd.to_datetime(df['datum_monstername'])
"""
Explanation: The date is still stored as a string type. Transform it to a datetime type using the pandas function to_datetime; these dates will later be used as the row index:
End of explanation
"""
pivot = df.pivot_table(columns=df.parameter, values='waarde', index='datum_monstername')
pivot
"""
Explanation: For many usecases, it is useful to create a pivoted table, showing the value per parameter
End of explanation
"""
parameters = ['NO3', 'NO2', 'NH4']
ax = pivot[parameters].plot.line(style='.-', figsize=(12, 5))
ax.set_xlabel('');
ax.set_ylabel('concentration (mg/l)');
ax.set_title('Concentration nitrite, nitrate and ammonium for filter id 1991-001040');
"""
Explanation: Plotting
The default plotting functionality of Pandas can be used:
End of explanation
"""
from pydov.search.grondwaterfilter import GrondwaterFilterSearch
from pydov.util.query import Join
gfs = GrondwaterFilterSearch()
gemeente = 'Kalmthout'
filter_query = And([PropertyIsLike(propertyname='meetnet',
literal='meetnet 1 %'),
PropertyIsEqualTo(propertyname='gemeente',
literal=gemeente)])
filters = gfs.search(query=filter_query, return_fields=['pkey_filter'])
monsters = gwmonster.search(query=Join(filters, 'pkey_filter'))
monsters.head()
"""
Explanation: Combine search in filters and groundwater samples
For this example, we will first search filters, and later search all samples for this selection.
We will select filters in the primary network located in Kalmthout.
End of explanation
"""
parameter = 'NH4'
trends_sel = monsters[(monsters.parameter==parameter) & (monsters.veld_labo=='LABO')]
trends_sel = trends_sel.set_index('datum_monstername')
trends_sel['label'] = trends_sel['gw_id'] + ' F' + trends_sel['filternummer']
# By pivoting, we get each location in a different column
trends_sel_pivot = trends_sel.pivot_table(columns='label', values='waarde', index='datum_monstername')
trends_sel_pivot.index = pd.to_datetime(trends_sel_pivot.index)
# resample to yearly values and plot data
ax = trends_sel_pivot.resample('A').median().plot.line(style='.-', figsize=(12, 5))
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_title(f'Long term evolution of {parameter} in {gemeente}');
ax.set_xlabel('year');
ax.set_ylabel('concentration (mg/l)');
"""
Explanation: We will filter out some parameters, and show trends per location.
End of explanation
"""
|
Quantiacs/quantiacs-python | sampleSystems/svm_momentum_tutorial.ipynb | mit | import quantiacsToolbox
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import svm
%matplotlib inline
%%html
<style>
table {float:left}
</style>
"""
Explanation: Quantiacs Toolbox Sample: Support Vector Machine(Momentum)
This tutorial will show you how to use an SVM and momentum to predict the trend using the Quantiacs toolbox.
We use the 20-day closing-price momentum over the last week (5 days) as features and the trend of the next day as the label.
For each prediction, we look back one year (252 days).
End of explanation
"""
F_AD = pd.read_csv('./tickerData/F_AD.txt')
CLOSE = np.array(F_AD.loc[:252-1, [' CLOSE']])
plt.plot(CLOSE)
"""
Explanation: For developing and testing a strategy, we will use the raw data in the tickerData folder that has been downloaded via the Toolbox's loadData() function.
This is just a simple sample to show how svm works.
Extract the closing price of the Australian Dollar future (F_AD) for the past year:
End of explanation
"""
momentum = (CLOSE[20:] - CLOSE[:-20]) / CLOSE[:-20]
plt.plot(momentum)
"""
Explanation: Momentum is generally defined as the return between two points in time separated by a fixed interval:
(p2-p1)/p1
Momentum is an indicator of the average speed of price on a time scale defined by the interval.
The most used intervals by investors are 1, 3, 6 and 12 months, or their equivalent in trading days.
Calculate 20-day momentum:
End of explanation
"""
X = np.concatenate([momentum[i:i+5] for i in range(252-20-5)], axis=1).T
y = np.sign((CLOSE[20+5:] - CLOSE[20+5-1: -1]).T[0])
"""
Explanation: Now we can create samples.
Use the last 5 days' momentum as features.
We will use a binary trend: y = 1 if price goes up, y = -1 if price goes down
For example, given close price, momentum at 19900114:
| DATE | CLOSE | MOMENTUM |
| :--- | ----- | -------- |
| 19900110 | 77580.0 | -0.01778759 |
| 19900111 | 77980.0 | -0.00599427 |
| 19900112 | 78050.0 | -0.01574397 |
| 19900113 | 77920.0 | -0.00402702 |
| 19900114 | 77770.0 | -0.01696891 |
| 19900115 | 78060.0 | -0.01824298 |
Corresponding sample should be
x = (-0.01778759, -0.00599427, -0.01574397, -0.00402702, -0.01696891)
y = 1
End of explanation
"""
clf = svm.SVC()
clf.fit(X, y)
clf.predict(momentum[-5:].T)
"""
Explanation: Use svm learn and predict:
End of explanation
"""
F_AD.loc[251:252, ['DATE', ' CLOSE']]
"""
Explanation: 1 shows that the close price will go up tomorrow.
What is the real result?
End of explanation
"""
class myStrategy(object):
def myTradingSystem(self, DATE, OPEN, HIGH, LOW, CLOSE, VOL, OI, P, R, RINFO, exposure, equity, settings):
def predict(momentum, CLOSE, lookback, gap, dimension):
X = np.concatenate([momentum[i:i + dimension] for i in range(lookback - gap - dimension)], axis=1).T
y = np.sign((CLOSE[dimension+gap:] - CLOSE[dimension+gap-1:-1]).T[0])
y[y==0] = 1
clf = svm.SVC()
clf.fit(X, y)
return clf.predict(momentum[-dimension:].T)
nMarkets = len(settings['markets'])
lookback = settings['lookback']
dimension = settings['dimension']
gap = settings['gap']
pos = np.zeros((1, nMarkets), dtype=np.float)
momentum = (CLOSE[gap:, :] - CLOSE[:-gap, :]) / CLOSE[:-gap, :]
for market in range(nMarkets):
try:
pos[0, market] = predict(momentum[:, market].reshape(-1, 1),
CLOSE[:, market].reshape(-1, 1),
lookback,
gap,
dimension)
except ValueError:
pos[0, market] = .0
return pos, settings
def mySettings(self):
""" Define your trading system settings here """
settings = {}
# Futures Contracts
settings['markets'] = ['CASH', 'F_AD', 'F_BO', 'F_BP', 'F_C', 'F_CC', 'F_CD',
'F_CL', 'F_CT', 'F_DX', 'F_EC', 'F_ED', 'F_ES', 'F_FC', 'F_FV', 'F_GC',
'F_HG', 'F_HO', 'F_JY', 'F_KC', 'F_LB', 'F_LC', 'F_LN', 'F_MD', 'F_MP',
'F_NG', 'F_NQ', 'F_NR', 'F_O', 'F_OJ', 'F_PA', 'F_PL', 'F_RB', 'F_RU',
'F_S', 'F_SB', 'F_SF', 'F_SI', 'F_SM', 'F_TU', 'F_TY', 'F_US', 'F_W', 'F_XX',
'F_YM']
settings['lookback'] = 252
settings['budget'] = 10 ** 6
settings['slippage'] = 0.05
settings['gap'] = 20
settings['dimension'] = 5
return settings
result = quantiacsToolbox.runts(myStrategy)
"""
Explanation: Hooray! Our strategy successfully predict the trend.
Now we can use Quantiac's Toolbox to run our strategy.
End of explanation
"""
|
liganega/Gongsu-DataSci | notebooks/GongSu11_List_Comprehension.ipynb | gpl-3.0 | odd_20 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
"""
Explanation: List Comprehension
Main topics
When we want to build a new list from a given list whose items satisfy a certain property, list comprehension lets us write very efficient code.
List comprehension works very much like the set-builder notation used to define sets.
For example, to define the set of odd numbers between 0 and 100 million, we can use one of two notations.
Roster notation (listing the elements)
{1, 3, 5, 7, 9, 11, ..., 99999999}
The ellipsis (...) in the middle is used because it is impossible to list all 50 million odd numbers between 0 and 100 million.
In fact, even writing one number per second would take 50 million seconds, about 1 year and 8 months.
Set-builder notation
{ x | 0 <= x <= 100000000, where x is odd}
Here we will learn how to create new lists using comprehension notation.
Today's main example
Let's draw the graph of the function $y = x^2$ as shown below,
where $x$ takes values between -10 and 10.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/pyplot_exp.png" style="width:350">
</td>
</tr>
</table>
</p>
Creating a list that satisfies a certain property
Example
How can we build a list containing, in order, all odd numbers between 0 and 20?
As with sets, we can use either the roster notation or the comprehension notation.
End of explanation
"""
i = 0
odd_20 = []
while i <= 20:
if i % 2 == 1:
odd_20.append(i)
i += 1
print(odd_20)
"""
Explanation: Alternatively, we can use a loop.
while loop: use the list's append() method.
End of explanation
"""
odd_20 = []
for i in range(21):
if i % 2 == 1:
odd_20.append(i)
print(odd_20)
"""
Explanation: for loop: use the range() function.
End of explanation
"""
odd_nums = [1, 3, 5, 7, 9, 11, ..., 99999999]
"""
Explanation: Example
Can we now build the list of all odd numbers between 0 and 100 million, in order, using roster notation?
The answer is no. We can write the ellipsis as we do when defining a set, but it does not work as intended.
For example, let's declare the list of all odd numbers between 0 and 100 million as below.
End of explanation
"""
print(odd_nums)
"""
Explanation: If we inspect it, it looks as if it works the way we learned in school.
End of explanation
"""
odd_nums[:10]
"""
Explanation: Note: Ellipsis stands for the omission.
However, if we slice the list to get the first 10 odd numbers, we get a nonsensical result:
End of explanation
"""
def odd_number(num):
L=[]
for i in range(num):
if i%2 == 1:
L.append(i)
return L
"""
Explanation: The reason it behaves like this is that the Python interpreter has no idea by what rule the omitted part should be filled in.
On the other hand, using a loop is always possible.
For example, the function below builds and returns the list of all odd numbers between 0 and a given number, in order.
End of explanation
"""
odd_number(20)
"""
Explanation: The list of odd numbers between 0 and 20 is as follows.
End of explanation
"""
odd_100M = odd_number(100000000)
"""
Explanation: Now let's create the list of odd numbers between 0 and 100 million.
Note: do not run a command like the one below; printing 50 million numbers would be a foolish thing to do.
print(odd_number(100000000))
End of explanation
"""
print(odd_100M[:20])
"""
Explanation: This takes a while.
The time differs depending on the computer, but generating the 50 million odd numbers below 100 million
takes on the order of ten seconds on a recent laptop.
To check that the list of odd numbers was built correctly, let's look at the first 20 of them.
End of explanation
"""
import time
start_time = time.perf_counter()  # time.clock() was removed in Python 3.8
odd_100M = odd_number(100000000)
end_time = time.perf_counter()
print(end_time - start_time, "seconds")
"""
Explanation: Appendix: measuring program execution time
To measure how long a program takes to run, we can use a timer function from the time module (here time.perf_counter()).
Its return value is a clock reading taken at the moment the function is called, so the difference of two readings gives the elapsed time.
You do not need to understand the details of how the clock works.
It is enough to have seen once how the time module is used for this purpose.
End of explanation
"""
odd_100M = [x for x in range(100000001) if x % 2 == 1]
odd_100M[:10]
"""
Explanation: Now let's ask the question a bit differently.
Can we define the odd_number function more concisely?
For this, Python provides a technique called list comprehension.
Not every language supports this technique.
For example, C# supports a similar role with from ... where ... select ..., although it is somewhat different,
and in Java a similar feature can be implemented using functional interfaces.
Understanding list comprehension
List comprehension works very much like the set-builder notation used to define sets.
As an example, we walk through the process of building the list of odd numbers between 0 and 100 million, in order,
to help understand the comprehension notation.
First, express the set of odd numbers between 0 and 100 million in set-builder notation, as described in the overview.
{x | 0 <= x <= 100000000, where x is odd}
Now replace the set brackets with list brackets.
[x | 0 <= x <= 100000000, where x is odd]
Replace the vertical bar (|) of the set notation with for.
[x for 0 <= x <= 100000000, where x is odd]
Convert the part to the right of the bar, the inequality 0 <= x <= 100000000 describing the range over which x moves,
into a Python expression.
Usually the range is specified in the form x in ..., either using an existing list or the range() function.
[x for x in range(100000000+1), where x is odd]
Finally, convert the remaining constraint on x, "where x is odd", into a Python if clause.
For example, "x is odd" can be written in Python as x % 2 == 1.
[x for x in range(100000001) if x % 2 == 1]
End of explanation
"""
odd_100M_square = [x**2 for x in range(100000000) if x % 2== 1]
odd_100M_square[:10]
"""
Explanation: Example
We can use a comprehension to build the list of the squares of the odd numbers between 0 and 100 million.
End of explanation
"""
odd_100M_square = [x**2 for x in odd_100M]
odd_100M_square[:10]
"""
Explanation: Of course, we can reuse the odd_100M list created earlier.
End of explanation
"""
odd_100M2 = [2 * x + 1 for x in range(50000000)]
odd_100M2[:10]
"""
Explanation: Example
Let's build the list of odd numbers between 0 and 100 million with a different comprehension.
First, note that every odd number has the form 2*x + 1.
Therefore the odd numbers below 100 million can be generated as follows.
End of explanation
"""
odd_100M2 = []
for x in range(50000000):
odd_100M2.append(2*x+1)
odd_100M2[:10]
"""
Explanation: This version looks a bit simpler, because it has no if clause.
The same comprehension, implemented with a for loop, looks like this.
End of explanation
"""
%matplotlib inline
"""
Explanation: Solving today's main example
We want to draw the graph of the function $y = x^2$.
To draw the graph we use the matplotlib.pyplot module.
Code starting with a percent sign (%), like the line above, is used only in Jupyter notebooks;
it makes the plots appear directly inside the notebook.
It is not needed when using a Python editor such as Spyder.
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: Since the module name matplotlib.pyplot is long, it is usually abbreviated to plt.
End of explanation
"""
### start of plot set-up ###
# From here down to the three-hash marker below, the code only prepares the figure.
# Don't try to understand it in detail; just remember it.
# It sets up the canvas on which the graph will be drawn.
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# Put the x-axis at the bottom and the y-axis at the center of the figure.
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('zero')
# Remove the box surrounding the plot.
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
### end of plot set-up ###
# Provide the lists of x-coordinates and y-coordinates.
# Here we use list comprehensions.
xs = [x for x in range(-10, 11, 5)]
ys = [x**2 for x in xs]
# Now call the plot() function to draw the graph.
plt.plot(xs, ys)
plt.show()
"""
Explanation: To draw a graph we first have to plot as many points as needed.
Remember that a point on a 2-dimensional graph is a pair of an x-coordinate and a y-coordinate.
In Python, to plot a set of points we must supply the list of their x-coordinates and the list of their y-coordinates.
In general, the more points we plot the more accurate the graph, but even a few points give a reasonable-looking graph.
For example, to draw the graph connecting the five points (-10, 100), (-5, 25), (0, 0), (5, 25), (10, 100), we use
xs = [-10, -5, 0, 5, 10]
and
ys = [100, 25, 0, 25, 100]
as the lists of x-coordinates and y-coordinates of the points.
Note that each item of the ys list is the square of the item at the same position in the xs list.
End of explanation
"""
### start of plot set-up ###
# From here down to the three-hash marker below, the code only prepares the figure.
# Don't try to understand it in detail; just remember it.
# It sets up the canvas on which the graph will be drawn.
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# Put the x-axis at the bottom and the y-axis at the center of the figure.
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('zero')
# Remove the box surrounding the plot.
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
### end of plot set-up ###
# Provide the lists of x-coordinates and y-coordinates.
# Here we use list comprehensions.
xs = [x for x in range(-10, 11)]
ys = [x**2 for x in xs]
# Now call the plot() function to draw the graph.
plt.plot(xs, ys)
plt.show()
"""
Explanation: Plotting more points gives a smoother graph.
End of explanation
"""
from math import exp
[exp(n) for n in range(10) if n % 2 == 1]
"""
Explanation: Exercises
Exercise
The standard exponential function $f(x) = e^x$ is available in the math module as exp().
Implement the list below using a comprehension.
$$[e^1, e^3, e^5, e^7, e^9]$$
Note: the value of $e$ is approximately 2.718.
Sample answer:
End of explanation
"""
[exp(3*n) for n in range(1,6)]
"""
Explanation: Exercise
Implement the list below using a comprehension.
$$[e^3, e^6, e^9, e^{12}, e^{15}]$$
Hint: you can use range(1, 6).
Sample answer:
End of explanation
"""
about_python = 'Python is a general-purpose programming language. \
It is becoming more and more popular \
for doing data science.'
"""
Explanation: Exercise
Comprehensions are very effective for processing data.
For example, we can analyze the lengths of the words used in an English sentence.
Below is a passage introducing Python.
End of explanation
"""
words = about_python.split()
words
"""
Explanation: To analyze the lengths of the words in the passage above, we first split it into words.
For this we use the split() method of strings.
End of explanation
"""
L =[]
for x in words:
L.append((x.upper(), len(x)))
L
"""
Explanation: We want to build a list of tuples, where each tuple contains the upper-cased version of an item of the words list together with the length of that string.
[('PYTHON', 6), ('IS', 2), ....]
Using a loop, this can be written as follows.
End of explanation
"""
[(x.upper(), len(x)) for x in words]
"""
Explanation: With a list comprehension, this can be implemented more concisely as below.
End of explanation
"""
[(x.upper(), len(x)) for x in words[:5]]
"""
Explanation: If we only want to handle the first five words, we can do it like this.
End of explanation
"""
[(words[n].upper(), len(words[n])) for n in range(len(words)) if n < 5]
"""
Explanation: It is also possible to constrain the indices as below, i.e. by adding an extra if clause.
End of explanation
"""
[(x.strip('.').upper(), len(x.strip('.'))) for x in words]
"""
Explanation: Question:
Among the words above, 'language.' and 'science.' contain a period.
Modify the code above so that it reports the word lengths without counting the period.
Hint: use the strip() string method.
Sample answer:
End of explanation
"""
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
xs = [x for x in range(-10, 11)]
ys = [max(0, x) for x in xs]
plt.plot(xs, ys)
plt.show()
"""
Explanation: Exercise
Let's plot the ReLU (Rectified Linear Unit) function, widely used as an activation function in artificial neural networks in machine learning. The ReLU function is defined as follows.
$$
f(x) = \begin{cases} 0 & \text{if } x < 0, \\ x & \text{if } x \ge 0. \end{cases}
$$
Reference: a short explanation of the ReLU function can be found here.
Sample answer:
End of explanation
"""
|
akritichadda/K-AND | daniel/.ipynb_checkpoints/DB_project_oculomotor_v2-checkpoint.ipynb | mit | fid_VIS_SCm, fpd_VIS_SCm=get_connectivity('VIS','SCm')
fid_SCm_PRNc, fpd_SCm_PRNc=get_connectivity('SCm','PRNc')
fid_SCm_PRNr, fpd_SCm_PRNr=get_connectivity('SCm','PRNr')
fid_PRNc_III, fpd_PRNc_III=get_connectivity('PRNc','III')
fid_PRNc_VI, fpd_PRNc_VI=get_connectivity('PRNc','VI')
fid_PRNr_VI, fpd_PRNr_VI=get_connectivity('PRNr','VI')
fid_PRNr_III, fpd_PRNr_III=get_connectivity('PRNr','III')
iu_c,iw_c=get_mean_injection_density(fid_SCm_PRNc,fpd_SCm_PRNc)
iu,iw=get_mean_injection_density(fid_SCm_PRNr, fpd_SCm_PRNr)
plot_max_voxels(iu,'SCm')
plot_max_voxels(iw,'SCm')
plot_max_voxels(iw_c,'SCm')
"""
Explanation: Abbreviations for structures of interest in the oculomotor circuit
superior colliculus, motor-related: 'SCm'
pontine reticular formation, caudal part: 'PRNc'
pontine reticular formation: 'PRNr'
oculomotor nucleus: 'III'
abducens nucleus: 'VI'
get connectivity for your structures of interest
End of explanation
"""
# # plot histogram for VIS_SCm_npv data
# vs_npv = VIS_SCm_npv.values()
# _,_,_=plt.hist(vs_npv, 25, facecolor='green', alpha=0.75)
# spc_npv = SCm_PRNc_npv.values()
# _,_,_=plt.hist(spc_npv, 25, facecolor='green', alpha=0.75)
# spr_npv = SCm_PRNr_npv.values()
# _,_,_=plt.hist(spr_npv, 25, facecolor='green', alpha=0.75)
# plt.scatter(spr_npv,spc_npv,alpha=.5,s=80)
"""
Explanation: IDEAS
visualize injection locations in a particular area, color coded by NPV strength. See if there is spatial clustering.
figure out a way to infer the "path of least resistance" from one area to another, across multiple nodes.
End of explanation
"""
|
ajgpitch/qutip-notebooks | examples/qip-noisy-device-simulator.ipynb | lgpl-3.0 | import copy
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi
from qutip.qip.device import Processor
from qutip.operators import sigmaz, sigmay, sigmax, destroy
from qutip.states import basis
from qutip.metrics import fidelity
from qutip.qip.operations import rx, ry, rz, hadamard_transform
"""
Explanation: Noisy quantum device simulation with QuTiP
Author: Boxi Li ([email protected])
This is the introduction notebook to the deliverable of one of the Google Summer of Code 2019 (GSoC 2019) projects, "Noise Models in QIP Module", under the organization NumFOCUS. The final product of the project is a framework for a noisy quantum device simulator based on QuTiP open system solvers.
The simulation of quantum information processing (QIP) is usually achieved by gate matrix products. Many simulators, such as the simulation backends of Qiskit and ProjectQ, are based on it. QuTiP offers this common way of simulation with the class qutip.qip.QubitCircuit, which simulates QIP in the circuit model. You can find the introduction notebook for this matrix gate representation here.
The simulation introduced here is different as it simulates the dynamics of the quantum device at the level of driving Hamiltonians. It is closer to the physical realization than the matrix product approach and is more convenient when simulating the noise of physical hardware. The simulator is based on QuTiP Lindbladian equation solvers and is defined as qutip.qip.device.Processor. The basic element is the control pulse characterized by the driving Hamiltonian, target qubits, time sequence and pulse strength. Our way of simulation offers a practical way to diagnostically add noise to each pulse or the whole device at the Hamiltonian level. Based on this pulse level control, different backends can be defined for different physical systems such as Cavity QED, Ion trap or Circuit QED. For each backend, a compiler needs to be defined. In the end, the Processor will be able to transfer a simple quantum circuit into the control pulse sequence, add noise automatically and perform the noisy simulation.
This notebook contains the most basic part of this quantum device simulator, i.e. the noisy evolution under given control pulses. It demonstrates how to set up the parameters and introduce different kinds of noise into the evolution.
Note
This module is still under active development. Be ready for some adventures and unexpected edges. Please do not hesitate to raise an issue on our GitHub website if you find any bugs. A new release might break some backwards compatibility on this module, therefore we recommend you to check our GitHub website if you are facing some unexpected errors after an update.
Links to other related notebook
There is a series of notebooks on specialized subclasses and application of the simulator Processor, including finding pulses realizing certain quantum gate based on optimization algorithm or physical model and simulating simple quantum algorithms:
The notebook QuTiP example: Physical implementation of Spin Chain Qubit model shows the simulation of a spin-chain based quantum computing model both with qutip.qip.QubitCircuit and qutip.qip.device.Processor.
The notebook Examples for OptPulseProcessor describes the class OptPulseProcessor, which uses the optimal control module in QuTiP to find the control pulses for quantum gates.
The notebook Running the Deutsch–Jozsa algorithm on the noisy device simulator
gives an example of simulating simple quantum algorithms in the presence of noise.
The pulse level control
End of explanation
"""
processor = Processor(N=1)
processor.add_control(0.5 * sigmaz(), targets=0, label="sigmaz")
processor.add_control(0.5 * sigmay(), targets=0, label="sigmay")
"""
Explanation: Controlling a single qubit
The simulation of a unitary evolution with Processor is defiend by the control pulses. Each pulse is represented by a Pulse object consisting of the control Hamiltonian $H_j$, the target qubits, the pulse strength $c_j$ and the time sequence $t$. The evolution is given by
\begin{equation}
U(t)=\exp(-\mathrm{i} \sum_j c_j(t) H_j t)
\end{equation}
In this example, we define a single-qubit quantum device with $\sigma_z$ and $\sigma_y$ pulses.
End of explanation
"""
for pulse in processor.pulses:
pulse.print_info()
"""
Explanation: The list of defined pulses are saved in an attribute Processor.pulses. We can see the pulse that we just defined by
End of explanation
"""
processor.pulses[1].coeff = np.array([1.])
processor.pulses[1].tlist = np.array([0., pi])
for pulse in processor.pulses:
pulse.print_info()
"""
Explanation: We can see that the pulse strength coeff and the time sequence tlist are still undefined. To fully characterize the evolution, we need to define both of them.
The pulse strength and time are both given as NumPy arrays. For discrete pulses, tlist specifies the start and the end time of each pulse coefficient, and is thus one element longer than coeff. (This is different from the usual requirement in QuTiP solvers, where tlist and coeff need to have the same length.) The definition below means that we turn on the $\sigma_y$ pulse for $t=\pi$ with strength 1. (Notice that the Hamiltonian is $H=\frac{1}{2} \sigma_y$.)
End of explanation
"""
basis0 = basis(2, 0)
result = processor.run_state(init_state=basis0)
result.states[-1].tidyup(1.e-5)
"""
Explanation: This pulse is a $\pi$ pulse that flips the qubit from $\left |0 \right\rangle$ to $\left |1 \right\rangle$, equivalent to a rotation around the y-axis by an angle $\pi$:
$$R_y(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$$
We can run the simulation to see the result of the evolution starting from $\left |0 \right\rangle$:
End of explanation
"""
processor.pulses[0].coeff = np.array([1., 0., 1.])
processor.pulses[1].coeff = np.array([0., 1., 0.])
processor.pulses[0].tlist = np.array([0., pi/2., 2*pi/2, 3*pi/2])
processor.pulses[1].tlist = np.array([0., pi/2., 2*pi/2, 3*pi/2])
result = processor.run_state(init_state=basis(2, 1))
result.states[-1].tidyup(1.0e-5)
"""
Explanation: As an arbitrary single-qubit gate can be decomposed into $R_z(\theta_1) \cdot R_y(\theta_2) \cdot R_z(\theta_3)$, it is enough to use three pulses. For demonstration purposes, we choose $\theta_1=\theta_2=\theta_3=\pi/2$.
End of explanation
"""
tlist = np.linspace(0., 2*np.pi, 20)
processor = Processor(N=1, spline_kind="step_func")
processor.add_control(sigmaz(), 0)
processor.pulses[0].tlist = tlist
processor.pulses[0].coeff = np.array([np.sin(t) for t in tlist])
processor.plot_pulses();
tlist = np.linspace(0., 2*np.pi, 20)
processor = Processor(N=1, spline_kind="cubic")
processor.add_control(sigmaz())
processor.pulses[0].tlist = tlist
processor.pulses[0].coeff = np.array([np.sin(t) for t in tlist])
processor.plot_pulses();
"""
Explanation: Pulse with continuous amplitude
If your pulse strength is generated elsewhere as a discretization of a continuous function, you can also tell the Processor to use it with cubic spline interpolation. In this case tlist and coeff must have the same length.
End of explanation
"""
a = destroy(2)
initial_state = basis(2,1)
plus_state = (basis(2,1) + basis(2,0)).unit()
tlist = np.arange(0.00, 2.02, 0.02)
H_d = 10.*sigmaz()
"""
Explanation: Noisy evolution
In real quantum devices, noise affects the perfect execution of gate-based quantum circuits, limiting their depth. In general, we can divide quantum noise into two types: coherent and incoherent noise. The former usually arises from deviations in the control pulses; the noisy evolution is still unitary. Incoherent noise comes from the coupling of the quantum system with the environment and leads to the loss of information. In QIP theory, we describe this type of noise with a noisy channel, corresponding to the collapse operators in the Lindblad equation.
Although noise can, in general, be simulated with a quantum channel representation, this requires some pre-analysis and approximation, which can be difficult in a large system. This simulator offers an easier, but computationally more demanding, solution from the viewpoint of quantum control. Processor, as a circuit simulator, is different from the common QIP simulators, as it simulates the evolution of the qubits under the driving Hamiltonians. The noise is defined according to the control pulses and the evolution is calculated using QuTiP solvers. This enables one to define more complicated noise such as cross-talk and leakage error, depending on the physical device and the problem one wants to study. On the one hand, the simulation can help one analyze the noise composition and identify the dominant noise source. On the other hand, together with a backend compiler, one can also use it to study whether an algorithm is sensitive to a certain type of noise.
Decoherence
In Processor, decoherence noise is simulated by adding collapse operators to the Lindblad equation. For single-qubit decoherence, this is equivalent to applying random bit-flip and phase-flip errors after applying the quantum gate. For qubit relaxation, one can simply specify the $t_1$ and $t_2$ times for the device or for each qubit. Here we assume the qubit system has a drift Hamiltonian $H_d=\hbar \omega \sigma_z$; for simplicity, we let $\hbar \omega = 10$.
End of explanation
"""
from qutip.qip.pulse import Pulse
t1 = 1.
processor = Processor(1, t1=t1)
# create a dummy pulse that has no Hamiltonian, but only a tlist.
processor.add_pulse(Pulse(None, None, tlist=tlist, coeff=False))
result = processor.run_state(init_state=initial_state, e_ops=[a.dag()*a])
fig, ax = plt.subplots()
ax.plot(tlist[0: 100: 10], result.expect[0][0: 100: 10], 'o', label="simulation")
ax.plot(tlist, np.exp(-1./t1*tlist), label="theory")
ax.set_xlabel("t")
ax.set_ylabel("population in the excited state")
ax.legend()
plt.grid()
"""
Explanation: Decay time $T_1$
The $T_1$ relaxation time characterizes the strength of amplitude damping, which, in a two-level system, is described by a collapse operator $\frac{1}{\sqrt{T_1}}a$, where $a$ is the annihilation operator. This leads to an exponential decay of the excited-state population proportional to $\exp({-t/T_1})$. This amplitude damping can be simulated by specifying the attribute t1 of the processor
End of explanation
"""
t1 = 1.
t2 = 0.5
processor = Processor(1, t1=t1, t2=t2)
processor.add_control(H_d, 0)
processor.pulses[0].coeff = True
processor.pulses[0].tlist = tlist
Hadamard = hadamard_transform(1)
result = processor.run_state(init_state=plus_state, e_ops=[Hadamard*a.dag()*a*Hadamard])
fig, ax = plt.subplots()
# detail about length of tlist needs to be fixed
ax.plot(tlist[:-1], result.expect[0][:-1], '.', label="simulation")
ax.plot(tlist[:-1], np.exp(-1./t2*tlist[:-1])*0.5 + 0.5, label="theory")
plt.xlabel("t")
plt.ylabel("Ramsey signal")
plt.legend()
ax.grid()
"""
Explanation: Decay time $T_2$
The $T_2$ time describes the dephasing process. Here one has to be careful: the amplitude damping channel characterized by $T_1$ also leads to a dephasing proportional to $\exp(-t/2T_1)$. To make sure that the overall phase damping is $\exp(-t/T_2)$, the processor internally uses a collapse operator $\frac{1}{\sqrt{2T'_2}} \sigma_z$ with $\frac{1}{T'_2}+\frac{1}{2T_1}=\frac{1}{T_2}$ to simulate the dephasing. (This also implies that $T_2 \leqslant 2T_1$.)
Usually, the $T_2$ time is measured by a Ramsey experiment, where the qubit starts in the excited state, undergoes a $\pi/2$ pulse, evolves freely for a time $t$, and is measured after another $\pi/2$ pulse. For simplicity, here we directly calculate the expectation value of $\rm{H}\circ a^\dagger a \circ\rm{H}$, where $\rm{H}$ denotes the Hadamard transformation. This is equivalent to measuring the population of $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$. The envelope should follow an exponential decay characterized by $T_2$.
End of explanation
"""
from qutip.qip.noise import RandomNoise
processor = Processor(N=1)
processor.add_control(0.5 * sigmaz(), targets=0, label="sigmaz")
processor.add_control(0.5 * sigmay(), targets=0, label="sigmay")
processor.coeffs = np.array([[1., 0., 1.],
[0., 1., 0.]])
processor.set_all_tlist(np.array([0., pi/2., 2*pi/2, 3*pi/2]))
processor_white = copy.deepcopy(processor)
processor_white.add_noise(RandomNoise(rand_gen=np.random.normal, dt=0.1, loc=-0.05, scale=0.02)) # Gaussian white noise
"""
Explanation: Random noise in the pulse intensity
Besides single-qubit decoherence, Processor can also simulate coherent control noise. For general types of noise, one can define a noise object and add it to the processor. An example of predefined noise is the random amplitude noise, where a random value is added to the pulse every dt. loc and scale are keyword arguments for the random number generator np.random.normal.
End of explanation
"""
result = processor.run_state(init_state=basis(2, 1))
result.states[-1].tidyup(1.0e-5)
result_white = processor_white.run_state(init_state=basis(2, 1))
result_white.states[-1].tidyup(1.0e-4)
fidelity(result.states[-1], result_white.states[-1])
"""
Explanation: We again compare the result of the evolution with and without noise.
End of explanation
"""
from qutip.bloch import Bloch
b = Bloch()
b.add_states([result.states[-1], result_white.states[-1]])
b.make_sphere()
"""
Explanation: Since the result of this noise is still a pure state, we can visualize it on a Bloch sphere
End of explanation
"""
for pulse in processor_white.pulses:
pulse.print_info()
"""
Explanation: We can print the pulse information to see the noise.
The ideal pulses:
End of explanation
"""
for pulse in processor_white.get_noisy_pulses():
pulse.print_info()
"""
Explanation: And the noisy pulses:
End of explanation
"""
ideal_pulses = processor_white.pulses
noisy_pulses = processor_white.get_noisy_pulses(device_noise=True, drift=True)
qobjevo = processor_white.get_qobjevo(noisy=False)
noisy_qobjevo, c_ops = processor_white.get_qobjevo(noisy=True)
"""
Explanation: Getting a Pulse or QobjEvo representation
If you define a complicated Processor but don't want to run the simulation right away, you can extract an ideal/noisy Pulse representation or a QobjEvo representation. The latter can be fed directly to a QuTiP solver for the evolution (a short sketch is given below).
End of explanation
"""
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Structure inside the simulator
The figures below help one understand the workflow inside the simulator. The first figure shows how the noise is processed in the circuit processor. The noise is defined separately in a class object. When called, it takes the parameters and the noiseless unitary qutip.QobjEvo from the processor, generates the noisy version and sends the noisy qutip.QobjEvo together with the collapse operators back to the processor.
When calculating the evolution, the processor first creates its own qutip.QobjEvo of the noiseless evolution. It then finds all the noise objects saved in the attribute qutip.qip.device.Processor.noise and calls the corresponding methods to get the qutip.QobjEvo and a list of collapse operators representing the noise. (For collapse operators, we don't want to merge all the constant collapse operators into one time-independent operator, so we use a list.) The processor then combines its own qutip.QobjEvo with those from the noise objects and gives them to the solver. The figure below shows how the noiseless part and the noisy part are combined.
End of explanation
"""
|
ozorich/phys202-2015-work | assignments/assignment05/InteractEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 2
Imports
End of explanation
"""
def plot_sine1(a,b):
x=np.linspace(0,4*np.pi,100)
y=np.sin(a*x+b)
plt.figure(figsize=(9,6))
plt.plot(x,y)
plt.title('sin(ax+b)')
plt.xlim(right=4*np.pi)
plt.xticks(np.arange(0,5*np.pi,np.pi),['0','$\pi$','$2\pi$','$3\pi$','$4\pi$'])
ax=plt.gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plot_sine1(5, 3.4)
"""
Explanation: Plotting with parameters
Write a plot_sine1(a, b) function that plots $\sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
"""
interact(plot_sine1,a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1))
assert True # leave this for grading the plot_sine1 exercise
"""
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
"""
def plot_sine2(a,b,style='b'):
x=np.linspace(0,4*np.pi,100)
y=np.sin(a*x+b)
plt.figure(figsize=(9,6))
plt.plot(x,y,style)
plt.title('sin(ax+b)')
plt.xlim(right=4*np.pi)
plt.xticks(np.arange(0,5*np.pi,np.pi),['0','$\pi$','$2\pi$','$3\pi$','$4\pi$'])
ax=plt.gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plot_sine2(4.0, -1.0, 'r--')
"""
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
"""
interact(plot_sine2,a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1), style={'blue':'b--','black':'ko','red':'r^'})
assert True # leave this for grading the plot_sine2 exercise
"""
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
End of explanation
"""
|
mangeshjoshi819/ml-learn-python3 | BasicString _and_Csv.ipynb | mit | print(3+"mangesh")
print(str(3)+"mangesh")
record={"name":"mangeesh","price":34,"country":"Brazil"}
"""
Explanation: Basic Python Strings
Python 3 represents strings as Unicode by default.
Python is dynamically typed, but it does not implicitly convert numbers to strings: print(3+"mangesh") will not work; you need to convert 3 with str(3).
End of explanation
"""
print_statement="{} is my name,{} is the price,{} is the country"
print_statement.format(record["name"],record["price"],record["country"])
"""
Explanation: Using format
End of explanation
"""
import csv
#precision 2
with open("mpg.csv") as csvfile:
mpg=list(csv.DictReader(csvfile))
print(len(mpg))
print(mpg[0].keys())
print(mpg[0])
sum([float(k["displacement"]) for k in mpg] )/len(mpg)
"""
Explanation: CSV_Reading
End of explanation
"""
cylinders=set([k["cylinders"] for k in mpg])
avgMpgPerCylinderDict={}
for c in cylinders:
value=0
number=0
for k in mpg:
if k["cylinders"]==c:
value+=float(k["mpg"])
number+=1
    avgMpgPerCylinderDict[c]=value/number
avgMpgPerCylinderDict
modelyear=set([d["model_year"] for d in mpg])
modelyear
avgMPGperModelYear=[]
for y in modelyear:
value=0
number=0
for k in mpg:
if k["model_year"]==y:
value+=float(k["mpg"])
number+=1
avgMPGperModelYear.append((y,value/number))
avgMPGperModelYear.sort(key=lambda x:x[1])
avgMPGperModelYear
"""
Explanation: Average mpg grouped by cylinders and by model year (a pandas-based alternative is sketched below)
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_read_evoked.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
from mne import read_evokeds
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
# Reading
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0),
proj=True)
"""
Explanation: Reading and writing an evoked file
This script shows how to read and write evoked datasets.
End of explanation
"""
evoked.plot(exclude=[])
# Show result as a 2D image (x: time, y: channels, color: amplitude)
evoked.plot_image(exclude=[])
"""
Explanation: Show result as a butterfly plot:
By using exclude=[] bad channels are not excluded and are shown in red
End of explanation
"""
|
googleinterns/multimodal-long-transformer-2021 | preprocessing/create_fashion_gen_metadata.ipynb | apache-2.0 | import pandas as pd
import tensorflow as tf
# i2t: image-to-text.
i2t_path = '/bigstore/mmt/raw_data/fashion_gen/fashion_gen_i2t_test_pairs.csv'
# t2i: text-to-image.
t2i_path = '/bigstore/mmt/raw_data/fashion_gen/fashion_gen_t2i_test_pairs.csv'
t2i_output_path = '/bigstore/mmt/fashion_gen/metadata/fashion_bert_t2i_test.csv'
i2t_output_path = '/bigstore/mmt/fashion_gen/metadata/fashion_bert_i2t_test.csv'
dtype = {
'image_prod_id': str,
'prod_img_id': str,
'text_prod_id': str,
}
with tf.io.gfile.GFile(i2t_path, 'r') as f:
i2t_df = pd.read_csv(f, dtype=dtype)
with tf.io.gfile.GFile(t2i_path, 'r') as f:
t2i_df = pd.read_csv(f, dtype=dtype)
i2t_df
t2i_df
def add_columns(df):
"""Adds new columns to the dataframe.
New columns: image_id, image_index, text_index, and gt.
  A product can have multiple images (files), taken from different angles of
  view of the product. Thus, `image_prod_id` is the main product id and
  `prod_img_id` is the id for the different angles. One product has one text description.
image_id: image file name.
image_index: unique image index for the image file.
text_index: unique text index for the product description.
  gt: whether the row in the dataframe is a ground-truth pair.
Args:
df: a pd.DataFrame. Each row of the dataframe is a image-text pair.
Returns:
a pd.DataFrame.
"""
# image_id is the id for an image of a product. A product can have multiple
# image_id's (image files).
df['image_id'] = df['image_prod_id'] + '_' + df['prod_img_id']
# Gives each text_prod_id a unique index.
df['text_index'] = df.assign(id=df['text_prod_id'])['id'].astype(
'category').cat.codes
# Gives each image_id (an image file) a unique index.
df['image_index'] = df.assign(id=df['image_id'])['id'].astype(
'category').cat.codes
  # If image and text have the same product id, they are a ground-truth pair.
df['gt'] = (df['image_prod_id'] == df['text_prod_id']).astype(int)
return df
"""
Explanation: Preprocess: Create FashionGen metadata for Mmt.
Defines paths and loads raw csv metadata files from Fashion-BERT and kaleido-BERT.
End of explanation
"""
i2t_df = add_columns(i2t_df)
# Gets all ground-truth pairs in gt_df.
gt_df = i2t_df[i2t_df['gt'] == 1][['text_index', 'image_index']].rename(
columns={'image_index': 'gt_image_index'})
gt_df
# We give each text_index their ground-truth image_index if exists.
# Since FashionGen does not share the same retrieval pool (text pool for i2t),
# some text_index will not have corresponding gt_image_index.
# Thus, we fill -1 for those non-existent gt_image_index.
i2t_df = i2t_df.merge(gt_df, how='left', on='text_index').fillna(-1)
# Converts gt_image_index column from float to int.
i2t_df['gt_image_index'] = i2t_df['gt_image_index'].astype(int)
with tf.io.gfile.GFile(i2t_output_path, 'w') as f:
i2t_df.to_csv(f, index=False)
i2t_df
"""
Explanation: i2t: add columns
End of explanation
"""
t2i_df = add_columns(t2i_df)
gt_df = t2i_df[t2i_df['gt'] == 1][['text_index', 'image_index']].rename(
columns={'image_index': 'gt_image_index'})
gt_df
# We give each text_index its ground-truth image_index.
# We don't have non-existent gt_image_index values because this is text-to-image.
t2i_df = t2i_df.merge(gt_df, how='left', on='text_index')
with tf.io.gfile.GFile(t2i_output_path, 'w') as f:
t2i_df.to_csv(f, index=False)
t2i_df
# 989 texts have 101 images; 11 texts have 100 images.
print('101 images: ', (t2i_df['text_index'].value_counts() == 101).sum())
print('100 images: ', (t2i_df['text_index'].value_counts() == 100).sum())
# ground-truth pairs
print('# ground-truth: ', t2i_df['gt'].sum())
# 989 images have 101 text; 11 images have 100 texts.
print('101 images: ', (i2t_df['image_index'].value_counts() == 101).sum())
print('100 images: ', (i2t_df['image_index'].value_counts() == 100).sum())
# ground-truth pairs
print('# ground-truth: ', i2t_df['gt'].sum())
"""
Explanation: t2i: add columns
End of explanation
"""
|
poppy-project/community-notebooks | debug/poppy-torso_poppy-humanoid_poppy-ergo__motor_scan.ipynb | lgpl-3.0 | import pypot.dynamixel
ports = pypot.dynamixel.get_available_ports()
if not ports:
raise IOError('no port found!')
print 'ports found', ports
"""
Explanation: Motors scan
Scan all ports to find the connected Dynamixel motors
End of explanation
"""
using_XL320 = False
my_baudrate = 1000000
"""
Explanation: The protocol is not the same for XL320 servomotors; set the using_XL320 flag to True if you use them.
Set my_baudrate to the baudrate you are using (1000000 for motors already configured, 57600 for new ones).
End of explanation
"""
for port in ports:
print port
try:
if using_XL320:
dxl_io = pypot.dynamixel.Dxl320IO(port, baudrate=my_baudrate)
else:
dxl_io = pypot.dynamixel.DxlIO(port, baudrate=my_baudrate)
print "scanning"
found = dxl_io.scan(range(60))
print found
dxl_io.close()
except Exception, e:
print e
"""
Explanation: If the code below gives you an exception, try to restart all other notebooks that may be running, wait 5 seconds and try again.
End of explanation
"""
import os
for port in ports:
os.system('fuser -k '+port);
"""
Explanation: Kill whoever uses the ports (should be used only as last chance try to free the ports).
End of explanation
"""
|