Unnamed: 0 | text_prompt | code_prompt
---|---|---|
2,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
US Births dataset - Guided Project
Guided project from dataquest.io Data Scientist path.
Data provided by fivethirtyeight.
Import data
Step1: Cleanup
Step2: Next steps | Python Code:
from pathlib import Path
my_file = Path('US_births_1994-2003_CDC_NCHS.csv')
if my_file.is_file():
print('File exists.')
data = open('US_births_1994-2003_CDC_NCHS.csv', 'r').read()
data_lst = data.split('\n')
else:
print("File doesn't exist, will be downloaded.")
import urllib.request
url = 'https://raw.githubusercontent.com/fivethirtyeight/' + \
'data/master/births/US_births_1994-2003_CDC_NCHS.csv'
response = urllib.request.urlopen(url)
data = response.read().decode('utf-8')
with open('US_births_1994-2003_CDC_NCHS.csv', 'w') as file:
file.write(data)
    data_lst = data.split('\n')
print(data_lst[:10])
def read_csv(filename):
data = open(filename, 'r').read()
string_list = data.split('\n')
final_list = []
for item in string_list[1:]:
int_fields = []
string_fields = item.split(',')
int_fields = list(map(lambda x: int(x), string_fields))
final_list.append(int_fields)
return final_list
cdc_list = read_csv('US_births_1994-2003_CDC_NCHS.csv')
print(cdc_list[:10])
Explanation: US Births dataset - Guided Project
Guided project from dataquest.io Data Scientist path.
Data provided by fivethirtyeight.
Import data
End of explanation
def month_births(lst):
births_per_month = {}
for item in lst:
month = item[0]
births = item[-1]
if month in births_per_month:
births_per_month[month] += births
else:
births_per_month[month] = births
return births_per_month
cdc_month_births = month_births(cdc_list)
print(cdc_month_births)
def dow_births(lst):
births_per_day = {}
for item in lst:
day_of_week = item[-2]
births = item[-1]
if day_of_week in births_per_day:
births_per_day[day_of_week] += births
else:
births_per_day[day_of_week] = births
return births_per_day
cdc_day_births = dow_births(cdc_list)
print(cdc_day_births)
def calc_counts(data, column):
feature_count = {}
for item in data:
feature = item[column]
births = item[-1]
if feature in feature_count:
feature_count[feature] += births
else:
feature_count[feature] = births
return feature_count
cdc_year_births = calc_counts(cdc_list, 0)
cdc_month_births = calc_counts(cdc_list, 1)
cdc_dom_births = calc_counts(cdc_list, 2)
cdc_dow_births = calc_counts(cdc_list, 3)
print('### Year ###')
print(cdc_year_births)
print('---')
print('### Month ###')
print(cdc_month_births)
print('---')
print('### Day of month ###')
print(cdc_dom_births)
print('---')
print('### Day of week ###')
print(cdc_dow_births)
Explanation: Cleanup
End of explanation
# the lazy approach
def min_dict(summary):
return min(summary.values())
def max_dict(summary):
return max(summary.values())
print('Minimum births per year: %i' % min_dict(cdc_year_births))
print('Maximum births per year: %i' % max_dict(cdc_year_births))
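# A small optional extension (not part of the original guided project): the "lazy"
# helpers above only return the extreme values. If we also want to know *which* key
# (e.g. which year) they belong to, we can use the dictionary keys directly:
def min_max_dict(summary):
    min_key = min(summary, key=summary.get)
    max_key = max(summary, key=summary.get)
    return (min_key, summary[min_key]), (max_key, summary[max_key])
print(min_max_dict(cdc_year_births))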
import matplotlib.pyplot as plt
from matplotlib import style
%matplotlib inline
style.use('ggplot')
cdc_year_births_sorted = sorted(cdc_year_births.items())
x, y = zip(*cdc_year_births_sorted)
ann_x = []
for i in range(len(x)):
    temp = (y[i] - y[i-1]) / y[i-1] * 100  # year-over-year change in percent
    ann_x.append(round(temp, 2))
plt.plot(x, y, '-o')
plt.xlim([1993, 2004]) # , 0, 20
plt.ylim([3850000, 4170000])
plt.xlabel('Years')
plt.ylabel('Births')
plt.annotate('-',xy=(x[0] - 0.05, y[0]+10000))
for i in range(len(x))[1:]:
if ann_x[i] > 0:
col = '#17aa0c'
else:
col = '#ff0000'
plt.annotate(str(ann_x[i]) + '%', color=col,
xy=(x[i], y[i]+10000),
xytext=(x[i]-0.3, y[i]+20000),
#arrowprops=dict(facecolor='black')
)
#plt.show()
Explanation: Next steps
End of explanation |
2,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: To specify the experiments, define
Step2: Other parameters of the experiment
Step3: Load all data
Step4: Initialise the experiment
Step5: Select the training and testing data according to the selected fold. We split all images in 10 approximately equal parts and each fold includes these images together with all classes present in them.
Step6: Batch 1
Step7: Batch 2
Starting from Batch 3 the code will be just repeated.
Step8: Now is the time to retrain the detector and obtain new box_proposal_features. This is not done in this notebook. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from __future__ import division
from __future__ import print_function
import math
import gym
import pandas as pd
from gym import spaces
from sklearn import neural_network, model_selection
from sklearn.neural_network import MLPClassifier
from third_party import np_box_ops
import annotator, detector, dialog, environment
Explanation: Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Experiment 1: fixed detector in many scenarios
This notebook contains the code for computing the performance of the fixed strategies in various scenarios. The full experiment is described in Sec. 5.2 of CVPR submission "Learning Intelligent Dialogs for Bounding Box Annotation". Please note that this notebook does not reproduce the experiment since the starting detector is too strong, there is no re-training, and there are only two iterations being done.
End of explanation
# desired quality: high (min_iou=0.7) and low (min_iou=0.5)
min_iou = 0.7 # @param ["0.5", "0.7"]
# drawing speed: high (time_draw=7) and low (time_draw=25)
time_draw = 7 # @param ["7", "25"]
Explanation: To specify the experiments, define:
type of drawing
desired quality of bounding boxes
End of explanation
random_seed = 80590 # global variable that fixes the random seed everywhere for reproducibility of results
# what kind of features will be used to represent the state
# numerical values 1-20 correspond to one hot encoding of class
predictive_fields = ['prediction_score', 'relative_size', 'avg_score', 'dif_avg_score', 'dif_max_score', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
time_verify = 1.8 # @param
Explanation: Other parameters of the experiment
End of explanation
# Download GT:
# wget wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_gt_for_iad.h5
# Download detections with features
# wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_proposals_plus_features_for_iad.h5
download_dir = ''
ground_truth = pd.read_hdf(download_dir + 'pascal_gt_for_iad.h5', 'ground_truth')
box_proposal_features = pd.read_hdf(download_dir + 'pascal_proposals_plus_features_for_iad.h5', 'box_proposal_features')
Explanation: Load all data
End of explanation
annotator_real = annotator.AnnotatorSimple(ground_truth, random_seed, time_verify, time_draw, min_iou)
# better call it image_class_pairs later
image_class = ground_truth[['image_id', 'class_id']]
image_class = image_class.drop_duplicates()
Explanation: Initialise the experiment
End of explanation
unique_image = image_class['image_id'].drop_duplicates()
# divide the images into exponentially growing groups
im1 = unique_image.iloc[157]
im2 = unique_image.iloc[157+157]
im3 = unique_image.iloc[157+157+314]
im4 = unique_image.iloc[157+157+314+625]
im5 = unique_image.iloc[157+157+314+625+1253]
# image_class pairs groups are determined by the images in them
image_class_array = image_class.values[:,0]
in1 = np.searchsorted(image_class_array, im1, side='right')
in2 = np.searchsorted(image_class_array, im2, side='right')
in3 = np.searchsorted(image_class_array, im3, side='right')
in4 = np.searchsorted(image_class_array, im4, side='right')
in5 = np.searchsorted(image_class_array, im5, side='right')
Explanation: Select the training and testing data according to the selected fold. We split all images in 10 approximately equal parts and each fold includes these images together with all classes present in them.
End of explanation
the_detector = detector.Detector(box_proposal_features, predictive_fields)
image_class_current = image_class.iloc[0:in1]
%output_height 300
env = environment.AnnotatingDataset(annotator_real, the_detector, image_class_current)
print('Running ', len(env.image_class), 'episodes with strategy X')
total_reward = 0
new_ground_truth_all = []
all_annotations = dict()
for i in range(len(env.image_class)):
print('Episode ', i, end = ': ')
state = env.reset(current_index=i)
agent = dialog.FixedDialog(0)
done = False
while not(done):
action = agent.get_next_action(state)
if action==0:
print('V', end='')
elif action==1:
print('D', end='')
next_state, reward, done, coordinates = env.step(action)
state = next_state
total_reward += reward
dataset_id = env.current_image
# ground truth with which we will initialise the new user
new_ground_truth = {}
new_ground_truth['image_id'] = dataset_id
new_ground_truth['class_id'] = env.current_class
new_ground_truth['xmax'] = coordinates['xmax']
new_ground_truth['xmin'] = coordinates['xmin']
new_ground_truth['ymax'] = coordinates['ymax']
new_ground_truth['ymin'] = coordinates['ymin']
new_ground_truth_all.append(new_ground_truth)
if dataset_id not in all_annotations:
current_annotation = dict()
current_annotation['boxes'] = np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32)
current_annotation['box_labels'] = np.array([env.current_class])
all_annotations[dataset_id] = current_annotation
else:
all_annotations[dataset_id]['boxes'] = np.append(all_annotations[dataset_id]['boxes'], np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32), axis=0)
all_annotations[dataset_id]['box_labels'] = np.append(all_annotations[dataset_id]['box_labels'], np.array([env.current_class]))
print()
print('total_reward = ', total_reward)
print('average episode reward = ', total_reward/len(env.image_class))
new_ground_truth_all = pd.DataFrame(new_ground_truth_all)
Explanation: Batch 1: Annotate 3.125% of data with strategy X
End of explanation
ground_truth_new = pd.DataFrame(new_ground_truth_all)
annotator_new = annotator.AnnotatorSimple(ground_truth_new, random_seed, time_verify, time_draw, min_iou)
# @title Collect data for classifier
env = environment.AnnotatingDataset(annotator_new, the_detector, image_class_current)
print('Running ', len(env.image_class), 'episodes with strategy V3X')
%output_height 300
total_reward = 0
data_for_classifier = []
for i in range(len(env.image_class)):
print(i, end = ': ')
agent = dialog.FixedDialog(3)
state = env.reset(current_index=i)
done = False
while not(done):
action = agent.get_next_action(state)
next_state, reward, done, _ = env.step(action)
if action==0:
state_dict = dict(state)
state_dict['is_accepted'] = done
data_for_classifier.append(state_dict)
print('V', end='')
elif action==1:
print('D', end='')
state = next_state
total_reward += reward
print()
print('Average episode reward = ', total_reward/len(env.image_class))
data_for_classifier = pd.DataFrame(data_for_classifier)
# @title Train classification model (might take some time)
#model_mlp = neural_network.MLPClassifier(alpha = 0.0001, activation = 'relu', hidden_layer_sizes = (50, 50, 50, 50, 50), random_state=602)
#model_for_agent = model_mlp.fit(data_from_Vx3X[predictive_fields], data_from_Vx3X['is_accepted'])
np.random.seed(random_seed) # for reproducibility of fitting the classifier and cross-validation
print('Cross-validating parameters\' values... This might take some time.')
# possible parameter values
parameters = {'hidden_layer_sizes': ((20, 20, 20), (50, 50, 50), (80, 80, 80), (20, 20, 20, 20), (50, 50, 50, 50), (80, 80, 80, 80), (20, 20, 20, 20, 20), (50, 50, 50, 50, 50), (80, 80, 80, 80, 80)), 'activation': ('logistic', 'relu'), 'alpha': [0.0001, 0.001, 0.01]}
model_mlp = neural_network.MLPClassifier()
# cross-validate parameters
grid_search = model_selection.GridSearchCV(model_mlp, parameters, scoring='neg_log_loss', refit=True)
grid_search.fit(data_for_classifier[predictive_fields], data_for_classifier['is_accepted'])
print('best score = ', grid_search.best_score_)
print('best parameters = ', grid_search.best_params_)
# use the model with the best parameters
model_for_agent = grid_search.best_estimator_
Explanation: Batch 2
Starting from Batch 3 the code will be just repeated.
End of explanation
image_class_current = image_class.iloc[in1:in2]
the_detector = detector.Detector(box_proposal_features, predictive_fields)
agent = dialog.DialogProb(model_for_agent, annotator_real)
# @title Annotating data with intelligent dialog
env = environment.AnnotatingDataset(annotator_real, the_detector, image_class_current)
print('Running ', len(env.image_class), 'episodes with strategy IAD-Prob')
%output_height 300
print('intelligent dialog strategy')
total_reward = 0
# reset the ground truth because the user only needs to annotate the last 10% of data using the detector from the rest of the data
new_ground_truth_all = []
for i in range(len(env.image_class)):
print(i, end = ': ')
state = env.reset(current_index=i)
done = False
while not(done):
action = agent.get_next_action(state)
if action==0:
print('V', end='')
elif action==1:
print('D', end='')
next_state, reward, done, coordinates = env.step(action)
state = next_state
total_reward += reward
dataset_id = env.current_image
# ground truth with which we will initialise the new user
new_ground_truth = {}
new_ground_truth['image_id'] = dataset_id
new_ground_truth['class_id'] = env.current_class
new_ground_truth['xmax'] = coordinates['xmax']
new_ground_truth['xmin'] = coordinates['xmin']
new_ground_truth['ymax'] = coordinates['ymax']
new_ground_truth['ymin'] = coordinates['ymin']
new_ground_truth_all.append(new_ground_truth)
if dataset_id not in all_annotations:
current_annotation = dict()
current_annotation['boxes'] = np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32)
current_annotation['box_labels'] = np.array([env.current_class])
all_annotations[dataset_id] = current_annotation
else:
all_annotations[dataset_id]['boxes'] = np.append(all_annotations[dataset_id]['boxes'], np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32), axis=0)
all_annotations[dataset_id]['box_labels'] = np.append(all_annotations[dataset_id]['box_labels'], np.array([env.current_class]))
print()
print('total_reward = ', total_reward)
print('average episode reward = ', total_reward/len(env.image_class))
new_ground_truth_all = pd.DataFrame(new_ground_truth_all)
Explanation: Now is the time to retrain the detector and obtain new box_proposal_features. This is not done in this notebook.
End of explanation |
2,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Again, I'll load the NSFG pregnancy file and select live births
Step2: Here's the histogram of birth weights
Step3: To normalize the distribution, we could divide through by the total count
Step4: The result is a Probability Mass Function (PMF).
Step5: More directly, we can create a Pmf object.
Step6: Pmf provides Prob, which looks up a value and returns its probability
Step7: The bracket operator does the same thing.
Step8: The Incr method adds to the probability associated with a given value.
Step9: The Mult method multiplies the probability associated with a value.
Step10: Total returns the total probability (which is no longer 1, because we changed one of the probabilities).
Step11: Normalize divides through by the total probability, making it 1 again.
Step12: Here's the PMF of pregnancy length for live births.
Step13: Here's what it looks like plotted with Hist, which makes a bar graph.
Step14: Here's what it looks like plotted with Pmf, which makes a step function.
Step15: We can use MakeFrames to return DataFrames for all live births, first babies, and others.
Step16: Here are the distributions of pregnancy length.
Step17: And here's the code that replicates one of the figures in the chapter.
Step18: Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered "full term").
Step19: Biasing and unbiasing PMFs
Here's the example in the book showing operations we can perform with Pmf objects.
Suppose we have the following distribution of class sizes.
Step20: This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.
Step21: The following graph shows the difference between the actual and observed distributions.
Step22: The observed mean is substantially higher than the actual.
Step23: If we were only able to collect the biased sample, we could "unbias" it by applying the inverse operation.
Step24: We can unbias the biased PMF
Step25: And plot the two distributions to confirm they are the same.
Step26: Pandas indexing
Here's an example of a small DataFrame.
Step27: We can specify column names when we create the DataFrame
Step28: We can also specify an index that contains labels for the rows.
Step29: Normal indexing selects columns.
Step30: We can use the loc attribute to select rows.
Step31: If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute
Step32: loc can also take a list of labels.
Step33: If you provide a slice of labels, DataFrame uses it to select rows.
Step34: If you provide a slice of integers, DataFrame selects rows by integer index.
Step35: But notice that one method includes the last elements of the slice and one does not.
In general, I recommend giving labels to the rows and names to the columns, and using them consistently.
Exercises
Exercise
Step36: Exercise
Step37: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
Explanation: Again, I'll load the NSFG pregnancy file and select live births:
End of explanation
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')
Explanation: Here's the histogram of birth weights:
End of explanation
n = hist.Total()
pmf = hist.Copy()
for x, freq in hist.Items():
hist[x] = freq / n
Explanation: To normalize the disrtibution, we could divide through by the total count:
End of explanation
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PMF')
Explanation: The result is a Probability Mass Function (PMF).
End of explanation
pmf = thinkstats2.Pmf([1, 2, 2, 3, 5])
pmf
Explanation: More directly, we can create a Pmf object.
End of explanation
pmf.Prob(2)
Explanation: Pmf provides Prob, which looks up a value and returns its probability:
End of explanation
pmf[2]
Explanation: The bracket operator does the same thing.
End of explanation
pmf.Incr(2, 0.2)
pmf[2]
Explanation: The Incr method adds to the probability associated with a given value.
End of explanation
pmf.Mult(2, 0.5)
pmf[2]
Explanation: The Mult method multiplies the probability associated with a value.
End of explanation
pmf.Total()
Explanation: Total returns the total probability (which is no longer 1, because we changed one of the probabilities).
End of explanation
pmf.Normalize()
pmf.Total()
Explanation: Normalize divides through by the total probability, making it 1 again.
End of explanation
pmf = thinkstats2.Pmf(live.prglngth, label='prglngth')
Explanation: Here's the PMF of pregnancy length for live births.
End of explanation
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')
Explanation: Here's what it looks like plotted with Hist, which makes a bar graph.
End of explanation
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')
Explanation: Here's what it looks like plotted with Pmf, which makes a step function.
End of explanation
live, firsts, others = first.MakeFrames()
Explanation: We can use MakeFrames to return DataFrames for all live births, first babies, and others.
End of explanation
first_pmf = thinkstats2.Pmf(firsts.prglngth, label='firsts')
other_pmf = thinkstats2.Pmf(others.prglngth, label='others')
Explanation: Here are the distributions of pregnancy length.
End of explanation
width=0.45
axis = [27, 46, 0, 0.6]
thinkplot.PrePlot(2, cols=2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='PMF', axis=axis)
thinkplot.PrePlot(2)
thinkplot.SubPlot(2)
thinkplot.Pmfs([first_pmf, other_pmf])
thinkplot.Config(xlabel='Pregnancy length(weeks)', axis=axis)
Explanation: And here's the code that replicates one of the figures in the chapter.
End of explanation
weeks = range(35, 46)
diffs = []
for week in weeks:
p1 = first_pmf.Prob(week)
p2 = other_pmf.Prob(week)
diff = 100 * (p1 - p2)
diffs.append(diff)
thinkplot.Bar(weeks, diffs)
thinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='Difference (percentage points)')
Explanation: Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered "full term").
End of explanation
d = { 7: 8, 12: 8, 17: 14, 22: 4,
27: 6, 32: 12, 37: 8, 42: 3, 47: 2 }
pmf = thinkstats2.Pmf(d, label='actual')
Explanation: Biasing and unbiasing PMFs
Here's the example in the book showing operations we can perform with Pmf objects.
Suppose we have the following distribution of class sizes.
End of explanation
def BiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
Explanation: This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.
End of explanation
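For reference, a short derivation that is not in the original text: the biasing operation weights each class size by itself, so
$$p_{\mathrm{biased}}(x) = \frac{x\,p(x)}{\sum_{x'} x'\,p(x')},$$
and the mean of the biased distribution is $E[X^2]/E[X]$, which is always at least as large as the actual mean $E[X]$.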
biased_pmf = BiasPmf(pmf, label='observed')
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(xlabel='Class size', ylabel='PMF')
Explanation: The following graph shows the difference between the actual and observed distributions.
End of explanation
print('Actual mean', pmf.Mean())
print('Observed mean', biased_pmf.Mean())
Explanation: The observed mean is substantially higher than the actual.
End of explanation
def UnbiasPmf(pmf, label=None):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf[x] *= 1/x
new_pmf.Normalize()
return new_pmf
Explanation: If we were only able to collect the biased sample, we could "unbias" it by applying the inverse operation.
End of explanation
unbiased = UnbiasPmf(biased_pmf, label='unbiased')
print('Unbiased mean', unbiased.Mean())
Explanation: We can unbias the biased PMF:
End of explanation
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, unbiased])
thinkplot.Config(xlabel='Class size', ylabel='PMF')
Explanation: And plot the two distributions to confirm they are the same.
End of explanation
import numpy as np
import pandas
array = np.random.randn(4, 2)
df = pandas.DataFrame(array)
df
Explanation: Pandas indexing
Here's an example of a small DataFrame.
End of explanation
columns = ['A', 'B']
df = pandas.DataFrame(array, columns=columns)
df
Explanation: We can specify column names when we create the DataFrame:
End of explanation
index = ['a', 'b', 'c', 'd']
df = pandas.DataFrame(array, columns=columns, index=index)
df
Explanation: We can also specify an index that contains labels for the rows.
End of explanation
df['A']
Explanation: Normal indexing selects columns.
End of explanation
df.loc['a']
Explanation: We can use the loc attribute to select rows.
End of explanation
df.iloc[0]
Explanation: If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute:
End of explanation
indices = ['a', 'c']
df.loc[indices]
Explanation: loc can also take a list of labels.
End of explanation
df['a':'c']
Explanation: If you provide a slice of labels, DataFrame uses it to select rows.
End of explanation
df[0:2]
Explanation: If you provide a slice of integers, DataFrame selects rows by integer index.
End of explanation
resp = nsfg.ReadFemResp()
# Solution goes here
pmf = thinkstats2.Pmf(resp.numkdhh, label='actual')
# Solution goes here
print('actual')
thinkplot.pmf(pmf)
thinkplot.Config(xlabel='# of children', ylabel='PMF')
# Solution goes here
# biased numbers
biased = BiasPmf(pmf, label='Biased')
# Solution goes here
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased])
thinkplot.Config(xlabel='number of children', ylabel='pmf')
# Solution goes here
# mean of pmf
pmf.Mean()
# Solution goes here
# biased mean
biased.Mean()
Explanation: But notice that one method includes the last element of the slice and one does not: label-based slicing like df['a':'c'] includes the final label 'c', while integer slicing like df[0:2] excludes the row at position 2.
In general, I recommend giving labels to the rows and names to the columns, and using them consistently.
Exercises
Exercise: Something like the class size paradox appears if you survey children and ask how many children are in their family. Families with many children are more likely to appear in your sample, and families with no children have no chance to be in the sample.
Use the NSFG respondent variable numkdhh to construct the actual distribution for the number of children under 18 in the respondents' households.
Now compute the biased distribution we would see if we surveyed the children and asked them how many children under 18 (including themselves) are in their household.
Plot the actual and biased distributions, and compute their means.
End of explanation
live, firsts, others = first.MakeFrames()
preg_map = nsfg.MakePregMap(live)
# Solution goes here
hist = thinkstats2.Hist()
for i,j in preg_map.items():
# print('i=',i,'j=',j)
if len(j) >= 2:
pair = (preg.loc[j[0:2]].prglngth)
diff = np.diff(pair)[0]
hist[diff] += 1
# Solution goes here
thinkplot.Hist(hist)
# Solution goes here
Explanation: Exercise: I started this book with the question, "Are first babies more likely to be late?" To address it, I computed the difference in means between groups of babies, but I ignored the possibility that there might be a difference between first babies and others for the same woman.
To address this version of the question, select respondents who have at least two live births and compute pairwise differences. Does this formulation of the question yield a different result?
Hint: use nsfg.MakePregMap:
End of explanation
import relay
results = relay.ReadResults()
speeds = relay.GetSpeeds(results)
speeds = relay.BinData(speeds, 3, 12, 100)
pmf = thinkstats2.Pmf(speeds, 'actual speeds')
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Speed (mph)', ylabel='PMF')
# Solution goes here
def Observed_Pmf(pmf, speed, label=None):
mod = pmf.Copy(label=label)
for i in mod.Values():
# print('values=',i)
diff = abs(i -speed)
mod[i] *= diff
mod.Normalize()
return mod
newpmf = Observed_Pmf(pmf, 7, label='observed')
thinkplot.Pmf(newpmf)
thinkplot.Config(xlabel='speed',ylabel='pmf')
Explanation: Exercise: In most foot races, everyone starts at the same time. If you are a fast runner, you usually pass a lot of people at the beginning of the race, but after a few miles everyone around you is going at the same speed.
When I ran a long-distance (209 miles) relay race for the first time, I noticed an odd phenomenon: when I overtook another runner, I was usually much faster, and when another runner overtook me, he was usually much faster.
At first I thought that the distribution of speeds might be bimodal; that is, there were many slow runners and many fast runners, but few at my speed.
Then I realized that I was the victim of a bias similar to the effect of class size. The race was unusual in two ways: it used a staggered start, so teams started at different times; also, many teams included runners at different levels of ability.
As a result, runners were spread out along the course with little relationship between speed and location. When I joined the race, the runners near me were (pretty much) a random sample of the runners in the race.
So where does the bias come from? During my time on the course, the chance of overtaking a runner, or being overtaken, is proportional to the difference in our speeds. I am more likely to catch a slow runner, and more likely to be caught by a fast runner. But runners at the same speed are unlikely to see each other.
Write a function called ObservedPmf that takes a Pmf representing the actual distribution of runners’ speeds, and the speed of a running observer, and returns a new Pmf representing the distribution of runners’ speeds as seen by the observer.
To test your function, you can use relay.py, which reads the results from the James Joyce Ramble 10K in Dedham MA and converts the pace of each runner to mph.
Compute the distribution of speeds you would observe if you ran a relay race at 7 mph with this group of runners.
End of explanation |
2,103 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Sklearn Classification Metrics
| Python Code::
from sklearn.metrics import classification_report, log_loss, roc_auc_score
print('Classification Report:',classification_report(y_test, y_pred))
print('Log Loss:',log_loss(y_test, y_pred))
print('ROC AUC:',roc_auc_score(y_test, y_pred))
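# Note (an addition to the snippet above; y_test and y_pred are not defined here):
# roc_auc_score is usually computed from predicted scores or probabilities rather than
# hard class labels. Assuming a fitted binary classifier `clf` and test features `X_test`:
# y_score = clf.predict_proba(X_test)[:, 1]
# print('ROC AUC (from probabilities):', roc_auc_score(y_test, y_score))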
|
2,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This linear system
Step1: Eigenvalues and Eigenvectors
Step2: Multiplying the matrix b by the vector [y1 y2] equals any eigenvalue (e.g. $ -1 + 2i $) times [y1 y2].
$$ \left[\begin{array}{cc} 1 & -4\\ 2 & -3 \end{array}\right]\left[\begin{array}{c} y_{1}\\ y_{2} \end{array}\right]=(-1+2i)\left[\begin{array}{c} y_{1}\\ y_{2} \end{array}\right] $$
From this, we obtain the following redundant system of equations
Step3: System of equations for plotting the vector field | Python Code:
from sympy import symbols, Matrix, solve
import numpy as np
import matplotlib.pyplot as plt
b = symbols('b')
b = Matrix([[1, -4],
[2, -3]])
b
Explanation: This linear system
End of explanation
b.eigenvects()
Explanation: Eigenvalues and Eigenvectors
End of explanation
y1, y2 = symbols("y1 y2")
solve(((-1 + 2j) * y1) - (y1 - 4 * y2),
y1)
solve(((-1 + 2j) * y2) - (2 * y1 - 3 * y2),
y2)
Explanation: Multiplying the matrix b by the vector [y1 y2] equals any eigenvalue (e.g. $ -1 + 2i $) times [y1 y2].
$$ \left[\begin{array}{cc} 1 & -4\\ 2 & -3 \end{array}\right]\left[\begin{array}{c} y_{1}\\ y_{2} \end{array}\right]=(-1+2i)\left[\begin{array}{c} y_{1}\\ y_{2} \end{array}\right] $$
From this, we obtain the following redundant system of equations:
$$ (-1 + 2i)y_1 - (y_1 - 4y_2)=0 $$
and
$$ (-1 + 2i)y_2 - (2y_1 - 3y_2)=0 $$
Solving for $y_1$ in the first equation and for $y_2$ in the second:
End of explanation
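For reference, a worked step that is not spelled out in the original notebook: both equations reduce to the same relation, $y_1 = (1+i)\,y_2$, so an eigenvector for the eigenvalue $-1+2i$ is proportional to $\left[\begin{array}{c} 1+i\\ 1 \end{array}\right]$ (up to scaling, this matches what b.eigenvects() returns).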
x, y = symbols("x y")
b * Matrix([x, y])
x, y = np.meshgrid(np.linspace(-40, 40, 14),
np.linspace(-40, 40, 14))
u = x - 4 * y
v = 2 * x - 3 * y
plt.quiver(x, y, u, v)
plt.show()
Explanation: System of equations for plotting the vector field
End of explanation |
2,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Slip on a planar fault in a halfspace
This is the first and simplest example of Tectosaur. Here, we'll solve for the halfspace surface displacement caused by a Gaussian slip field on a planar fault. We'll first solve the problem using Okada dislocations. Then, we will solve it using Tectosaur and compare the two solutions.
To start out, let's import the necessary modules. We use standard scientific Python packages
Step1: Now, let's make a mesh! Here, the tct.make_rect function produces a triangulated rectangle mesh, with n points on both the x and y dimensions and with the corners specified. And, we plot the mesh in the x-y plane just to make sure everything worked right! The plotting code might help understanding the layout of the mesh structure. The surf object is a tuple. The first element is the array of mesh points with shape (n_points, n_dims = 3). The second element is the array of triangles with shape (n_triangles, n_corners = 3).
Step2: And we'll do the same thing in the x-z plane for the fault.
Step3: And let's specify the physical setup of the problem. First off, we use a shear modulus of 1.0 and Poisson ratio of 0.25. While 1.0 is an absurdly small shear modulus, there is nothing numerically different between using a shear modulus of 1e0 and 1e10 -- it simply multiplies any stresses or tractions.
Step4: We specify and plot a Gaussian slip pulse centered at (0.0, -1.0).
Step5: Let's solve for the surface displacement using Okada dislocations. I discretize the fault into a 30x30 rectangular grid, and then for each observation point in the surface mesh, I loop over the fault elements and calculate the elastic effect.
Step6: Now, that we've set up and solved our problem using Okada dislocations, we get to solve using Tectosaur! We'll build up the integral equation and then solve using GMRES, an iterative linear solver.
I'll do a broad strokes overview of the Symmetric Galerking Boundary Element Method (SGBEM) approach to solving this problem. For a more detailed introduction to the method, look at the book Symmetric Galerkin Boundary Element Method (Sutradhar, Paulino and Gray 2008). To start, let's start from Somigliana's identity for a domain with a crack
Step7: Creating the $M$ "mass" operator is much simpler. We simply provide a Gauss quadrature order and the mesh.
Step8: Let's check out the matrix. It's very sparse! That's because $\phi_i(x)\phi_j(x)$ is only nonzero when the two basis functions are defined on the same triangle.
Step9: Wait, why didn't we look at the matrix for T? That's because it's partially a matrix-free operator representation. The nearfield is stored in a sparse matrix form. But, the farfield portions of the matrix are never stored. Because we can use a very low order quadrature to generate the farfield elements, the memory bandwidth requirements of storing those matrix elements are greater than the computational cost of recomputing each element every time it's needed. This is particularly true because we make heavy of GPUs where doing enormous amounts of simple arithmetic is very computationally cheap. A nice side effect of this design choice is that Tectosaur doesn't use very much RAM!
Ok, let's put together the mass and T operators into a single operator. Because the T operator is matrix free, we have to use SumOp and MultOp so that the summation and multiplication is done whenever a matrix vector product is needed, rather than right now on the matrix elements.
Step10: Next, we need to set up the boundary condition on the fault. We already have the gauss_slip_fnc for fault slip! The question then is how to calculate the coefficients of the basis functions, $\hat{u}_j\phi_j(x)$. For linear basis functions, we can just evaluate gauss_slip_fnc at the corners of each triangle. The values at the corners are the degrees of freedom (DOFs).
We'll calculate the slip for each DOF in the mesh. And we create a list of boundary conditions constraints to impose on our linear system using tct.all_bc_constraints.
Step11: Because we are using a linear basis for the displacement on the free surface, there are several DOFs for most points in the mesh -- each triangle that touches a point has a DOF at that point. We'd like to impose continuity of displacement on the free surface, which requires equality between all the DOFs that share a point. But, we also need to ensure that there's a discontinuity across the fault. So, we also pass information about the fault mesh!
Step12: So, what do we do with all these constraints? We map from the original unconstrained DOFs to a new set of constrained DOFs. The tct.build_constraint_matrix function uniquely defines a new (smaller) set of DOFs and constructs a matrix to transform from one set of DOFs to the another. Suppose we have 2000 original unconstrained DOFs, and 700 independent constraints. There will be 1300 constrained DOFs. So, the constraint matrix cm will be shaped (2000, 1300). If any of those constraints have a non-zero constant offset (e.g. $x + y = 3$), then the c_rhs vector will contain those values. To transform from constrained DOFs to unconstrained DOFs
Step13: And building $C^TAr$
Step14: And things are getting exciting! Before we solve the linear system, we provide scipy.sparse.linalg with the info needed for our custom matrix vector product implementation.
Step15: And solve!
Step16: And the last step is to calculate the unconstrained solution from the constrained solution.
Step17: You did it! You solved for the surface displacement with Tectosaur!!!
Let's plot up the solution. I want to make some matplotlib.tricontour plots. To do that, I need values for each point rather than for each triangle vertex. So, we convert from the solution DOFs to point values using the triangle array as the mapping.
Step18: Also, to easily plot both the Tectosaur and Okada solutions, I extend the Okada solution with zeros so that it has the same length as the Tectosaur solution (the total number of points in the full fault + surface mesh, not just the number of points in the surface mesh).
Step19: A quick function to plot all three components of the displacement.
Step20: The match is very nice!
Step21: Let's plot the difference. | Python Code:
import logging
import numpy as np
import matplotlib.pyplot as plt
import scipy.sparse.linalg as spsla
import okada_wrapper
import tectosaur as tct
tct.logger.setLevel(logging.INFO)
Explanation: Slip on a planar fault in a halfspace
This is the first and simplest example of Tectosaur. Here, we'll solve for the halfspace surface displacement caused by a Gaussian slip field on a planar fault. We'll first solve the problem using Okada dislocations. Then, we will solve it using Tectosaur and compare the two solutions.
To start out, let's import the necessary modules. We use standard scientific Python packages: numpy, scipy, matplotlib. The okada_wrapper module is a simple Python wrapper around the original Okada Fortran code. Finally, import Tectosaur!
End of explanation
w = 5.0
n = 61
corners = [[-w, -w, 0], [-w, w, 0], [w, w, 0], [w, -w, 0]]
surf = tct.make_rect(n, n, corners)
print('pts shape', surf[0].shape)
print('tris shape', surf[1].shape)
plt.triplot(surf[0][:,0], surf[0][:,1], surf[1], linewidth = 0.5)
plt.gca().set_aspect('equal', adjustable = 'box')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Now, let's make a mesh! Here, the tct.make_rect function produces a triangulated rectangle mesh, with n points on both the x and y dimensions and with the corners specified. And, we plot the mesh in the x-y plane just to make sure everything worked right! The plotting code might help understanding the layout of the mesh structure. The surf object is a tuple. The first element is the array of mesh points with shape (n_points, n_dims = 3). The second element is the array of triangles with shape (n_triangles, n_corners = 3).
End of explanation
fault_L = 1.0
top_depth = -0.5
n_fault = 15
fault_corners = [
[-fault_L, 0, top_depth], [-fault_L, 0, top_depth - 1],
[fault_L, 0, top_depth - 1], [fault_L, 0, top_depth]
]
fault = tct.make_rect(n_fault, n_fault, fault_corners)
Explanation: And we'll do the same thing in the x-z plane for the fault.
End of explanation
sm = 1.0
pr = 0.25
Explanation: And let's specify the physical setup of the problem. First off, we use a shear modulus of 1.0 and Poisson ratio of 0.25. While 1.0 is an absurdly small shear modulus, there is nothing numerically different between using a shear modulus of 1e0 and 1e10 -- it simply multiplies any stresses or tractions.
End of explanation
def gauss_slip_fnc(x, z):
return np.exp(-(x ** 2 + (z + 1.0) ** 2) * 8.0)
pt_slip = gauss_slip_fnc(fault[0][:,0], fault[0][:,2])
plt.figure(figsize = (8, 3))
plt.tricontourf(fault[0][:,0], fault[0][:,2], fault[1], pt_slip)
plt.colorbar()
plt.gca().set_aspect('equal', adjustable = 'box')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: We specify and plot a Gaussian slip pulse centered at (0.0, -1.0).
End of explanation
lam = 2 * sm * pr / (1 - 2 * pr)
alpha = (lam + sm) / (lam + 2 * sm)
okada_u = np.zeros((surf[0].shape[0], 3))
NX = 30
NY = 30
X_vals = np.linspace(-fault_L, fault_L, NX + 1)
Y_vals = np.linspace(-1.0, 0.0, NX + 1)
for i in range(surf[0].shape[0]):
pt = surf[0][i, :]
for j in range(NX):
X1 = X_vals[j]
X2 = X_vals[j+1]
midX = (X1 + X2) / 2.0
for k in range(NY):
Y1 = Y_vals[k]
Y2 = Y_vals[k+1]
midY = (Y1 + Y2) / 2.0
slip = gauss_slip_fnc(midX, midY + top_depth)
[suc, uv, grad_uv] = okada_wrapper.dc3dwrapper(
alpha, pt, -top_depth, 90.0,
[X1, X2], [Y1, Y2], [slip, 0.0, 0.0]
)
if suc != 0:
okada_u[i, :] = 0
else:
okada_u[i, :] += uv
Explanation: Let's solve for the surface displacement using Okada dislocations. I discretize the fault into a 30x30 rectangular grid, and then for each observation point in the surface mesh, I loop over the fault elements and calculate the elastic effect.
End of explanation
full_mesh = tct.concat(surf, fault)
T = tct.RegularizedSparseIntegralOp(
8, # The coincident quadrature order
8, # The edge adjacent quadrature order
8, # The vertex adjacent quadrature order
2, # The farfield quadrature order
5, # The nearfield quadrature order
2.5, # The element length factor to separate near from farfield.
'elasticRT3', # The Green's function to integrate
'elasticRT3', #...
[sm, pr], # The material parameters (shear modulus, poisson ratio)
full_mesh[0], # The mesh points
full_mesh[1], # The mesh triangles
np.float32, # The float type to use. float32 is much faster on most GPUs
# Finally, do we use a direct (dense) farfield operator or do we use the Fast Multipole Method?
farfield_op_type = tct.TriToTriDirectFarfieldOp
#farfield_op_type = FMMFarfieldOp(mac = 4.5, pts_per_cell = 100)
)
Explanation: Now, that we've set up and solved our problem using Okada dislocations, we get to solve using Tectosaur! We'll build up the integral equation and then solve using GMRES, an iterative linear solver.
I'll do a broad strokes overview of the Symmetric Galerkin Boundary Element Method (SGBEM) approach to solving this problem. For a more detailed introduction to the method, look at the book Symmetric Galerkin Boundary Element Method (Sutradhar, Paulino and Gray 2008). To start, let's begin from Somigliana's identity for a domain with a crack:
$$u(x) + \int_{S} T^{*}(x,y) u(y) dS + \int_{F} T^{*}(x,y) s(y) dF = \int_{S} U^{*}(x,y) t(y) dS$$
where $S$ is the free surface, $F$ is the fault, $u$ is the displacement, $s$ is the fault slip, $t$ is the surface traction and $U^{*}$ and $T^{*}$ are the respective Green's functions.
We can simplify this equation given that we know $t = 0$ on $S$ and that $s$ on $F$ is given:
$$u(x) + \int_{S} T^{*}(x,y) u(y) dS + \int_{F} T^{*}(x,y) s(y) dF = 0$$
The form of the second two integrals is identical, so instead we can use a single integral:
$$u(x) + \int_{\hat{S}} T^{*}(x,y) \hat{u}(y) d\hat{S} = 0$$
where $\hat{S} = S \cup F$ and
$$\hat{u}(x) = \begin{cases}u(x) & x \in S\\ s(x) & x \in F\end{cases}$$
At the moment, we have an integral equation that gives us the displacement anywhere in the volume as a function of fault slip and surface displacement. We'd like to transform that to an integral equation that allows solving for the surface displacement from the fault slip. The first step is to enforce the integral equation on the surface multiplied by a test function:
$$\int_{S}\phi(x)\big[\frac{u(x)}{2} + \int_{\hat{S}} T^{*}(x,y) \hat{u}(y) d\hat{S}\big]dS = 0$$
(For reasons to do with boundary limits, we gain a factor of $\frac{1}{2}$.)
Then, discretizing all the fields with linear basis functions ($\phi_i(x)$) over a triangulated mesh and choosing the test functions to be equal to those basis functions results in the standard SGBEM. We get
$$\begin{align}M_{ij}u_{j} + T_{ik}\hat{u}_k &= 0\\ M_{ij} &= \int_{S}\phi_i(x)\phi_j(x)dS \\ T_{ik} &= \int_S\phi_i(x)\int_{\hat{S}}T^*(x,y)\phi_k(y) d\hat{S} \\ \hat{u}(x) &= \sum_{j}\phi_j(x)\hat{u}_j\end{align}$$
To put this into practice, we'll create these two operators using Tectosaur. First we'll create the $T$ operator.
End of explanation
mass = tct.MassOp(3, full_mesh[0], full_mesh[1])
Explanation: Creating the $M$ "mass" operator is much simpler. We simply provide a Gauss quadrature order and the mesh.
End of explanation
mass.mat
Explanation: Let's check out the matrix. It's very sparse! That's because $\phi_i(x)\phi_j(x)$ is only nonzero when the two basis functions are defined on the same triangle.
End of explanation
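As a point of reference (a standard finite element identity, not stated in the original tutorial), for linear basis functions on a single triangle of area $A$ the local contribution to $M$ is
$$\frac{A}{12}\begin{pmatrix}2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2\end{pmatrix},$$
so each row of the assembled matrix only couples the three corner functions of one triangle, which is exactly the sparsity pattern seen above.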
lhs = tct.SumOp([T, tct.MultOp(mass, 0.5)])
Explanation: Wait, why didn't we look at the matrix for T? That's because it's partially a matrix-free operator representation. The nearfield is stored in a sparse matrix form. But, the farfield portions of the matrix are never stored. Because we can use a very low order quadrature to generate the farfield elements, the memory bandwidth requirements of storing those matrix elements are greater than the computational cost of recomputing each element every time it's needed. This is particularly true because we make heavy of GPUs where doing enormous amounts of simple arithmetic is very computationally cheap. A nice side effect of this design choice is that Tectosaur doesn't use very much RAM!
Ok, let's put together the mass and T operators into a single operator. Because the T operator is matrix free, we have to use SumOp and MultOp so that the summation and multiplication is done whenever a matrix vector product is needed, rather than right now on the matrix elements.
End of explanation
n_surf_tris = surf[1].shape[0]
n_fault_tris = fault[1].shape[0]
fault_tris = full_mesh[1][n_surf_tris:]
dof_pts = full_mesh[0][fault_tris]
x = dof_pts[:,:,0]
z = dof_pts[:,:,2]
slip = np.zeros((fault_tris.shape[0], 3, 3))
slip[:,:,0] = gauss_slip_fnc(x, z)
bc_cs = tct.all_bc_constraints(
n_surf_tris, # The first triangle index to apply BCs to. The first fault triangle is at index `n_surf_tris`.
n_surf_tris + n_fault_tris, # The last triangle index to apply BCs to.
slip.flatten() # The BC vector should be n_tris * 9 elements long.
)
Explanation: Next, we need to set up the boundary condition on the fault. We already have the gauss_slip_fnc for fault slip! The question then is how to calculate the coefficients of the basis functions, $\hat{u}_j\phi_j(x)$. For linear basis functions, we can just evaluate gauss_slip_fnc at the corners of each triangle. The values at the corners are the degrees of freedom (DOFs).
We'll calculate the slip for each DOF in the mesh. And we create a list of boundary conditions constraints to impose on our linear system using tct.all_bc_constraints.
End of explanation
continuity_cs = tct.continuity_constraints(
full_mesh[0], # The mesh points.
full_mesh[1], # The mesh triangles
n_surf_tris # How many surface triangles are there? The triangles are expected to be arranged so that the surface triangles come first. The remaining triangles are assumed to be fault triangles.
)
Explanation: Because we are using a linear basis for the displacement on the free surface, there are several DOFs for most points in the mesh -- each triangle that touches a point has a DOF at that point. We'd like to impose continuity of displacement on the free surface, which requires equality between all the DOFs that share a point. But, we also need to ensure that there's a discontinuity across the fault. So, we also pass information about the fault mesh!
End of explanation
cs = bc_cs + continuity_cs
cm, c_rhs, _ = tct.build_constraint_matrix(cs, lhs.shape[1])
Explanation: So, what do we do with all these constraints? We map from the original unconstrained DOFs to a new set of constrained DOFs. The tct.build_constraint_matrix function uniquely defines a new (smaller) set of DOFs and constructs a matrix to transform from one set of DOFs to the another. Suppose we have 2000 original unconstrained DOFs, and 700 independent constraints. There will be 1300 constrained DOFs. So, the constraint matrix cm will be shaped (2000, 1300). If any of those constraints have a non-zero constant offset (e.g. $x + y = 3$), then the c_rhs vector will contain those values. To transform from constrained DOFs to unconstrained DOFs: cm.dot(x) + c_rhs. So, if we start out with the linear system:
$$Ax = b$$
then in terms of the constrained DOFs, we have
$$A(Cy + r) = b$$
where $C$ is the constraint matrix, cm, and $r$ is the vector of offsets, c_rhs. Next, we regain symmetry and square matrices by multiplying by $C^T$:
$$C^TA(Cy + r) = C^Tb$$
And rearranging, the final constrained linear system will be:
$$C^TACy = C^Tb - C^TAr$$
These next few lines construct this constrained linear system. First, $C^Tb = 0$ for this problem, so we ignore that term. What's left?
Building $C$:
End of explanation
rhs_constrained = cm.T.dot(-lhs.dot(c_rhs))
Explanation: And building $C^TAr$:
End of explanation
def mv(v, it = [0]):
it[0] += 1
print('iteration # ' + str(it[0]))
return cm.T.dot(lhs.dot(cm.dot(v)))
n = rhs_constrained.shape[0]
A = spsla.LinearOperator((n, n), matvec = mv)
Explanation: And things are getting exciting! Before we solve the linear system, we provide scipy.sparse.linalg with the info needed for our custom matrix vector product implementation.
End of explanation
gmres_out = spsla.gmres(
A, rhs_constrained, tol = 1e-6, restart = 200,
callback = lambda R: print('residual: ', str(R))
)
Explanation: And solve!
End of explanation
soln = cm.dot(gmres_out[0]) + c_rhs
Explanation: And the last step is to calculate the unconstrained solution from the constrained solution.
End of explanation
tct_u = np.zeros((full_mesh[0].shape[0], 3))
tct_u[full_mesh[1]] = soln.reshape((-1,3,3))
Explanation: You did it! You solved for the surface displacement with Tectosaur!!!
Let's plot up the solution. I want to make some matplotlib.tricontour plots. To do that, I need values for each point rather than for each triangle vertex. So, we convert from the solution DOFs to point values using the triangle array as the mapping.
End of explanation
okada_u = np.vstack((okada_u, np.zeros((full_mesh[0].shape[0] - surf[0].shape[0], 3))))
Explanation: Also, to easily plot both the Tectosaur and Okada solutions, I extend the Okada solution with zeros so that it has the same length as the Tectosaur solution (the total number of points in the full fault + surface mesh, not just the number of points in the surface mesh).
End of explanation
surf_pt_idxs = np.unique(full_mesh[1][:n_surf_tris])
def plot(pt_f, minf, maxf, field_name):
surf_pt_f = pt_f[surf_pt_idxs]
levels = np.linspace(minf, maxf, 11)
plt.figure(figsize = (15, 3.5))
for d in range(3):
plt.subplot(1, 3, d + 1)
cntf = plt.tricontourf(
full_mesh[0][:,0], full_mesh[0][:,1], full_mesh[1][:n_surf_tris],
pt_f[:,d], levels = levels, extend = 'both'
)
cbar = plt.colorbar(cntf)
cbar.set_label(field_name)
plt.gca().set_aspect('equal', adjustable = 'box')
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
plt.show()
Explanation: A quick function to plot all three components of the displacement.
End of explanation
plot(tct_u, -0.01, 0.01, 'surface displacement')
plot(okada_u, -0.01, 0.01, 'surface displacement')
Explanation: The match is very nice!
End of explanation
diff = 100 * np.abs(tct_u - okada_u) / np.abs(okada_u)
diff[np.isnan(diff)] = 0
diff[np.isinf(diff)] = 0
plot(diff, 'percent difference')
Explanation: Let's plot the difference.
End of explanation |
2,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Similarity Measures in Multimodal Retrieval
This tutorial assumes an Anaconda 3.x installation with Python 3.6.x. Missing libraries can be installed with conda or pip.
To start with prepared data, extract the attached file analysis.zip directly below the directory of this notebook.
Step1: Most likely, the ImageHash library will be missing in a typical setup. The following cell installs the library.
Step2: Feature Extraction
In the next step, we have to find all images that we want to use in our retrieval scenario and extract the needed histogram-based and hash-based features.
Step3: To check whether the file search has succeeded, display some found images to get a feeling of the data we are going to deal with.
Step4: Detecting Similar Documents
To get started, we will inspect the results of two fairly simple approaches, the difference and intersection of two histograms.
Histogram Difference
The histogram difference is computed between the two first images displayed above. The dashed line illustrates the actual difference between the two images' histograms.
Step5: Histogram Intersection
An alternative measure is the histogram intersection which computes the "amount of change" between the two histograms.
Step6: Comparison of Different Similarity Measures and Metrics in a QBE Scenario
To compare the effectiveness of different similarity computations, we will use a query by example (QBE) scenario in which we check a query image (#1392) against all other images in the corpus to find the most similar ones. These found images will be displayed in form of a list ordered by relevance.
Step7: The next cell computes different measures and metrics and saves them in a dataframe.
Step8: If we inspect the dataframe, we will see that each measure/metric yields different results which is not very surprising...
Step9: To facilitate the assessment of the effectiveness of the different measures and metrics, the next cell creates an HTML overview document with the first found documents.
Sample results are available here.
Step10: A Local Feature - ORB
The local ORB (Oriented FAST and Rotated BRIEF) feature takes interesting regions of an image into account - the so-called keypoints. In contrast to the presented approaches which only consider the whole image at a time and which are therefore called global features, local feature extractors search for keypoints and try to match them with the ones found in another image.
Hypothetically speaking, such features should be helpful to discover similar details in different images no matter how they differ in scale or rotation. Hence, ORB is considered relatively scale and rotation invariant.
In this section, we will investigate whether ORB can be used to find pages in the Orbis Pictus describing the concept of the "world" which are present in three editions of the book as displayed below.
Step11: To give an example, we will extract ORB features from the first two images and match them. The discovered matches will be illustrated below.
Step12: ATTENTION ! Depending on your computer setup, the next cell will take some time to finish. See the log below to get an estimation. The experiment has been run with a MacBook Pro (13-inch, 2018, 2,7 GHz Intel Core i7, 16 GB, and macOS Mojave).
In this naive approach, we will simply count the number of matches between the query image and each image in the corpus and use this value as a similarity score.
Step13: In a little more sophisticated approach, we will compute the average distance for each query-image pair for all matches. This value yields another similarity score.
Eventually, an HTML report file is created to compare the results of both approaches.
Sample results are available here.
Step14: Histogram-based Clustering
Sample results are available here. | Python Code:
%matplotlib inline
import os
import tarfile as TAR
import sys
from datetime import datetime
from PIL import Image
import warnings
import json
import pickle
import zipfile
from math import *
import numpy as np
import pandas as pd
from sklearn.cluster import MiniBatchKMeans
import matplotlib.pyplot as plt
import matplotlib
# enlarge plots
plt.rcParams['figure.figsize'] = [7, 5]
import imagehash
from sklearn.preprocessing import normalize
from scipy.spatial.distance import minkowski
from scipy.spatial.distance import hamming
from scipy.stats import wasserstein_distance
from scipy.stats import spearmanr
from skimage.feature import (match_descriptors, corner_harris,
corner_peaks, ORB, plot_matches, BRIEF, corner_peaks, corner_harris)
from skimage.color import rgb2gray
from skimage.io import imread,imshow
def printLog(text):
now = str(datetime.now())
print("[" + now + "]\t" + text)
# forces to output the result of the print command immediately, see: http://stackoverflow.com/questions/230751/how-to-flush-output-of-python-print
sys.stdout.flush()
def findJPEGfiles(path):
# a list for the JPEG files
jpgFilePaths=[]
for root, dirs, files in os.walk(path):
for file_ in files:
if file_.endswith(".jpg"):
# debug
# print(os.path.join(root, file_))
jpgFilePaths.append(os.path.join(root, file_))
return jpgFilePaths
outputDir= "./analysis/"
verbose=True
#general preparations, e.g., create missing output directories
if not os.path.exists(outputDir):
if verbose:
print("Creating " + outputDir)
os.mkdir(outputDir)
Explanation: Similarity Measures in Multimodal Retrieval
This tutorial assumes an Anaconda 3.x installation with Python 3.6.x. Missing libraries can be installed with conda or pip.
To start with prepared data, extract the attached file analysis.zip directly below the directory of this notebook.
End of explanation
!pip install ImageHash
Explanation: Most likely, the ImageHash library will be missing in a typical setup. The following cell installs the library.
End of explanation
#baseDir="/Users/david/src/__datasets/orbis_pictus/sbbget_downloads_comenius/download_temp/"
baseDir="/Users/david/src/__datasets/orbis_pictus/jpegs/download_temp/"
jpgFiles=findJPEGfiles(baseDir)
# extract all features
printLog("Extracting features of %i documents..."%len(jpgFiles))
histograms=[]
# "data science" utility structures
ppnList=[]
nameList=[]
combinedHistograms=[]
combinedNormalizedHistograms=[]
jpegFilePaths=[]
pHashes=[]
for jpg in jpgFiles:
tokens=jpg.split("/")
# load an image
image = Image.open(jpg)
# bug: images are not of the same size, has to be fixed to obtain normalized histograms!!!
# q'n'd fix - brute force resizing
image=image.resize((512,512),Image.LANCZOS)
histogram = image.histogram()
histogramDict=dict()
# save its unique ID and name
histogramDict['ppn']=tokens[-3]+"/"+tokens[-2]
histogramDict['extractName']=tokens[-1]
# save the histogram data in various forms
histogramDict['redHistogram'] = histogram[0:256]
histogramDict['blueHistogram'] = histogram[256:512]
histogramDict['greenHistogram'] = histogram[512:768]
hist=np.array(histogram)
normalizedRGB = (hist)/(max(hist))
# create a perceptual hash for the image
pHashes.append(imagehash.phash(image))
image.close()
# fill the DS data structures
ppnList.append(histogramDict['ppn'])
nameList.append(histogramDict['extractName'])
combinedHistograms.append(histogramDict['redHistogram']+histogramDict['blueHistogram']+histogramDict['greenHistogram'])
combinedNormalizedHistograms.append(normalizedRGB)
jpegFilePaths.append(jpg)
printLog("Done.")
Explanation: Feature Extraction
In the next step, we have to find all images that we want to use in our retrieval scenario and extract the needed histogram-based and hash-based features.
End of explanation
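As a quick, optional sanity check (a sketch that only assumes at least two images were found above): ImageHash objects can be subtracted directly, which returns the number of differing bits, i.e. the Hamming distance between the two perceptual hashes.
# sketch: near-duplicate pages should yield a small bit difference
if len(pHashes) > 1:
    print("pHash bit difference between image 0 and image 1: %i" % (pHashes[0] - pHashes[1]))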
img1=imread(jpegFilePaths[0])
img2=imread(jpegFilePaths[1388])
img3=imread(jpegFilePaths[1389])
#Creates two subplots and unpacks the output array immediately
f, (ax1, ax2,ax3) = plt.subplots(1, 3, sharex='all', sharey='all')
ax1.axis('off')
ax2.axis('off')
ax3.axis('off')
ax1.set_title("Image #0")
ax1.imshow(img1)
ax2.set_title("Image #1388")
ax2.imshow(img2)
ax3.set_title("Image #1389")
ax3.imshow(img3)
Explanation: To check whether the file search has succeeded, display some of the found images to get a feeling for the data we are going to deal with.
End of explanation
plt.plot(combinedNormalizedHistograms[0],"r")
plt.plot(combinedNormalizedHistograms[1388],"g")
histCut=np.absolute((np.subtract(combinedNormalizedHistograms[0],combinedNormalizedHistograms[1388])))
print(np.sum(histCut))
plt.plot(histCut,"k--")
plt.title("Histogramm Difference (black) of Two Histograms")
plt.show()
Explanation: Detecting Similar Documents
To get started, we will inspect the results of two fairly simple approaches, the difference and the intersection of two histograms.
Histogram Difference
The histogram difference is computed between the first two images displayed above. The dashed line illustrates the actual difference between the two images' histograms.
End of explanation
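A small side note, sketched with the variables defined above: the summed absolute histogram difference is exactly the Manhattan (L1) distance, i.e. the Minkowski distance with p=1 that reappears in the QBE comparison below.
# sketch: the histogram difference equals the Minkowski distance with p=1
h_a = combinedNormalizedHistograms[0]
h_b = combinedNormalizedHistograms[1388]
print(np.sum(np.absolute(np.subtract(h_a, h_b))), minkowski(h_a, h_b, p=1))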
plt.plot(combinedNormalizedHistograms[0],"r")
plt.plot(combinedNormalizedHistograms[1388],"g")
histCut=(np.minimum(combinedNormalizedHistograms[0],combinedNormalizedHistograms[1388]))
print(np.sum(histCut))
plt.plot(histCut,"k--")
plt.title("Histogramm Intersection (black) of Two Histograms")
plt.show()
Explanation: Histogram Intersection
An alternative measure is the histogram intersection, which computes the amount of overlap shared by the two histograms.
End of explanation
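Note that the raw intersection value depends on the total mass of the histograms; the following sketch (an addition, not part of the original pipeline) divides by the mass of the query histogram to obtain a similarity score in [0, 1].
# sketch: normalised histogram intersection as a similarity in [0, 1]
h_a = combinedNormalizedHistograms[0]
h_b = combinedNormalizedHistograms[1388]
print(np.sum(np.minimum(h_a, h_b)) / np.sum(h_a))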
qbeIndex=1392 # alternative query: 1685 (for index 1685, p1 and p2 agree at the beginning of the ranking)
img1=imread(jpegFilePaths[qbeIndex])
#plt.title(jpegFilePaths[qbeIndex])
plt.axis('off')
imshow(img1)
Explanation: Comparison of Different Similarity Measures and Metrics in a QBE Scenario
To compare the effectiveness of different similarity computations, we will use a query by example (QBE) scenario in which we check a query image (#1392) against all other images in the corpus to find the most similar ones. The retrieved images will be displayed in the form of a list ordered by relevance.
End of explanation
def squareRooted(x):
return round(sqrt(sum([a*a for a in x])),3)
def cosSimilarity(x,y):
numerator = sum(a*b for a,b in zip(x,y))
denominator = squareRooted(x)*squareRooted(y)
return round(numerator/float(denominator),3)
printLog("Calculating QBE scenarios...")
qbeHist=combinedNormalizedHistograms[qbeIndex]
dataDict={"index":[],"p1":[],"p2":[],"histdiff":[],"histcut":[],"emd":[],"cosine":[],"phash":[]}
for i,hist in enumerate(combinedNormalizedHistograms):
dataDict["index"].append(i)
# Manhattan distance
dataDict["p1"].append(minkowski(qbeHist,hist,p=1))
# Euclidean distance
dataDict["p2"].append(minkowski(qbeHist,hist,p=2))
# histogram difference
histDiff=np.absolute((np.subtract(qbeHist,combinedNormalizedHistograms[i])))
dataDict["histdiff"].append(np.sum(histDiff))
# histogram cut
histCut=np.minimum(qbeHist,combinedNormalizedHistograms[i])
dataDict["histcut"].append(np.sum(histCut))
# earth mover's distance aka Wasserstein
dataDict["emd"].append(wasserstein_distance(qbeHist,hist))
# cosine similarity
dataDict["cosine"].append(cosSimilarity(qbeHist,combinedNormalizedHistograms[i]))
# pHash with Hamming distance
dataDict["phash"].append(hamming(pHashes[qbeIndex],pHashes[i]))
df=pd.DataFrame(dataDict)
printLog("Done.")
Explanation: The next cell computes different measures and metrics and saves them in a dataframe.
End of explanation
df.sort_values(by=['p1']).head(20).describe()
Explanation: If we inspect the dataframe, we will see that each measure/metric yields different results which is not very surprising...
End of explanation
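To make the differences a bit more tangible, here is a small sketch: Spearman's rank correlation (spearmanr was already imported above) quantifies how similarly two columns rank the corpus; a distance and a similarity measure should correlate negatively.
# sketch: rank agreement between a distance (p1) and a similarity (cosine)
rho, p_value = spearmanr(df['p1'], df['cosine'])
print('Spearman rank correlation between p1 and cosine: %f' % rho)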
measures=["p1","p2","histdiff","histcut","emd","cosine","phash"]
ranks=dict()
printLog("Creating QBE report files...")
htmlFile=open(outputDir+"_qbe.html", "w")
printLog("HTML output will be saved to: %s"%outputDir+"_qbe.html")
htmlFile.write("<html><head>\n")
htmlFile.write("<link href='../css/helvetica.css' rel='stylesheet' type='text/css'>\n")
#htmlFile.write("<style>body {color: black;text-align: center; font-family: helvetica;} h1 {font-size:15px;position: fixed; padding-top:5px; top: 0;width: 100%;background: rgba(255,255,255,0.5);} h2 {font-size:15px;position: fixed; right: 0;width: 150px; padding-top:25px; padding-right:15px; background: rgba(255,255,255,0.5);} p {font-size:10px;} .score{font-size:6px; text-align: right;}")
htmlFile.write("</style></head>\n")
htmlFile.write("<h2>mir comparison.</h2>")
htmlFile.write("<table><tr>\n")
for measureName in measures:
typeOfMeasure="distance"
# check whether the measure is a similarity or a distance measure
# (assuming identity (i.e., identity of indiscernibles) of the measure)
if df[df.index==qbeIndex][measureName].tolist()[0]>0:
df2=df.sort_values(by=[measureName],ascending=False).head(20)
typeOfMeasure="similarity"
else:
df2=df.sort_values(by=[measureName],ascending=True).head(20)
typeOfMeasure="distance"
htmlFile.write("<td>\n")
measureTitle=measureName
if typeOfMeasure=="similarity":
measureTitle=measureName.replace("dist_","sim_")
htmlFile.write("<h1>"+measureTitle+"</h1>\n")
htmlFile.write("<p>"+typeOfMeasure+"</p>\n")
ranks[measureName]=df2.index.tolist()
jpegFilePathsReport=[]
# image directory must be relative to the directory of the html files
imgBaseDir="./extracted_images/"
for row in df2.itertuples(index=False):
i=row.index
score=getattr(row, measureName)
# create JPEG copies if not available already
tiffImage=imgBaseDir+ppnList[i]+"/"+nameList[i]
jpgPath=tiffImage.replace(".tif",".jpg")
if not os.path.exists(outputDir+jpgPath):
image = Image.open(outputDir+tiffImage)
image.thumbnail((512,512))
image.save(outputDir+jpgPath)
image.close()
os.remove(outputDir+tiffImage)
jpegFilePathsReport.append(outputDir+jpgPath)
if i==qbeIndex:
htmlFile.write("<img height=150 src='"+jpgPath+"' alt='"+str(i)+"'/>\n")
else:
htmlFile.write("<img height=150 src='"+jpgPath+"' alt='"+str(i)+"'/>\n")
#htmlFile.write("<p class='score'>"+str(score)+"</p>")
htmlFile.write("<p class='score'> </p>\n")
htmlFile.write("</td>\n")
# close the HTML file
htmlFile.write("</tr></table>\n")
htmlFile.write("</body></html>\n")
htmlFile.close()
printLog("Done.")
Explanation: To facilitate the assessment of the effectiveness of the different measures and metrics, the next cell creates an HTML overview document with the first found documents.
Sample results are available here.
End of explanation
qbeIndexLocalFeat=17 # image 17 shows the "world" (Welt) page; qbeIndex could be used here instead
img1=imread(jpegFilePaths[qbeIndexLocalFeat],as_gray=True)
img2=imread(jpegFilePaths[1301],as_gray=True)
img3=imread(jpegFilePaths[1671],as_gray=True)
#Creates two subplots and unpacks the output array immediately
f, (ax1, ax2,ax3) = plt.subplots(1, 3, sharex='all', sharey='all')
ax1.axis('off')
ax2.axis('off')
ax3.axis('off')
ax1.set_title("Query #%i"%qbeIndexLocalFeat)
ax1.imshow(img1)
ax2.set_title("Index #1301")
ax2.imshow(img2)
ax3.set_title("Index #1671")
ax3.imshow(img3)
Explanation: A Local Feature - ORB
The local ORB (Oriented FAST and Rotated BRIEF) feature takes interesting regions of an image into account - the so-called keypoints. In contrast to the presented approaches, which only consider the whole image at a time and which are therefore called global features, local feature extractors search for keypoints and try to match them with the ones found in another image.
Hypothetically speaking, such features should be helpful to discover similar details in different images no matter how they differ in scale or rotation. Hence, ORB is considered relatively scale and rotation invariant.
In this section, we will investigate whether ORB can be used to find pages in the Orbis Pictus describing the concept of the "world", which are present in three editions of the book as displayed below.
End of explanation
# extract features
descriptor_extractor = ORB(n_keypoints=200)
descriptor_extractor.detect_and_extract(img1)
keypoints1 = descriptor_extractor.keypoints
descriptors1 = descriptor_extractor.descriptors
descriptor_extractor.detect_and_extract(img2)
keypoints2 = descriptor_extractor.keypoints
descriptors2 = descriptor_extractor.descriptors
# match features
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
# visualize the results
fig, ax = plt.subplots(nrows=1, ncols=1)
plt.gray()
plot_matches(ax, img1, img2, keypoints1, keypoints2, matches12)
ax.axis('off')
ax.set_title("Image 1 vs. Image 2")
Explanation: To give an example, we will extract ORB features from the first two images and match them. The discovered matches will be illustrated below.
End of explanation
printLog("Calculating ORB QBE scenarios...")
#qbeIndexLocalFeat
# prepare QBE image
descriptor_extractor = ORB(n_keypoints=200)
# prepare QBE image
qbeImage=imread(jpegFilePaths[qbeIndexLocalFeat],as_gray=True)
descriptor_extractor.detect_and_extract(qbeImage)
qbeKeypoints = descriptor_extractor.keypoints
qbeDescriptors = descriptor_extractor.descriptors
orbDescriptors=[]
orbMatches=[]
# match QBE image against the corpus
dataDict={"index":[],"matches_orb":[]}
for i,jpeg in enumerate(jpegFilePaths):
dataDict["index"].append(i)
compImage=imread(jpeg,as_gray=True)
descriptor_extractor.detect_and_extract(compImage)
keypoints = descriptor_extractor.keypoints
descriptors = descriptor_extractor.descriptors
orbDescriptors.append(descriptors)
matches = match_descriptors(qbeDescriptors, descriptors, cross_check=True)#,max_distance=0.5)
orbMatches.append(matches)
# naive approach: count the number of matched descriptors
dataDict["matches_orb"].append(matches.shape[0])
if i%100==0:
printLog("Processed %i documents of %i."%(i,len(jpegFilePaths)))
df=pd.DataFrame(dataDict)
printLog("Done.")
df2=df.sort_values(by=['matches_orb'],ascending=False).head(20)
df2.describe()
Explanation: ATTENTION! Depending on your computer setup, the next cell will take some time to finish. See the log below to get an estimate. The experiment has been run on a MacBook Pro (13-inch, 2018, 2.7 GHz Intel Core i7, 16 GB, macOS Mojave).
In this naive approach, we will simply count the number of matches between the query image and each image in the corpus and use this value as a similarity score.
End of explanation
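A possible refinement, sketched here under the assumption that the installed scikit-image version supports the max_ratio argument of match_descriptors (0.14 or later): Lowe's ratio test discards ambiguous matches before counting them, which can make the naive score more robust.
# sketch: ratio-test filtering before counting matches (requires the max_ratio argument)
filtered_matches = match_descriptors(qbeDescriptors, orbDescriptors[0],
                                     cross_check=True, max_ratio=0.8)
print("matches kept after the ratio test for document 0: %i" % filtered_matches.shape[0])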
printLog("Calculating Hamming distances for ORB features and calculating average distance...")
averageDistancePerImage=[]
for i,matches in enumerate(orbMatches):
# matches qbe
# matches[:, 0]
# matches document
# matches[:, 1]
qbeMatchIndices=matches[:, 0]
queryMatchIndices=matches[:, 1]
sumDistances=0.0
noMatches=len(qbeMatchIndices)
for j,qbeMatchIndex in enumerate(qbeMatchIndices):
sumDistances+=hamming(qbeDescriptors[qbeMatchIndex],orbDescriptors[i][queryMatchIndices[j]])
avgDistance=sumDistances/noMatches
averageDistancePerImage.append((avgDistance,i))
if i%100==0:
printLog("Processed %i documents of %i."%(i,len(orbMatches)))
averageDistancePerImage.sort(key=lambda tup: tup[0])
printLog("Done.\n")
# create the report files
measures=["matches_orb"]
ranks=dict()
printLog("Creating QBE ORB report files...")
htmlFile=open(outputDir+"_orb.html", "w")
printLog("HTML output will be saved to: %s"%outputDir+"_orb.html")
htmlFile.write("<html><head>\n")
htmlFile.write("<link href='../css/helvetica.css' rel='stylesheet' type='text/css'>\n")
#htmlFile.write("<style>body {color: black;text-align: center; font-family: helvetica;} h1 {font-size:15px;position: fixed; padding-top:5px; top: 0;width: 100%;background: rgba(255,255,255,0.5);} h2 {font-size:15px;position: fixed; right: 0;width: 150px; padding-top:25px; padding-right:15px; background: rgba(255,255,255,0.5);} p {font-size:10px;} .score{font-size:6px; text-align: right;}")
htmlFile.write("</style></head>\n")
htmlFile.write("<h2>orb comparison.</h2>")
htmlFile.write("<table><tr>\n")
for measureName in measures:
typeOfMeasure="similarity"
htmlFile.write("<td>\n")
htmlFile.write("<h1>"+measureName+"</h1>\n")
htmlFile.write("<p>"+typeOfMeasure+"</p>\n")
ranks[measureName]=df2.index.tolist()
jpegFilePathsReport=[]
# image directory must be relative to the directory of the html files
imgBaseDir="./extracted_images/"
for row in df2.itertuples(index=False):
i=row.index
score=getattr(row, measureName)
# create JPEG copies if not available already
tiffImage=imgBaseDir+ppnList[i]+"/"+nameList[i]
jpgPath=tiffImage.replace(".tif",".jpg")
if not os.path.exists(outputDir+jpgPath):
image = Image.open(outputDir+tiffImage)
image.thumbnail((512,512))
image.save(outputDir+jpgPath)
image.close()
os.remove(outputDir+tiffImage)
jpegFilePathsReport.append(outputDir+jpgPath)
if i==qbeIndex:
htmlFile.write("<img height=150 src='"+jpgPath+"' alt='"+str(i)+"'/>\n")
else:
htmlFile.write("<img height=150 src='"+jpgPath+"' alt='"+str(i)+"'/>\n")
#htmlFile.write("<p class='score'>"+str(score)+"</p>")
htmlFile.write("<p class='score'> </p>\n")
htmlFile.write("</td>\n")
# the non-naive approach using the average distance
htmlFile.write("<td>\n")
htmlFile.write("<h1>dist_avg_orb</h1>\n")
htmlFile.write("<p>"+typeOfMeasure+"</p>\n")
for (dist,index) in averageDistancePerImage[:20]:
typeOfMeasure="similarity"
jpegFilePathsReport=[]
# image directory must be relative to the directory of the html files
imgBaseDir="./extracted_images/"
i=index
score=dist
# create JPEG copies if not available already
tiffImage=imgBaseDir+ppnList[i]+"/"+nameList[i]
jpgPath=tiffImage.replace(".tif",".jpg")
if not os.path.exists(outputDir+jpgPath):
image = Image.open(outputDir+tiffImage)
image.thumbnail((512,512))
image.save(outputDir+jpgPath)
image.close()
os.remove(outputDir+tiffImage)
jpegFilePathsReport.append(outputDir+jpgPath)
if i==qbeIndex:
htmlFile.write("<img height=150 src='"+jpgPath+"' alt='"+str(i)+"'/>\n")
else:
htmlFile.write("<img height=150 src='"+jpgPath+"' alt='"+str(i)+"'/>\n")
htmlFile.write("<p class='score'>"+str(score)+"</p>")
htmlFile.write("<p class='score'> </p>\n")
htmlFile.write("</td>\n")
#eof
# close the HTML file
htmlFile.write("</tr></table>\n")
htmlFile.write("</body></html>\n")
htmlFile.close()
printLog("Done.")
Explanation: In a slightly more sophisticated approach, we will compute the average distance over all matches for each query-image pair. This value yields another similarity score.
Finally, an HTML report file is created to compare the results of both approaches.
Sample results are available here.
End of explanation
printLog("Clustering...")
X=np.array(combinedHistograms)
numberOfClusters=20
kmeans = MiniBatchKMeans(n_clusters=numberOfClusters, random_state = 0, batch_size = 6)
kmeans=kmeans.fit(X)
printLog("Done.")
printLog("Creating report files...")
htmlFiles=[]
jpegFilePaths=[]
for i in range(0,numberOfClusters):
htmlFile=open(outputDir+str(i)+".html", "w")
htmlFile.write("<html><head><link href='../css/helvetica.css' rel='stylesheet' type='text/css'></head>\n<body>\n")
#htmlFile.write("<h1>Cluster "+str(i)+"</h1>\n")
htmlFile.write("<img src='"+str(i)+".png' width=200 />") # cluster center histogram will created below
htmlFiles.append(htmlFile)
# image directory must be relative to the directory of the html files
imgBaseDir="./extracted_images/"
for i, label in enumerate(kmeans.labels_):
# create JPEG copies if not available already
tiffImage=imgBaseDir+ppnList[i]+"/"+nameList[i]
jpgPath=tiffImage.replace(".tif",".jpg")
if not os.path.exists(outputDir+jpgPath):
image = Image.open(outputDir+tiffImage)
image.thumbnail((512,512))
image.save(outputDir+jpgPath)
image.close()
os.remove(outputDir+tiffImage)
jpegFilePaths.append(outputDir+jpgPath)
htmlFiles[label].write("<img height=200 src='"+jpgPath+"' alt='"+str(len(jpegFilePaths)-1)+"'/>\n")
# close the HTML files
for h in htmlFiles:
h.write("</body></html>\n")
h.close()
# create the summarization main HTML page
htmlFile = open(outputDir+"_main.html", "w")
printLog("HTML output will be saved to: %s"%outputDir+"_main.html")
htmlFile.write("<html><head><link href='../css/helvetica.css' rel='stylesheet' type='text/css'></head><body>\n")
htmlFile.write("<h2>cluster results.</h2>\n")
for i in range(0, numberOfClusters):
htmlFile.write("<iframe src='./"+str(i)+".html"+"' height=400 ><p>Long live Netscape!</p></iframe>")
htmlFile.write("</body></html>\n")
htmlFile.close()
printLog("Done.")
# save the cluster center histograms as images to assist the visualization
printLog("Rendering %i cluster center histograms..."%len(kmeans.cluster_centers_))
for j, histogram in enumerate(kmeans.cluster_centers_):
plt.figure(0)
# clean previous plots
plt.clf()
plt.title("Cluster %i"%j)
#red
for i in range(0, 256):
plt.bar(i, histogram[i],color='red', alpha=0.3)
# blue
for i in range(256, 512):
plt.bar(i-256, histogram[i], color='blue', alpha=0.3)
# green
for i in range(512, 768):
plt.bar(i-512, histogram[i], color='green', alpha=0.3)
#debug
#plt.show()
plt.savefig(outputDir+str(j)+".png")
printLog("Done.")
Explanation: Histogram-based Clustering
Sample results are available here.
End of explanation |
2,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Purpose
The purpose of this notebook is to work out the data structure for saving the computed results for a single session. Here we are using the xarray package to structure the data, because
Step1: Go through the steps to get the ripple triggered connectivity
Step2: Make an xarray dataset for coherence and pairwise spectral granger
Step3: Show that it is easy to select two individual tetrodes and plot a subset of their frequency for coherence.
Step4: Show the same thing for spectral granger.
Step5: Now show that we can plot all tetrodes pairs in a dataset
Step6: It is also easy to select a subset of tetrode pairs (in this case all CA1-PFC tetrode pairs).
Step7: xarray also makes it easy to compare the difference of a connectivity measure from its baseline (in this case, the baseline is the first time bin)
Step8: It is also easy to average over the tetrode pairs
Step9: And also average over the difference
Step10: Test saving as netcdf file
Step11: Show that we can open the saved dataset and recover the data
Step12: Make data structure for group delay
Step13: Make data structure for canonical coherence
Step14: Now after adding this code into the code base, test if we can compute, save, and load | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import xarray as xr
from src.data_processing import (get_LFP_dataframe, make_tetrode_dataframe,
make_tetrode_pair_info, reshape_to_segments)
from src.parameters import (ANIMALS, SAMPLING_FREQUENCY,
MULTITAPER_PARAMETERS, FREQUENCY_BANDS,
RIPPLE_COVARIATES, ALPHA)
from src.analysis import (decode_ripple_clusterless,
detect_epoch_ripples, is_overlap,
_subtract_event_related_potential)
Explanation: Purpose
The purpose of this notebook is to work out the data structure for saving the computed results for a single session. Here we are using the xarray package to structure the data, because:
It is built to handle large multi-dimensional data (originally for earth sciences data).
It allows you to call dimensions by name (time, frequency, etc).
The plotting functions are convenient for multi-dimensional data (it has convenient heatmap plotting).
It can output to HDF5 (via the netcdf format, a geosciences data format), which is built for handling large data in a descriptive (i.e. can label units, add information about how data was constructed, etc.).
Lazily loads data so large datasets that are too big for memory can be handled (via dask).
Previously, I was using the pandas package in Python and this wasn't handling the loading and combining of time-frequency data. In particular, the size of the data was problematic even on the cluster and this was frustrating to debug. pandas now recommends the usage of xarray for multi-dimensional data.
End of explanation
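To illustrate the points above in isolation, a minimal sketch with toy data (not part of the analysis itself): once the dimensions are labelled, selection and aggregation read almost like the sentence describing them.
# minimal sketch with toy data: label dimensions once, then select/aggregate by name
toy = xr.DataArray(np.random.rand(3, 4),
                   dims=['time', 'frequency'],
                   coords={'time': [0.0, 0.5, 1.0], 'frequency': [4.0, 8.0, 12.0, 16.0]})
print(toy.sel(frequency=slice(4.0, 12.0)).mean('time').values)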
epoch_key = ('HPa', 6, 2)
ripple_times = detect_epoch_ripples(
epoch_key, ANIMALS, sampling_frequency=SAMPLING_FREQUENCY)
tetrode_info = make_tetrode_dataframe(ANIMALS)[epoch_key]
tetrode_info = tetrode_info[
~tetrode_info.descrip.str.endswith('Ref').fillna(False)]
tetrode_pair_info = make_tetrode_pair_info(tetrode_info)
lfps = {tetrode_key: get_LFP_dataframe(tetrode_key, ANIMALS)
for tetrode_key in tetrode_info.index}
from copy import deepcopy
from functools import partial, wraps
multitaper_parameter_name = '4Hz_Resolution'
multitaper_params = MULTITAPER_PARAMETERS[multitaper_parameter_name]
num_lfps = len(lfps)
num_pairs = int(num_lfps * (num_lfps - 1) / 2)
params = deepcopy(multitaper_params)
window_of_interest = params.pop('window_of_interest')
reshape_to_trials = partial(
reshape_to_segments,
sampling_frequency=params['sampling_frequency'],
window_offset=window_of_interest, concat_axis=1)
ripple_locked_lfps = pd.Panel({
lfp_name: _subtract_event_related_potential(
reshape_to_trials(lfps[lfp_name], ripple_times))
for lfp_name in lfps})
from src.spectral.connectivity import Connectivity
from src.spectral.transforms import Multitaper
m = Multitaper(
np.rollaxis(ripple_locked_lfps.values, 0, 3),
**params,
start_time=ripple_locked_lfps.major_axis.min())
c = Connectivity(
fourier_coefficients=m.fft(),
frequencies=m.frequencies,
time=m.time)
Explanation: Go through the steps to get the ripple triggered connectivity
End of explanation
n_lfps = len(lfps)
ds = xr.Dataset(
{'coherence_magnitude': (['time', 'frequency', 'tetrode1', 'tetrode2'], c.coherence_magnitude()),
'pairwise_spectral_granger_prediction': (['time', 'frequency', 'tetrode1', 'tetrode2'], c.pairwise_spectral_granger_prediction())},
coords={'time': c.time + np.diff(c.time)[0] / 2,
'frequency': c.frequencies + np.diff(c.frequencies)[0] / 2,
'tetrode1': tetrode_info.tetrode_id.values,
'tetrode2': tetrode_info.tetrode_id.values,
'brain_area1': ('tetrode1', tetrode_info.area.tolist()),
'brain_area2': ('tetrode2', tetrode_info.area.tolist()),
'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),
}
)
ds
Explanation: Make an xarray dataset for coherence and pairwise spectral granger
End of explanation
ds.sel(
tetrode1='HPa621',
tetrode2='HPa624',
frequency=slice(0, 30)).coherence_magnitude.plot(x='time', y='frequency');
Explanation: Show that it is easy to select two individual tetrodes and plot a subset of their frequency for coherence.
End of explanation
ds.sel(
tetrode1='HPa621',
tetrode2='HPa6220',
frequency=slice(0, 30)
).pairwise_spectral_granger_prediction.plot(x='time', y='frequency');
Explanation: Show the same thing for spectral granger.
End of explanation
ds['pairwise_spectral_granger_prediction'].sel(
frequency=slice(0, 30)).plot(x='time', y='frequency', col='tetrode1', row='tetrode2', robust=True);
ds['coherence_magnitude'].sel(
frequency=slice(0, 30)).plot(x='time', y='frequency', col='tetrode1', row='tetrode2');
Explanation: Now show that we can plot all tetrodes pairs in a dataset
End of explanation
(ds.sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude
.plot(x='time', y='frequency', col='tetrode1', row='tetrode2'));
Explanation: It is also easy to select a subset of tetrode pairs (in this case all CA1-PFC tetrode pairs).
End of explanation
((ds - ds.isel(time=0)).sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude
.plot(x='time', y='frequency', col='tetrode1', row='tetrode2'));
Explanation: xarray also makes it easy to compare the difference of a connectivity measure from its baseline (in this case, the baseline is the first time bin)
End of explanation
(ds.sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude.mean(['tetrode1', 'tetrode2'])
.plot(x='time', y='frequency'));
Explanation: It is also easy to average over the tetrode pairs
End of explanation
((ds - ds.isel(time=0)).sel(
tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],
tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],
frequency=slice(0, 30))
.coherence_magnitude.mean(['tetrode1', 'tetrode2'])
.plot(x='time', y='frequency'));
Explanation: And also average over the difference
End of explanation
import os
path = '{0}_{1:02d}_{2:02d}.nc'.format(*epoch_key)
group = '{0}/'.format(multitaper_parameter_name)
write_mode = 'a' if os.path.isfile(path) else 'w'
ds.to_netcdf(path=path, group=group, mode=write_mode)
Explanation: Test saving as netcdf file
End of explanation
with xr.open_dataset(path, group=group) as da:
da.load()
print(da)
Explanation: Show that we can open the saved dataset and recover the data
End of explanation
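Because different multitaper settings are appended as groups to the same file, it can be handy to list what has been stored so far; the following sketch assumes the netCDF4 package is available (it typically backs xarray's netCDF I/O).
# sketch: list the groups stored in the session file so far
import netCDF4
with netCDF4.Dataset(path) as stored:
    print(list(stored.groups.keys()))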
n_bands = len(FREQUENCY_BANDS)
delay, slope, r_value = (np.zeros((c.time.size, n_bands, m.n_signals, m.n_signals)),) * 3
for band_ind, frequency_band in enumerate(FREQUENCY_BANDS):
(delay[:, band_ind, ...],
slope[:, band_ind, ...],
r_value[:, band_ind, ...]) = c.group_delay(
FREQUENCY_BANDS[frequency_band], frequency_resolution=m.frequency_resolution)
coordinate_names = ['time', 'frequency_band', 'tetrode1', 'tetrode2']
ds = xr.Dataset(
{'delay': (coordinate_names, delay),
'slope': (coordinate_names, slope),
'r_value': (coordinate_names, r_value)},
coords={'time': c.time + np.diff(c.time)[0] / 2,
'frequency_band': list(FREQUENCY_BANDS.keys()),
'tetrode1': tetrode_info.tetrode_id.values,
'tetrode2': tetrode_info.tetrode_id.values,
'brain_area1': ('tetrode1', tetrode_info.area.tolist()),
'brain_area2': ('tetrode2', tetrode_info.area.tolist()),
'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),
}
)
ds['delay'].sel(frequency_band='beta', tetrode1='HPa621', tetrode2='HPa622').plot();
Explanation: Make data structure for group delay
End of explanation
canonical_coherence, area_labels = c.canonical_coherence(tetrode_info.area.tolist())
dimension_names = ['time', 'frequency', 'brain_area1', 'brain_area2']
data_vars = {'canonical_coherence': (dimension_names, canonical_coherence)}
coordinates = {
'time': c.time + np.diff(c.time)[0] / 2,
'frequency': c.frequencies + np.diff(c.frequencies)[0] / 2,
'brain_area1': area_labels,
'brain_area2': area_labels,
'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),
}
ds = xr.Dataset(data_vars, coords=coordinates)
ds.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')
Explanation: Make data structure for canonical coherence
End of explanation
from src.analysis import ripple_triggered_connectivity
for parameters_name, parameters in MULTITAPER_PARAMETERS.items():
ripple_triggered_connectivity(
lfps, epoch_key, tetrode_info, ripple_times, parameters,
FREQUENCY_BANDS,
multitaper_parameter_name=parameters_name,
group_name='all_ripples')
with xr.open_dataset(path, group='2Hz_Resolution/all_ripples/canonical_coherence') as da:
da.load()
print(da)
da.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')
with xr.open_dataset(path, group='10Hz_Resolution/all_ripples/canonical_coherence') as da:
da.load()
print(da)
da.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')
Explanation: Now after adding this code into the code base, test if we can compute, save, and load
End of explanation |
2,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 7 – Ensemble Learning and Random Forests
This notebook contains all the sample code and solutions to the exercises in chapter 7.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Voting classifiers
Step2: Bagging ensembles
Step3: Random Forests
Step4: Out-of-Bag evaluation
Step5: Feature importance
Step6: AdaBoost
Step7: Gradient Boosting
Step8: Gradient Boosting with Early stopping | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
def image_path(fig_id):
return os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id)
def save_fig(fig_id, tight_layout=True):
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(image_path(fig_id) + ".png", format='png', dpi=300)
Explanation: Chapter 7 – Ensemble Learning and Random Forests
This notebook contains all the sample code and solutions to the exercises in chapter 7.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(random_state=42)
rnd_clf = RandomForestClassifier(random_state=42)
svm_clf = SVC(random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
log_clf = LogisticRegression(random_state=42)
rnd_clf = RandomForestClassifier(random_state=42)
svm_clf = SVC(probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
Explanation: Voting classifiers
End of explanation
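As a small sketch of what soft voting does under the hood: the ensemble averages the per-class probabilities of its members, which can be inspected directly for a few test instances.
# sketch: averaged class probabilities from the soft-voting ensemble
print(voting_clf.predict_proba(X_test[:3]))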
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap, linewidth=10)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.subplot(122)
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
Explanation: Bagging ensembles
End of explanation
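A closely related variant is pasting, i.e. sampling without replacement; the sketch below only flips the bootstrap flag and is otherwise identical to the bagging ensemble above.
# sketch: same ensemble with pasting (sampling without replacement)
paste_clf = BaggingClassifier(
    DecisionTreeClassifier(random_state=42), n_estimators=500,
    max_samples=100, bootstrap=False, n_jobs=-1, random_state=42)
paste_clf.fit(X_train, y_train)
print(accuracy_score(y_test, paste_clf.predict(X_test)))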
bag_clf = BaggingClassifier(
DecisionTreeClassifier(splitter="random", max_leaf_nodes=16, random_state=42),
n_estimators=500, max_samples=1.0, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.02, contour=False)
plt.show()
Explanation: Random Forests
End of explanation
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, n_jobs=-1, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
Explanation: Out-of-Bag evaluation
End of explanation
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
rnd_clf = RandomForestClassifier(random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = matplotlib.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
Explanation: Feature importance
End of explanation
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
plt.figure(figsize=(11, 4))
for subplot, learning_rate in ((121, 1), (122, 0.5)):
sample_weights = np.ones(m)
for i in range(5):
plt.subplot(subplot)
svm_clf = SVC(kernel="rbf", C=0.05, random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
sample_weights[y_pred != y_train] *= (1 + learning_rate)
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
plt.subplot(121)
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
save_fig("boosting_plot")
plt.show()
list(m for m in dir(ada_clf) if not m.startswith("_") and m.endswith("_"))
Explanation: AdaBoost
End of explanation
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
save_fig("gbrt_learning_rate_plot")
plt.show()
Explanation: Gradient Boosting
End of explanation
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors)
gbrt_best = GradientBoostingRegressor(max_depth=2,n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
Explanation: Gradient Boosting with Early stopping
End of explanation |
2,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AGDCv2 Landsat analytics example using USGS Surface Reflectance
Import the required libraries
Step2: Include some helpful functions
Step3: Plot the spatial extent of our data for each product
Step4: Inspect the available measurements for each product
Step5: Specify the Area of Interest for our analysis
Step6: Load Landsat Surface Reflectance for our Area of Interest
Step7: Load Landsat Pixel Quality for our area of interest
Step8: Visualise pixel quality information from our selected spatiotemporal subset
Step9: Plot the frequency of water classified in pixel quality
Step10: Plot the timeseries at the center point of the image
Step11: Remove the cloud and shadow pixels from the surface reflectance
Step12: Spatiotemporal summary NDVI median
Step13: NDVI trend over time in cropping area Point Of Interest
Step14: Create a subset around our point of interest
Step15: Plot subset image with POI at centre
Step16: NDVI timeseries plot | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import datacube
from datacube.model import Range
from datetime import datetime
dc = datacube.Datacube(app='dc-example')
from datacube.storage import masking
from datacube.storage.masking import mask_valid_data as mask_invalid_data
import pandas
import xarray
import numpy
import json
import vega
from datacube.utils import geometry
numpy.seterr(divide='ignore', invalid='ignore')
import folium
from IPython.display import display
import geopandas
from shapely.geometry import mapping
from shapely.geometry import MultiPolygon
import rasterio
import shapely.geometry
import shapely.ops
from functools import partial
import pyproj
from datacube.model import CRS
from datacube.utils import geometry
## From http://scikit-image.org/docs/dev/auto_examples/plot_equalize.html
from skimage import data, img_as_float
from skimage import exposure
datacube.__version__
Explanation: AGDCv2 Landsat analytics example using USGS Surface Reflectance
Import the required libraries
End of explanation
def datasets_union(dss):
thing = geometry.unary_union(ds.extent for ds in dss)
return thing.to_crs(geometry.CRS('EPSG:4326'))
import random
def plot_folium(shapes):
mapa = folium.Map(location=[17.38,78.48], zoom_start=8)
colors=['#00ff00', '#ff0000', '#00ffff', '#ffffff', '#000000', '#ff00ff']
for shape in shapes:
style_function = lambda x: {'fillColor': '#000000' if x['type'] == 'Polygon' else '#00ff00',
'color' : random.choice(colors)}
poly = folium.features.GeoJson(mapping(shape), style_function=style_function)
mapa.add_children(poly)
display(mapa)
# determine the clip parameters for a target clear (cloud free image) - identified through the index provided
def get_p2_p98(rgb, red, green, blue, index):
r = numpy.nan_to_num(numpy.array(rgb.data_vars[red][index]))
g = numpy.nan_to_num(numpy.array(rgb.data_vars[green][index]))
b = numpy.nan_to_num(numpy.array(rgb.data_vars[blue][index]))
rp2, rp98 = numpy.percentile(r, (2, 99))
gp2, gp98 = numpy.percentile(g, (2, 99))
bp2, bp98 = numpy.percentile(b, (2, 99))
return(rp2, rp98, gp2, gp98, bp2, bp98)
def plot_rgb(rgb, rp2, rp98, gp2, gp98, bp2, bp98, red, green, blue, index):
r = numpy.nan_to_num(numpy.array(rgb.data_vars[red][index]))
g = numpy.nan_to_num(numpy.array(rgb.data_vars[green][index]))
b = numpy.nan_to_num(numpy.array(rgb.data_vars[blue][index]))
r_rescale = exposure.rescale_intensity(r, in_range=(rp2, rp98))
g_rescale = exposure.rescale_intensity(g, in_range=(gp2, gp98))
b_rescale = exposure.rescale_intensity(b, in_range=(bp2, bp98))
rgb_stack = numpy.dstack((r_rescale,g_rescale,b_rescale))
img = img_as_float(rgb_stack)
return(img)
def plot_water_pixel_drill(water_drill):
vega_data = [{'x': str(ts), 'y': str(v)} for ts, v in zip(water_drill.time.values, water_drill.values)]
vega_spec = {"width":720,"height":90,"padding":{"top":10,"left":80,"bottom":60,"right":30},"data":[{"name":"wofs","values":[{"code":0,"class":"dry","display":"Dry","color":"#D99694","y_top":30,"y_bottom":50},{"code":1,"class":"nodata","display":"No Data","color":"#A0A0A0","y_top":60,"y_bottom":80},{"code":2,"class":"shadow","display":"Shadow","color":"#A0A0A0","y_top":60,"y_bottom":80},{"code":4,"class":"cloud","display":"Cloud","color":"#A0A0A0","y_top":60,"y_bottom":80},{"code":1,"class":"wet","display":"Wet","color":"#4F81BD","y_top":0,"y_bottom":20},{"code":3,"class":"snow","display":"Snow","color":"#4F81BD","y_top":0,"y_bottom":20},{"code":255,"class":"fill","display":"Fill","color":"#4F81BD","y_top":0,"y_bottom":20}]},{"name":"table","format":{"type":"json","parse":{"x":"date"}},"values":[],"transform":[{"type":"lookup","on":"wofs","onKey":"code","keys":["y"],"as":["class"],"default":null},{"type":"filter","test":"datum.y != 255"}]}],"scales":[{"name":"x","type":"time","range":"width","domain":{"data":"table","field":"x"},"round":true},{"name":"y","type":"ordinal","range":"height","domain":["water","not water","not observed"],"nice":true}],"axes":[{"type":"x","scale":"x","formatType":"time"},{"type":"y","scale":"y","tickSize":0}],"marks":[{"description":"data plot","type":"rect","from":{"data":"table"},"properties":{"enter":{"xc":{"scale":"x","field":"x"},"width":{"value":"1"},"y":{"field":"class.y_top"},"y2":{"field":"class.y_bottom"},"fill":{"field":"class.color"},"strokeOpacity":{"value":"0"}}}}]}
spec_obj = json.loads(vega_spec)
spec_obj['data'][1]['values'] = vega_data
return vega.Vega(spec_obj)
Explanation: Include some helpful functions
End of explanation
plot_folium([datasets_union(dc.index.datasets.search_eager(product='ls5_ledaps_scene')),\
datasets_union(dc.index.datasets.search_eager(product='ls7_ledaps_scene')),\
datasets_union(dc.index.datasets.search_eager(product='ls8_ledaps_scene'))])
Explanation: Plot the spatial extent of our data for each product
End of explanation
dc.list_measurements()
Explanation: Inspect the available measurements for each product
End of explanation
# Hyderbad
# 'lon': (78.40, 78.57),
# 'lat': (17.36, 17.52),
# Lake Singur
# 'lat': (17.67, 17.84),
# 'lon': (77.83, 78.0),
# Lake Singur Dam
query = {
'lat': (17.72, 17.79),
'lon': (77.88, 77.95),
}
Explanation: Specify the Area of Interest for our analysis
End of explanation
products = ['ls5_ledaps_scene','ls7_ledaps_scene','ls8_ledaps_scene']
datasets = []
for product in products:
ds = dc.load(product=product, measurements=['nir','red', 'green','blue'], output_crs='EPSG:32644',resolution=(-30,30), **query)
ds['product'] = ('time', numpy.repeat(product, ds.time.size))
datasets.append(ds)
sr = xarray.concat(datasets, dim='time')
sr = sr.isel(time=sr.time.argsort()) # sort along time dim
sr = sr.where(sr != -9999)
##### include an index here for the timeslice with representative data for best stretch of time series
# don't run this to keep the same limits as the previous sensor
#rp2, rp98, gp2, gp98, bp2, bp98 = get_p2_p98(sr,'red','green','blue', 0)
rp2, rp98, gp2, gp98, bp2, bp98 = (300.0, 2000.0, 300.0, 2000.0, 300.0, 2000.0)
print(rp2, rp98, gp2, gp98, bp2, bp98)
plt.imshow(plot_rgb(sr,rp2, rp98, gp2, gp98, bp2, bp98,'red',
'green', 'blue', 0),interpolation='nearest')
Explanation: Load Landsat Surface Reflectance for our Area of Interest
End of explanation
datasets = []
for product in products:
ds = dc.load(product=product, measurements=['cfmask'], output_crs='EPSG:32644',resolution=(-30,30), **query).cfmask
ds['product'] = ('time', numpy.repeat(product, ds.time.size))
datasets.append(ds)
pq = xarray.concat(datasets, dim='time')
pq = pq.isel(time=pq.time.argsort()) # sort along time dim
del(datasets)
Explanation: Load Landsat Pixel Quality for our area of interest
End of explanation
pq.attrs['flags_definition'] = {'cfmask': {'values': {'255': 'fill', '1': 'water', '2': 'shadow', '3': 'snow', '4': 'cloud', '0': 'clear'}, 'description': 'CFmask', 'bits': [0, 1, 2, 3, 4, 5, 6, 7]}}
pandas.DataFrame.from_dict(masking.get_flags_def(pq), orient='index')
Explanation: Visualise pixel quality information from our selected spatiotemporal subset
End of explanation
water = masking.make_mask(pq, cfmask ='water')
water.sum('time').plot(cmap='nipy_spectral')
Explanation: Plot the frequency of water classified in pixel quality
End of explanation
plot_water_pixel_drill(pq.isel(y=int(water.shape[1] / 2), x=int(water.shape[2] / 2)))
del(water)
Explanation: Plot the timeseries at the center point of the image
End of explanation
# build a binary cloud mask and invert it (abs(mask*-1+1) flips 0 <-> 1),
# so that cloudy pixels become 0 and are dropped by .where()
mask = masking.make_mask(pq, cfmask ='cloud')
mask = abs(mask*-1+1)
sr = sr.where(mask)
# repeat for cloud-shadow pixels
mask = masking.make_mask(pq, cfmask ='shadow')
mask = abs(mask*-1+1)
sr = sr.where(mask)
del(mask)
del(pq)
sr.attrs['crs'] = CRS('EPSG:32644')
Explanation: Remove the cloud and shadow pixels from the surface reflectance
End of explanation
ndvi_median = ((sr.nir-sr.red)/(sr.nir+sr.red)).median(dim='time')
ndvi_median.attrs['crs'] = CRS('EPSG:32644')
ndvi_median.plot(cmap='YlGn', robust=True)
Explanation: Spatiotemporal summary NDVI median
End of explanation
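The same pattern yields other temporal summaries; a sketch (the colour map choice is arbitrary) computing the per-pixel NDVI variability over the time series:
# sketch: per-pixel NDVI variability over time
ndvi_std = ((sr.nir - sr.red) / (sr.nir + sr.red)).std(dim='time')
ndvi_std.attrs['crs'] = CRS('EPSG:32644')
ndvi_std.plot(cmap='magma', robust=True)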
poi_latitude = 17.749343
poi_longitude = 77.935634
p = geometry.point(x=poi_longitude, y=poi_latitude, crs=geometry.CRS('EPSG:4326')).to_crs(sr.crs)
Explanation: NDVI trend over time in cropping area Point Of Interest
End of explanation
subset = sr.sel(x=((sr.x > p.points[0][0]-1000)), y=((sr.y < p.points[0][1]+1000)))
subset = subset.sel(x=((subset.x < p.points[0][0]+1000)), y=((subset.y > p.points[0][1]-1000)))
Explanation: Create a subset around our point of interest
End of explanation
plt.imshow(plot_rgb(subset,rp2, rp98, gp2, gp98, bp2, bp98,'red',
'green', 'blue',0),interpolation='nearest' )
Explanation: Plot subset image with POI at centre
End of explanation
((sr.nir-sr.red)/(sr.nir+sr.red)).sel(x=p.points[0][0], y=p.points[0][1], method='nearest').plot(marker='o')
Explanation: NDVI timeseries plot
End of explanation |
2,110 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am having a problem with minimization procedure. Actually, I could not create a correct objective function for my problem. | Problem:
import scipy.optimize
import numpy as np
np.random.seed(42)
a = np.random.rand(3,5)
x_true = np.array([10, 13, 5, 8, 40])
y = a.dot(x_true ** 2)
x0 = np.array([2, 3, 1, 4, 20])
x_lower_bounds = x_true / 2
def residual_ans(x, a, y):
s = ((y - a.dot(x**2))**2).sum()
return s
bounds = [[x, None] for x in x_lower_bounds]
out = scipy.optimize.minimize(residual_ans, x0=x0, args=(a, y), method= 'L-BFGS-B', bounds=bounds).x |
2,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression Classification
Step1: Utility function to create the appropriate data frame for classification algorithms in MLlib
Step2: create the dataframe from a csv
Step3: Classification algorithms require numeric values for labels
Step4: schema verification
Step5: Instantiate the Logistic Regression estimator.
Step6: We use a ParamGridBuilder to construct a grid of parameters to search over.
TrainValidationSplit will try all combinations of values and determine best model using the evaluator.
Step7: In this case the estimator is simply the logistic regression.
A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
Step8: Fit the model to the training data.
Step9: Compute the predictions from the model | Python Code:
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml import Pipeline
from pyspark.mllib.regression import LabeledPoint
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import StringIndexer
from pyspark.mllib.evaluation import MulticlassMetrics
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit
Explanation: Logistic Regression Classification
End of explanation
def mapLibSVM(row):
return (row[5],Vectors.dense(row[:3]))
Explanation: Utility function to create the appropriate data frame for classification algorithms in MLlib
End of explanation
df = spark.read \
.format("csv") \
.option("header", "true") \
.option("inferSchema", "true") \
.load("datasets/iris.data")
Explanation: create the dataframe from a csv
End of explanation
indexer = StringIndexer(inputCol="label", outputCol="labelIndex")
indexer = indexer.fit(df).transform(df)
indexer.show()
dfLabeled = indexer.rdd.map(mapLibSVM).toDF(["label", "features"])
dfLabeled.show()
train, test = dfLabeled.randomSplit([0.9, 0.1], seed=12345)
Explanation: Classification algorithms require numeric values for labels
End of explanation
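If the original string labels are needed again later (for example, to report predictions by species name), the fitted StringIndexer model can be kept and inverted with IndexToString. A small sketch, re-fitting the indexer here because the fitted model object was not retained above:
from pyspark.ml.feature import IndexToString
indexerModel = StringIndexer(inputCol="label", outputCol="labelIndex").fit(df)
backConverter = IndexToString(inputCol="labelIndex", outputCol="originalLabel",
                              labels=indexerModel.labels)
backConverter.transform(indexerModel.transform(df)).select("labelIndex", "originalLabel").show(5)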
train.printSchema()
Explanation: schema verification
End of explanation
lr = LogisticRegression(labelCol="label", maxIter=10)
Explanation: Instantiate the Logistic Regression estimator.
End of explanation
paramGrid = ParamGridBuilder()\
.addGrid(lr.regParam, [0.1, 0.001]) \
.build()
Explanation: We use a ParamGridBuilder to construct a grid of parameters to search over.
TrainValidationSplit will try all combinations of values and determine the best model using the evaluator.
End of explanation
tvs = TrainValidationSplit(estimator=lr,
estimatorParamMaps=paramGrid,
evaluator=RegressionEvaluator(),
# 90% of the data will be used for training, 10% for validation.
trainRatio=0.9)
Explanation: In this case the estimator is simply the logistic regression.
A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
End of explanation
model = tvs.fit(train)
Explanation: Fit the estimator to the training data.
End of explanation
result = model.transform(test)
predictions = result.select(["prediction", "label"])
predictions.show()
# Instantiate metrics object
metrics = MulticlassMetrics(predictions.rdd)
# Overall statistics
print("Summary Stats")
print("Precision = %s" % metrics.precision())
print("Recall = %s" % metrics.recall())
print("F1 Score = %s" % metrics.fMeasure())
print("Accuracy = %s" % metrics.accuracy)
# Weighted stats
print("Weighted recall = %s" % metrics.weightedRecall)
print("Weighted precision = %s" % metrics.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics.weightedFalsePositiveRate)
Explanation: Compute the predictions from the model
End of explanation |
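One design note: the grid search above is scored with a RegressionEvaluator on the label index, which runs but is an odd fit for a classification task. A MulticlassClassificationEvaluator is the more natural choice and can also double-check the accuracy computed above; this is a hedged alternative, not what the original selector used:
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
clfEvaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",
                                                 metricName="accuracy")
print("Accuracy (MulticlassClassificationEvaluator) = %s" % clfEvaluator.evaluate(result))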
2,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Subject Selection Experiments disorder data - Srinivas (handle: thewickedaxe)
Step1: Extracting the samples we are interested in
Step2: Dimensionality reduction
Manifold Techniques
ISOMAP
Step3: Clustering and other grouping experiments
K-Means clustering - iso
Step4: As is evident from the above 2 experiments, no clear clustering is apparent, but there is significant overlap and there are 2 clear groups
Classification Experiments
Let's experiment with a bunch of classifiers | Python Code:
# Standard
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Dimensionality reduction and Clustering
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn import manifold, datasets
from itertools import cycle
# Plotting tools and classifiers
from matplotlib.colors import ListedColormap
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn import preprocessing
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn import cross_validation
from sklearn.cross_validation import LeaveOneOut
# Let's read the data in and clean it
def get_NaNs(df):
columns = list(df.columns.get_values())
row_metrics = df.isnull().sum(axis=1)
rows_with_na = []
for i, x in enumerate(row_metrics):
if x > 0: rows_with_na.append(i)
return rows_with_na
def remove_NaNs(df):
rows_with_na = get_NaNs(df)
cleansed_df = df.drop(df.index[rows_with_na], inplace=False)
return cleansed_df
initial_data = pd.DataFrame.from_csv('Data_Adults_1_reduced_inv2.csv')
cleansed_df = remove_NaNs(initial_data)
# Let's also get rid of nominal data
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
X = cleansed_df.select_dtypes(include=numerics)
print X.shape
# Let's now clean columns getting rid of certain columns that might not be important to our analysis
cols2drop = ['GROUP_ID', 'doa', 'Baseline_header_id', 'Concentration_header_id']
X = X.drop(cols2drop, axis=1, inplace=False)
print X.shape
# For our studies children skew the data, it would be cleaner to just analyse adults
X = X.loc[X['Age'] >= 18]
Y = X.loc[X['race_id'] == 1]
X = X.loc[X['Gender_id'] == 1]
print X.shape
print Y.shape
Explanation: Subject Selection Experiments disorder data - Srinivas (handle: thewickedaxe)
Initial Data Cleaning
End of explanation
# Let's extract ADHd and Bipolar patients (mutually exclusive)
ADHD_men = X.loc[X['ADHD'] == 1]
ADHD_men = ADHD_men.loc[ADHD_men['Bipolar'] == 0]
BP_men = X.loc[X['Bipolar'] == 1]
BP_men = BP_men.loc[BP_men['ADHD'] == 0]
ADHD_cauc = Y.loc[Y['ADHD'] == 1]
ADHD_cauc = ADHD_cauc.loc[ADHD_cauc['Bipolar'] == 0]
BP_cauc = Y.loc[Y['Bipolar'] == 1]
BP_cauc = BP_cauc.loc[BP_cauc['ADHD'] == 0]
print ADHD_men.shape
print BP_men.shape
print ADHD_cauc.shape
print BP_cauc.shape
# Keeping a backup of the data frame object because numpy arrays don't play well with certain scikit functions
ADHD_men = pd.DataFrame(ADHD_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))
BP_men = pd.DataFrame(BP_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))
ADHD_cauc = pd.DataFrame(ADHD_cauc.drop(['Patient_ID', 'race_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))
BP_cauc = pd.DataFrame(BP_cauc.drop(['Patient_ID', 'race_id', 'ADHD', 'Bipolar'], axis = 1, inplace = False))
Explanation: Extracting the samples we are interested in
End of explanation
combined1 = pd.concat([ADHD_men, BP_men])
combined2 = pd.concat([ADHD_cauc, BP_cauc])
print combined1.shape
print combined2.shape
combined1 = preprocessing.scale(combined1)
combined2 = preprocessing.scale(combined2)
combined1 = manifold.Isomap(20, 20).fit_transform(combined1)
ADHD_men_iso = combined1[:1326]
BP_men_iso = combined1[1326:]
combined2 = manifold.Isomap(20, 20).fit_transform(combined2)
ADHD_cauc_iso = combined2[:1379]
BP_cauc_iso = combined2[1379:]
Explanation: Dimensionality reduction
Manifold Techniques
ISOMAP
End of explanation
data1 = pd.concat([pd.DataFrame(ADHD_men_iso), pd.DataFrame(BP_men_iso)])
data2 = pd.concat([pd.DataFrame(ADHD_cauc_iso), pd.DataFrame(BP_cauc_iso)])
print data1.shape
print data2.shape
kmeans = KMeans(n_clusters=2)
kmeans.fit(data1.get_values())
labels1 = kmeans.labels_
centroids1 = kmeans.cluster_centers_
print('Estimated number of clusters: %d' % len(centroids1))
for label in [0, 1]:
ds = data1.get_values()[np.where(labels1 == label)]
plt.plot(ds[:,0], ds[:,1], '.')
lines = plt.plot(centroids1[label,0], centroids1[label,1], 'o')
kmeans = KMeans(n_clusters=2)
kmeans.fit(data2.get_values())
labels2 = kmeans.labels_
centroids2 = kmeans.cluster_centers_
print('Estimated number of clusters: %d' % len(centroids2))
for label in [0, 1]:
ds2 = data2.get_values()[np.where(labels2 == label)]
plt.plot(ds2[:,0], ds2[:,1], '.')
lines = plt.plot(centroids2[label,0], centroids2[label,1], 'o')
Explanation: Clustering and other grouping experiments
K-Means clustering - iso
End of explanation
ADHD_men_iso = pd.DataFrame(ADHD_men_iso)
BP_men_iso = pd.DataFrame(BP_men_iso)
ADHD_cauc_iso = pd.DataFrame(ADHD_cauc_iso)
BP_cauc_iso = pd.DataFrame(BP_cauc_iso)
BP_men_iso['ADHD-Bipolar'] = 0
ADHD_men_iso['ADHD-Bipolar'] = 1
BP_cauc_iso['ADHD-Bipolar'] = 0
ADHD_cauc_iso['ADHD-Bipolar'] = 1
data1 = pd.concat([ADHD_men_iso, BP_men_iso])
data2 = pd.concat([ADHD_cauc_iso, BP_cauc_iso])
class_labels1 = data1['ADHD-Bipolar']
class_labels2 = data2['ADHD-Bipolar']
data1 = data1.drop(['ADHD-Bipolar'], axis = 1, inplace = False)
data2 = data2.drop(['ADHD-Bipolar'], axis = 1, inplace = False)
data1 = data1.get_values()
data2 = data2.get_values()
# Leave one Out cross validation
def leave_one_out(classifier, values, labels):
leave_one_out_validator = LeaveOneOut(len(values))
classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)
accuracy = classifier_metrics.mean()
deviation = classifier_metrics.std()
return accuracy, deviation
rf = RandomForestClassifier(n_estimators = 22)
qda = QDA()
lda = LDA()
gnb = GaussianNB()
classifier_accuracy_list = []
classifiers = [(rf, "Random Forest"), (lda, "LDA"), (qda, "QDA"), (gnb, "Gaussian NB")]
for classifier, name in classifiers:
accuracy, deviation = leave_one_out(classifier, data1, class_labels1)
print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)
classifier_accuracy_list.append((name, accuracy))
for classifier, name in classifiers:
accuracy, deviation = leave_one_out(classifier, data2, class_labels2)
print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)
classifier_accuracy_list.append((name, accuracy))
Explanation: As is evident from the above 2 experiments, no clear clustering is apparent, but there is significant overlap and there are 2 clear groups
Classification Experiments
Let's experiment with a bunch of classifiers
End of explanation |
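To compare the classifiers at a glance, the (name, accuracy) pairs accumulated in classifier_accuracy_list can be plotted; a small matplotlib sketch (the first four bars come from the data1 runs, the last four from the data2 runs, following the loop order above):
names = [name for name, acc in classifier_accuracy_list]
accs = [acc for name, acc in classifier_accuracy_list]
plt.figure(figsize=(8, 4))
plt.bar(range(len(accs)), accs)
plt.xticks(range(len(accs)), names, rotation=45)
plt.ylabel('LOO accuracy')
plt.title('Classifier comparison (data1 runs first, data2 runs second)')
plt.show()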
2,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author: Pascal, [email protected]
Step1: First let's check if there are new or deleted files (only matching by file names).
Step2: So we have the same set of files in both versions
Step3: Let's make sure the structure hasn't changed
Step4: All files have the same columns as before
Step5: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
Step6: Alright, so the only change seems to be 4 new jobs added. Let's take a look (only showing interesting fields)
Step7: These seem to be refinements of existing jobs, and two new ones are related to digital mediation.
OK, let's look at the changes in items
Step8: As anticipated it is a very minor change (hard to see it visually)
Step9: The new ones seem legit to me.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
Step10: So in addition to the added items, there are a few fixes. Let's have a look at them
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '338'
NEW_VERSION = '339'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
Explanation: Author: Pascal, [email protected]
Date: 2019-06-25
ROME update from v338 to v339
In June 2019 a new version of the ROME was released. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires having the two versions of the ROME in your data/rome/csv folder, which happens only just before we switch to v339. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
Explanation: First let's check if there are new or deleted files (only matching by file names).
End of explanation
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
rome_data = [VersionedDataset(
basename=path.basename(f),
old=pd.read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=pd.read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
Explanation: So we have the same set of files in both versions: good start.
Now let's set up a dataset that, for each table, links both the old and the new file together.
End of explanation
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
Explanation: Let's make sure the structure hasn't changed:
End of explanation
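Purely illustrative: if the check above had reported a mismatch, the differing columns could be listed with simple set operations (this prints nothing when the schemas are identical):
for dataset in rome_data:
    added_cols = set(dataset.new.columns) - set(dataset.old.columns)
    removed_cols = set(dataset.old.columns) - set(dataset.new.columns)
    if added_cols or removed_cols:
        print(dataset.basename, 'added:', sorted(added_cols), 'removed:', sorted(removed_cols))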
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(
diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(
-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(
same_row_count_files, len(rome_data)))
Explanation: All files have the same columns as before: still good.
Now let's see for each file if there are more or less rows.
End of explanation
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
End of explanation
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
Explanation: Alright, so the only change seems to be 4 new jobs added. Let's take a look (only showing interesting fields):
End of explanation
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
Explanation: These seem to be refinements of existing jobs, and two new ones are related to digital mediation.
OK, let's look at the changes in items:
End of explanation
items.new[items.new.code_ogr.isin(new_items)].head()
Explanation: As anticipated it is a very minor change (hard to see it visually): 4 new ones have been created. Let's have a look at them.
End of explanation
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)]
new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)]
old = old_links_on_stable_items[['code_rome', 'code_ogr']]
new = new_links_on_stable_items[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
Explanation: The new ones seem legit to me.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
End of explanation
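Before looking at individual rows, a quick per-job-group tally of additions and removals can help spot where the churn is concentrated (a small pandas summary on the links_merged frame built above):
diff_counts = (links_merged.dropna(subset=['_diff'])
               .groupby(['code_rome', '_diff']).size().unstack(fill_value=0))
diff_counts.loc[diff_counts.sum(axis=1).sort_values(ascending=False).head().index]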
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
display(links_merged[links_merged._diff == 'removed'].dropna().head(5))
links_merged[links_merged._diff == 'added'].dropna().head(5)
Explanation: So in addition to the added items, there are a few fixes. Let's have a look at them:
End of explanation |
2,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
"What just happened???"
Here we take an existing modflow model and set up a very complex parameterization system for arrays and boundary conditions. All parameters are set up as multipliers
Step1: You want some pilot points? We got that...how about one set of recharge multiplier pilot points applied to all stress periods? and sy in layer 1?
Step2: Parameterization
Step3: You want some constants (uniform value multipliers)? We got that too....
Step4: You want grid-scale parameter flexibility for hk in all layers? We got that too...and how about sy in layer 1 and vka in layer 2 while we are at it
Step5: Some people like using zones...so we have those too
Step6: But wait, boundary conditions are uncertain too...Can we add some parameter to represent that uncertainty? You know it!
Step7: Observations
Since observations are "free", we can carry lots of them around...
Step8: Here it goes...
Now we will use all these args to construct a complete PEST interface - template files, instruction files, control file and even the forward run script! All parameters are set up as multipliers against the existing inputs in the modflow model - the existing inputs are extracted (with flopy) and saved in a sub directory for safekeeping and for multiplying against during a forward model run. The constructor will also write a full (covariances included) prior parameter covariance matrix, which is needed for all sorts of important analyses.
Step9: The mfp_boss instance contains a pyemu.Pst object (it's already been saved to a file, but you may want to manipulate it more)
Step10: That was crazy easy - this used to take me weeks to get a PEST interface set up with this level of complexity
Step11: Let's look at that important prior covariance matrix
Step12: adjusting parameter bounds
Let's say you don't like the parameter bounds in the new control file (note you can pass a par_bounds arg to the constructor).
Step13: Let's change the welflux pars
Step14: Boom! | Python Code:
%matplotlib inline
import os
import platform
import shutil
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import flopy
import pyemu
nam_file = "freyberg.nam"
org_model_ws = "freyberg_sfr_update"
temp_model_ws = "temp"
new_model_ws = "template"
# load the model, change dir and run once just to make sure everything is working
m = flopy.modflow.Modflow.load(nam_file,model_ws=org_model_ws,check=False, exe_name="mfnwt",
forgive=False,verbose=True)
m.change_model_ws(temp_model_ws,reset_external=True)
m.write_input()
EXE_DIR = os.path.join("..","bin")
if "window" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"win")
elif "darwin" in platform.platform().lower() or "macos" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"mac")
else:
EXE_DIR = os.path.join(EXE_DIR,"linux")
[shutil.copy2(os.path.join(EXE_DIR,f),os.path.join(temp_model_ws,f)) for f in os.listdir(EXE_DIR)]
try:
    m.run_model()
except Exception:
    # tolerate a failed test run here; the input files have already been written above
    pass
Explanation: "What just happened???"
Here we take an existing modflow model and set up a very complex parameterization system for arrays and boundary conditions. All parameters are set up as multipliers: the original inputs from the modflow model are saved in separate files and, during the forward run, they are multiplied by the parameters to form new model inputs. The forward run script ("forward_run.py") is also written. And a somewhat meaningful prior covariance matrix is constructed from geostatistical structures without any additional arguments...oh yeah!
End of explanation
m.get_package_list()
Explanation: You want some pilot points? We got that...how about one set of recharge multiplier pilot points applied to all stress periods? and sy in layer 1?
End of explanation
pp_props = [["upw.sy",0], ["rch.rech",None]]
Explanation: Parameterization
End of explanation
const_props = []
for iper in range(m.nper): # recharge for past and future
const_props.append(["rch.rech",iper])
for k in range(m.nlay):
const_props.append(["upw.hk",k])
const_props.append(["upw.ss",k])
Explanation: You want some constants (uniform value multipliers)? We got that too....
End of explanation
grid_props = [["upw.sy",0],["upw.vka",1]]
for k in range(m.nlay):
grid_props.append(["upw.hk",k])
Explanation: You want grid-scale parameter flexibility for hk in all layers? We got that too...and how about sy in layer 1 and vka in layer 2 while we are at it
End of explanation
zn_array = np.loadtxt(os.path.join("Freyberg_Truth","hk.zones"))
plt.imshow(zn_array)
zone_props = [["upw.ss",0], ["rch.rech",0],["rch.rech",1]]
k_zone_dict = {k:zn_array for k in range(m.nlay)}
Explanation: Some people like using zones...so we have those too
End of explanation
bc_props = []
for iper in range(m.nper):
bc_props.append(["wel.flux",iper])
Explanation: But wait, boundary conditions are uncertain too...Can we add some parameter to represent that uncertainty? You know it!
End of explanation
# here we are building a list of (stress period, layer) pairs (zero-based) that we will use
# to set up observations from every active model cell for a given pair
hds_kperk = []
for iper in range(m.nper):
for k in range(m.nlay):
hds_kperk.append([iper,k])
Explanation: Observations
Since observations are "free", we can carry lots of them around...
End of explanation
mfp_boss = pyemu.helpers.PstFromFlopyModel(nam_file,new_model_ws,org_model_ws=temp_model_ws,
pp_props=pp_props,spatial_list_props=bc_props,
zone_props=zone_props,grid_props=grid_props,
const_props=const_props,k_zone_dict=k_zone_dict,
remove_existing=True,pp_space=4,sfr_pars=True,
sfr_obs=True,hds_kperk=hds_kperk)
EXE_DIR = os.path.join("..","bin")
if "window" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"win")
elif "darwin" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"mac")
else:
EXE_DIR = os.path.join(EXE_DIR,"linux")
[shutil.copy2(os.path.join(EXE_DIR,f),os.path.join(new_model_ws,f)) for f in os.listdir(EXE_DIR)]
Explanation: Here it goes...
Now we will use all these args to construct a complete PEST interface - template files, instruction files, control file and even the forward run script! All parameters are set up as multipliers against the existing inputs in the modflow model - the existing inputs are extracted (with flopy) and saved in a sub directory for safekeeping and for multiplying against during a forward model run. The constructor will also write a full (covariances included) prior parameter covariance matrix, which is needed for all sorts of important analyses.
End of explanation
pst = mfp_boss.pst
pst.npar,pst.nobs
Explanation: The mfp_boss instance contains a pyemu.Pst object (it's already been saved to a file, but you may want to manipulate it more)
End of explanation
pst.template_files
pst.instruction_files
Explanation: That was crazy easy - this used to take me weeks to get a PEST interface set up with this level of complexity
End of explanation
cov = pyemu.Cov.from_ascii(os.path.join(new_model_ws,m.name+".pst.prior.cov"))
cov = cov.x
cov[cov==0] = np.NaN
plt.imshow(cov)
Explanation: Let's look at that important prior covariance matrix
End of explanation
pst.parameter_data
Explanation: adjusting parameter bounds
Let's say you don't like the parameter bounds in the new control file (note you can pass a par_bounds arg to the constructor).
End of explanation
par = pst.parameter_data #get a ref to the parameter data dataframe
wpars = par.pargp=="welflux_k02"
par.loc[wpars]
par.loc[wpars,"parubnd"] = 1.1
par.loc[wpars,"parlbnd"] = 0.9
pst.parameter_data
# now we need to rebuild the prior parameter covariance matrix
cov = mfp_boss.build_prior()
Explanation: Let's change the welflux pars
End of explanation
x = cov.x
x[x==0.0] = np.NaN
plt.imshow(x)
Explanation: Boom!
End of explanation |
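With the rebuilt prior covariance in hand, a common next step is to draw a prior parameter ensemble for uncertainty analyses. A minimal sketch, assuming the ParameterEnsemble.from_gaussian_draw interface available in recent pyemu versions; the number of realizations is arbitrary:
pe = pyemu.ParameterEnsemble.from_gaussian_draw(pst=pst, cov=cov, num_reals=100)
pe.to_csv(os.path.join(new_model_ws, "prior_ensemble.csv"))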
2,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SMOGN (0.1.0)
Step1: Dependencies
Next, we load the required dependencies. Here we import smogn to later apply Synthetic Minority Over-Sampling Technique for Regression with Gaussian Noise. In addition, we use pandas for data handling, and seaborn to visualize our results.
Step2: Data
After that, we load our data. In this example, we use the Ames Housing Dataset training split retrieved from Kaggle, originally compiled by Dean De Cock. In this case, we name our training set housing
Step3: Synthetic Minority Over-Sampling Technique for Regression with Gaussian Noise
Here we cover the focus of this example. We call the smoter function from this package (smogn.smoter) and satisfy a typical number of arguments
Step4: Note
Step5: Further examining the results, we can see that the distribution of the response variable has changed. By calling the box_plot_stats function from this package (smogn.box_plot_stats) we quickly verify.
Notice that the modified training set's box plot five number summary has changed, where the distribution of the response variable has skewed right when compared to the original training set.
Step6: Plotting the results of both the original and modified training sets, the skewed right distribution of the response variable in the modified training set becomes more evident.
In this example, SMOGN over-sampled observations whose 'SalePrice' was found to be extremely high according to the box plot (those considered "minority") and under-sampled observations that were closer to the median (those considered "majority").
This is perhaps most useful when most of the y values of interest in predicting are found at the extremes of the distribution within a given dataset. | Python Code:
## suppress install output
%%capture
## install pypi release
# !pip install smogn
## install developer version
!pip install git+https://github.com/nickkunz/smogn.git
Explanation: SMOGN (0.1.0): Usage
Example 2: Intermediate
Installation
First, we install SMOGN from the Github repository. Alternatively, we could install from the official PyPI distribution. However, the developer version is utilized here for the latest release.
End of explanation
## load dependencies
import smogn
import pandas
import seaborn
Explanation: Dependencies
Next, we load the required dependencies. Here we import smogn to later apply Synthetic Minority Over-Sampling Technique for Regression with Gaussian Noise. In addition, we use pandas for data handling, and seaborn to visualize our results.
End of explanation
## load data
housing = pandas.read_csv(
## http://jse.amstat.org/v19n3/decock.pdf
'https://raw.githubusercontent.com/nickkunz/smogn/master/data/housing.csv'
)
Explanation: Data
After that, we load our data. In this example, we use the Ames Housing Dataset training split retrieved from Kaggle, originally compiled by Dean De Cock. In this case, we name our training set housing
End of explanation
## conduct smogn
housing_smogn = smogn.smoter(
## main arguments
data = housing, ## pandas dataframe
y = 'SalePrice', ## string ('header name')
k = 9, ## positive integer (k < n)
samp_method = 'extreme', ## string ('balance' or 'extreme')
## phi relevance arguments
rel_thres = 0.80, ## positive real number (0 < R < 1)
rel_method = 'auto', ## string ('auto' or 'manual')
rel_xtrm_type = 'high', ## string ('low' or 'both' or 'high')
rel_coef = 2.25 ## positive real number (0 < R)
)
Explanation: Synthetic Minority Over-Sampling Technique for Regression with Gaussian Noise
Here we cover the focus of this example. We call the smoter function from this package (smogn.smoter) and satisfy a typical number of arguments: data, y, k, samp_method, rel_thres, rel_method, rel_xtrm_type, rel_coef
The data argument takes a Pandas DataFrame, which contains the training set split. In this example, we input the previously loaded housing training set with the following input: data = housing
The y argument takes a string, which specifies a continuous response variable by header name. In this example, we input 'SalePrice' in the interest of predicting the sale price of homes in Ames, Iowa with the following input: y = 'SalePrice'
The k argument takes a positive integer less than 𝑛, where 𝑛 is the sample size. k specifies the number of neighbors to consider for interpolation used in over-sampling. In this example, we input 9 to consider 4 additional neighbors (default is 5) with the following input: k = 9
The samp_method argument takes a string, either 'balance' or 'extreme'. If 'balance' is specified, less over/under-sampling is conducted. If 'extreme' is specified, more over/under-sampling is conducted. In this case, we input 'extreme' (default is 'balance') to aggressively over/under-sample with the following input: samp_method = 'extreme'
The rel_thres argument takes a real number between 0 and 1. It specifies the threshold of rarity. The higher the threshold, the higher the over/under-sampling boundary. The inverse is also true, where the lower the threshold, the lower the over/under-sampling boundary. In this example, we increase the boundary to 0.80 (default is 0.50) with the following input: rel_thres = 0.80
The rel_method argument takes a string, either 'auto' or 'manual'. It specifies how relevant or rare "minority" values in y are determined. If 'auto' is specified, "minority" values are automatically determined by box plot extremes. If 'manual' is specified, "minority" values are determined by the user. In this example, we input 'auto' with the following input: rel_method = 'auto'
The rel_xtrm_type argument takes a string, either 'low' or 'both' or 'high'. It indicates which region of the response variable y should be considered rare or a "minority", when rel_method = 'auto'. In this example, we input 'high' (default is 'both') in the interest of over-sampling high "minority" values in y with the following input: rel_xtrm_type = 'high'
The rel_coef argument takes a positive real number. It specifies the box plot coefficient used to automatically determine extreme and therefore rare "minority" values in y, when rel_method = 'auto'. In this example, we input 2.25 (default is 1.50) to increase the box plot extremes with the following input: rel_coef = 2.25
End of explanation
## dimensions - original data
housing.shape
## dimensions - modified data
housing_smogn.shape
Explanation: Note:
In this example, the regions of interest within the response variable y are automatically determined by box plot extremes. The extreme values are considered rare "minority" values and are over-sampled. The values closer to the median are considered "majority" values and are under-sampled.
If there are no box plot extremes contained in the response variable y, the argument rel_method = 'manual' must be specified, and an input matrix must be placed into the argument rel_ctrl_pts_rg indicating the regions of rarity in y.
More information regarding the matrix input to the rel_ctrl_pts_rg argument and manual over-sampling can be found within the function's doc string, as well as in Example 3: Advanced.
It is also important to mention that by default, smogn.smoter will first automatically remove columns containing missing values and then remove rows, as it cannot input data containing missing values. This feature can be changed with the boolean arguments drop_na_col = False and drop_na_rows = False.
Results
After conducting Synthetic Minority Over-Sampling Technique for Regression with Gaussian Noise, we briefly examine the results.
We can see that the number of observations (rows) in the original training set increased from 1460 to 2643, while the number of features (columns) decreased from 81 to 62.
Recall that smogn.smoter automatically removes features containing missing values. In this case, 19 features contained missing values and were therefore omitted.
The increase in observations was a result of over-sampling. More detailed information in this regard can be found in the original paper cited in the References section.
End of explanation
## box plot stats - original data
smogn.box_plot_stats(housing['SalePrice'])['stats']
## box plot stats - modified data
smogn.box_plot_stats(housing_smogn['SalePrice'])['stats']
Explanation: Further examining the results, we can see that the distribution of the response variable has changed. By calling the box_plot_stats function from this package (smogn.box_plot_stats) we quickly verify.
Notice that the modified training set's box plot five number summary has changed, where the distribution of the response variable has skewed right when compared to the original training set.
End of explanation
## plot y distribution
seaborn.kdeplot(housing['SalePrice'], label = "Original")
seaborn.kdeplot(housing_smogn['SalePrice'], label = "Modified")
Explanation: Plotting the results of both the original and modified training sets, the skewed right distribution of the response variable in the modified training set becomes more evident.
In this example, SMOGN over-sampled observations whose 'SalePrice' was found to be extremely high according to the box plot (those considered "minority") and under-sampled observations that were closer to the median (those considered "majority").
This is perhaps most useful when most of the y values of interest in predicting are found at the extremes of the distribution within a given dataset.
End of explanation |
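The shift in the distribution can also be quantified rather than only eyeballed; for example, comparing the skewness of the response before and after (plain pandas, purely illustrative):
print('skewness - original:', housing['SalePrice'].skew())
print('skewness - modified:', housing_smogn['SalePrice'].skew())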
2,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DiscreteDP Example
Step1: Setup
Step2: Continuous-state benchmark
Let us compute the value function of the continuous-state version
as described in equations (2.22) and (2.23) in Section 2.3.
Step3: The optimal stopping boundary $\gamma$ for the continuous-state version, given by (2.23)
Step4: The value function for the continuous-state version, given by (2.24)
Step5: Solving the problem with DiscreteDP
Construct a DiscreteDP instance for the discrete-state version
Step6: Let us solve the decision problem by
(0) value iteration,
(1) value iteration with span-based termination
(equivalent to modified policy iteration with step $k = 0$),
(2) policy iteration,
(3) modified policy iteration.
Following Rust (1996), we set
Step7: The numbers of iterations
Step8: Policy iteration gives the optimal policy
Step9: Takes action 1 ("replace") if and only if $s \geq \bar{\gamma}$, where $\bar{\gamma}$ is equal to
Step10: Check that the other methods gave the correct answer
Step11: The deviations of the returned value function from the continuous-state benchmark
Step12: In the following we try to reproduce Table 14.1 in Rust (1996), p.660,
although the precise definitions and procedures there are not very clear.
The maximum absolute differences of $v$ from that by policy iteration
Step13: Compute $\lVert v - T(v)\rVert$
Step14: Next we compute $\overline{b} - \underline{b}$
for the three methods other than policy iteration, where
$I$ is the number of iterations required to fulfill the termination condition, and
$$
\begin{aligned}
\underline{b} &= \frac{\beta}{1-\beta} \min\left[T(v^{I-1}) - v^{I-1}\right], \\
\overline{b} &= \frac{\beta}{1-\beta} \max\left[T(v^{I-1}) - v^{I-1}\right].
\end{aligned}
$$
Step15: For policy iteration, while it does not seem really relevant,
we compute $\overline{b} - \underline{b}$ with the returned value of $v$
in place of $v^{I-1}$
Step16: Last, time each algorithm
Step17: Notes
It appears that our value iteration with span-based termination is different in some details
from the corresponding algorithm (successive approximation with error bounds) in Rust.
In returning the value function, our algorithm returns
$T(v^{I-1}) + (\overline{b} + \underline{b})/2$,
while Rust's seems to return $v^{I-1} + (\overline{b} + \underline{b})/2$.
In fact
Step18: $\lVert v - v_{\mathrm{pi}}\rVert$
Step19: $\lVert v - T(v)\rVert$
Step20: Compare the Table in Rust.
Convergence of trajectories
Let us plot the convergence of $v^i$ for the four algorithms;
see also Figure 14.2 in Rust.
Value iteration
Step21: Value iteration with span-based termination
Step22: Policy iteration
Step23: Modified policy iteration
Step24: Increasing the discount factor
Let us consider the case with a discount factor closer to $1$, $\beta = 0.9999$.
Step25: The numbers of iterations
Step26: Policy iteration gives the optimal policy
Step27: Takes action 1 ("replace") if and only if $s \geq \bar{\gamma}$, where $\bar{\gamma}$ is equal to
Step28: Check that the other methods gave the correct answer
Step29: $\lVert v - v_{\mathrm{pi}}\rVert$
Step30: $\lVert v - T(v)\rVert$
Step31: $\overline{b} - \underline{b}$
Step32: For policy iteration | Python Code:
%matplotlib inline
import numpy as np
import itertools
import scipy.optimize
import matplotlib.pyplot as plt
import pandas as pd
from quantecon.markov import DiscreteDP
# matplotlib settings
plt.rcParams['axes.autolimit_mode'] = 'round_numbers'
plt.rcParams['axes.xmargin'] = 0
plt.rcParams['axes.ymargin'] = 0
plt.rcParams['patch.force_edgecolor'] = True
from cycler import cycler
plt.rcParams['axes.prop_cycle'] = cycler(color='bgrcmyk')
Explanation: DiscreteDP Example: Automobile Replacement
Daisuke Oyama
Faculty of Economics, University of Tokyo
We study the finite-state version of the automobile replacement problem as considered in
Rust (1996, Section 4.2.2).
J. Rust, "Numerical Dynamic Programming in Economics",
*Handbook of Computational Economics*, Volume 1, 619-729, 1996.
End of explanation
lambd = 0.5 # Exponential distribution parameter
c = 200  # (Constant) marginal cost of maintenance
net_price = 10**5 # Replacement cost
n = 100 # Number of states; s = 0, ..., n-1: level of utilization of the asset
m = 2 # Number of actions; 0: keep, 1: replace
# Reward array
R = np.empty((n, m))
R[:, 0] = -c * np.arange(n)  # Costs for maintenance
R[:, 1] = -net_price - c * 0 # Costs for replacement
# Transition probability array
# For each state s, s' distributes over
# s, s+1, ..., min{s+supp_size-1, n-1} if a = 0
# 0, 1, ..., supp_size-1 if a = 1
# according to the (discretized and truncated) exponential distribution
# with parameter lambd
supp_size = 12
probs = np.empty(supp_size)
probs[0] = 1 - np.exp(-lambd * 0.5)
for j in range(1, supp_size-1):
probs[j] = np.exp(-lambd * (j - 0.5)) - np.exp(-lambd * (j + 0.5))
probs[supp_size-1] = 1 - np.sum(probs[:-1])
Q = np.zeros((n, m, n))
# a = 0
for i in range(n-supp_size):
Q[i, 0, i:i+supp_size] = probs
for k in range(supp_size):
Q[n-supp_size+k, 0, n-supp_size+k:] = probs[:supp_size-k]/probs[:supp_size-k].sum()
# a = 1
for i in range(n):
Q[i, 1, :supp_size] = probs
# Discount factor
beta = 0.95
Explanation: Setup
End of explanation
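A quick sanity check that every (state, action) row of the transition array is a proper probability distribution, which should hold by construction of probs and the renormalization near the upper boundary:
assert np.allclose(Q.sum(axis=-1), 1)
assert np.all(Q >= 0)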
def f(x, s):
return (c/(1-beta)) * \
((x-s) - (beta/(lambd*(1-beta))) * (1 - np.exp(-lambd*(1-beta)*(x-s))))
Explanation: Continuous-state benchmark
Let us compute the value function of the continuous-state version
as described in equations (2.22) and (2.23) in Section 2.3.
End of explanation
gamma = scipy.optimize.brentq(lambda x: f(x, 0) - net_price, 0, 100)
print(gamma)
Explanation: The optimal stopping boundary $\gamma$ for the continuous-state version, given by (2.23):
End of explanation
def value_func_cont_time(s):
return -c*gamma/(1-beta) + (s < gamma) * f(gamma, s)
v_cont = value_func_cont_time(np.arange(n))
Explanation: The value function for the continuous-state version, given by (2.24):
End of explanation
ddp = DiscreteDP(R, Q, beta)
Explanation: Solving the problem with DiscreteDP
Construct a DiscreteDP instance for the discrete-state version:
End of explanation
v_init = np.zeros(ddp.num_states)
epsilon = 1164
methods = ['vi', 'mpi', 'pi', 'mpi']
labels = ['Value iteration', 'Value iteration with span-based termination',
'Policy iteration', 'Modified policy iteration']
results = {}
for i in range(4):
k = 20 if labels[i] == 'Modified policy iteration' else 0
results[labels[i]] = \
ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k)
columns = [
'Iterations', 'Time (second)', r'$\lVert v - v_{\mathrm{pi}} \rVert$',
r'$\overline{b} - \underline{b}$', r'$\lVert v - T(v)\rVert$'
]
df = pd.DataFrame(index=labels, columns=columns)
Explanation: Let us solve the decision problem by
(0) value iteration,
(1) value iteration with span-based termination
(equivalent to modified policy iteration with step $k = 0$),
(2) policy iteration,
(3) modified policy iteration.
Following Rust (1996), we set:
$\varepsilon = 1164$ (for value iteration and modified policy iteration),
$v^0 \equiv 0$,
the number of iterations for iterative policy evaluation $k = 20$.
End of explanation
for label in labels:
print(results[label].num_iter, '\t' + '(' + label + ')')
df[columns[0]].loc[label] = results[label].num_iter
Explanation: The numbers of iterations:
End of explanation
print(results['Policy iteration'].sigma)
Explanation: Policy iteration gives the optimal policy:
End of explanation
(1-results['Policy iteration'].sigma).sum()
Explanation: Takes action 1 ("replace") if and only if $s \geq \bar{\gamma}$, where $\bar{\gamma}$ is equal to:
End of explanation
for result in results.values():
if result != results['Policy iteration']:
print(np.array_equal(result.sigma, results['Policy iteration'].sigma))
Explanation: Check that the other methods gave the correct answer:
End of explanation
diffs_cont = {}
for label in labels:
diffs_cont[label] = np.abs(results[label].v - v_cont).max()
print(diffs_cont[label], '\t' + '(' + label + ')')
label = 'Policy iteration'
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(-v_cont, label='Continuous-state')
ax.plot(-results[label].v, label=label)
ax.set_title('Comparison of discrete vs. continuous value functions')
ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
ax.set_xlabel('State')
ax.set_ylabel(r'Value $\times\ (-1)$')
plt.legend(loc=4)
plt.show()
Explanation: The deviations of the returned value function from the continuous-state benchmark:
End of explanation
for label in labels:
diff_pi = \
np.abs(results[label].v - results['Policy iteration'].v).max()
print(diff_pi, '\t' + '(' + label + ')')
df[columns[2]].loc[label] = diff_pi
Explanation: In the following we try to reproduce Table 14.1 in Rust (1996), p.660,
although the precise definitions and procedures there are not very clear.
The maximum absolute differences of $v$ from that by policy iteration:
End of explanation
for label in labels:
v = results[label].v
diff_max = \
np.abs(v - ddp.bellman_operator(v)).max()
print(diff_max, '\t' + '(' + label + ')')
df[columns[4]].loc[label] = diff_max
Explanation: Compute $\lVert v - T(v)\rVert$:
End of explanation
for i in range(4):
if labels[i] != 'Policy iteration':
k = 20 if labels[i] == 'Modified policy iteration' else 0
res = ddp.solve(method=methods[i], v_init=v_init, k=k,
max_iter=results[labels[i]].num_iter-1)
diff = ddp.bellman_operator(res.v) - res.v
diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta)
print(diff_span, '\t' + '(' + labels[i] + ')')
df[columns[3]].loc[labels[i]] = diff_span
Explanation: Next we compute $\overline{b} - \underline{b}$
for the three methods other than policy iteration, where
$I$ is the number of iterations required to fulfill the termination condition, and
$$
\begin{aligned}
\underline{b} &= \frac{\beta}{1-\beta} \min\left[T(v^{I-1}) - v^{I-1}\right], \\
\overline{b} &= \frac{\beta}{1-\beta} \max\left[T(v^{I-1}) - v^{I-1}\right].
\end{aligned}
$$
End of explanation
label = 'Policy iteration'
v = results[label].v
diff = ddp.bellman_operator(v) - v
diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta)
print(diff_span, '\t' + '(' + label + ')')
df[columns[3]].loc[label] = diff_span
Explanation: For policy iteration, while it does not seem really relevant,
we compute $\overline{b} - \underline{b}$ with the returned value of $v$
in place of $v^{I-1}$:
End of explanation
for i in range(4):
k = 20 if labels[i] == 'Modified policy iteration' else 0
print(labels[i])
t = %timeit -o ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k)
df[columns[1]].loc[labels[i]] = t.best
df
Explanation: Last, time each algorithm:
End of explanation
i = 1
k = 0
res = ddp.solve(method=methods[i], v_init=v_init, k=k,
max_iter=results[labels[i]].num_iter-1)
diff = ddp.bellman_operator(res.v) - res.v
v = res.v + (diff.max() + diff.min()) * ddp.beta / (1 - ddp.beta) / 2
Explanation: Notes
It appears that our value iteration with span-based termination is different in some details
from the corresponding algorithm (successive approximation with error bounds) in Rust.
In returning the value function, our algorithm returns
$T(v^{I-1}) + (\overline{b} + \underline{b})/2$,
while Rust's seems to return $v^{I-1} + (\overline{b} + \underline{b})/2$.
In fact:
End of explanation
np.abs(v - results['Policy iteration'].v).max()
Explanation: $\lVert v - v_{\mathrm{pi}}\rVert$:
End of explanation
np.abs(v - ddp.bellman_operator(v)).max()
Explanation: $\lVert v - T(v)\rVert$:
End of explanation
label = 'Value iteration'
iters = [2, 20, 40, 80]
v = np.zeros(ddp.num_states)
fig, ax = plt.subplots(figsize=(8,5))
for i in range(iters[-1]):
v = ddp.bellman_operator(v)
if i+1 in iters:
ax.plot(-v, label='Iteration {0}'.format(i+1))
ax.plot(-results['Policy iteration'].v, label='Fixed Point')
ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
ax.set_ylim(0, 2.4e5)
ax.set_yticks([0.4e5 * i for i in range(7)])
ax.set_title(label)
ax.set_xlabel('State')
ax.set_ylabel(r'Value $\times\ (-1)$')
plt.legend(loc=(0.7, 0.2))
plt.show()
Explanation: Compare the Table in Rust.
Convergence of trajectories
Let us plot the convergence of $v^i$ for the four algorithms;
see also Figure 14.2 in Rust.
Value iteration
End of explanation
label = 'Value iteration with span-based termination'
iters = [1, 10, 15, 20]
v = np.zeros(ddp.num_states)
fig, ax = plt.subplots(figsize=(8,5))
for i in range(iters[-1]):
u = ddp.bellman_operator(v)
if i+1 in iters:
diff = u - v
w = u + ((diff.max() + diff.min()) / 2) * ddp.beta / (1 - ddp.beta)
ax.plot(-w, label='Iteration {0}'.format(i+1))
v = u
ax.plot(-results['Policy iteration'].v, label='Fixed Point')
ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
ax.set_ylim(1.0e5, 2.4e5)
ax.set_yticks([1.0e5+0.2e5 * i for i in range(8)])
ax.set_title(label)
ax.set_xlabel('State')
ax.set_ylabel(r'Value $\times\ (-1)$')
plt.legend(loc=(0.7, 0.2))
plt.show()
Explanation: Value iteration with span-based termination
End of explanation
label = 'Policy iteration'
iters = [1, 2, 3]
v_init = np.zeros(ddp.num_states)
fig, ax = plt.subplots(figsize=(8,5))
sigma = ddp.compute_greedy(v_init)
for i in range(iters[-1]):
# Policy evaluation
v_sigma = ddp.evaluate_policy(sigma)
if i+1 in iters:
ax.plot(-v_sigma, label='Iteration {0}'.format(i+1))
# Policy improvement
new_sigma = ddp.compute_greedy(v_sigma)
sigma = new_sigma
ax.plot(-results['Policy iteration'].v, label='Fixed Point')
ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
ax.set_ylim(1e5, 4.2e5)
ax.set_yticks([1e5 + 0.4e5 * i for i in range(9)])
ax.set_title(label)
ax.set_xlabel('State')
ax.set_ylabel(r'Value $\times\ (-1)$')
plt.legend(loc=4)
plt.show()
Explanation: Policy iteration
End of explanation
label = 'Modified policy iteration'
iters = [1, 2, 3, 4]
v = np.zeros(ddp.num_states)
k = 20 #- 1
fig, ax = plt.subplots(figsize=(8,5))
for i in range(iters[-1]):
# Policy improvement
sigma = ddp.compute_greedy(v)
u = ddp.bellman_operator(v)
if i == results[label].num_iter-1:
diff = u - v
break
# Partial policy evaluation with k=20 iterations
for j in range(k):
u = ddp.T_sigma(sigma)(u)
v = u
if i+1 in iters:
ax.plot(-v, label='Iteration {0}'.format(i+1))
ax.plot(-results['Policy iteration'].v, label='Fixed Point')
ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
ax.set_ylim(0, 2.8e5)
ax.set_yticks([0.4e5 * i for i in range(8)])
ax.set_title(label)
ax.set_xlabel('State')
ax.set_ylabel(r'Value $\times\ (-1)$')
plt.legend(loc=4)
plt.show()
Explanation: Modified policy iteration
End of explanation
ddp.beta = 0.9999
v_init = np.zeros(ddp.num_states)
epsilon = 1164
ddp.max_iter = 10**5 * 2
results_9999 = {}
for i in range(4):
k = 20 if labels[i] == 'Modified policy iteration' else 0
results_9999[labels[i]] = \
ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k)
df_9999 = pd.DataFrame(index=labels, columns=columns)
Explanation: Increasing the discount factor
Let us consider the case with a discount factor closer to $1$, $\beta = 0.9999$.
End of explanation
for label in labels:
print(results_9999[label].num_iter, '\t' + '(' + label + ')')
df_9999[columns[0]].loc[label] = results_9999[label].num_iter
Explanation: The numbers of iterations:
End of explanation
print(results_9999['Policy iteration'].sigma)
Explanation: Policy iteration gives the optimal policy:
End of explanation
(1-results_9999['Policy iteration'].sigma).sum()
Explanation: Takes action 1 ("replace") if and only if $s \geq \bar{\gamma}$, where $\bar{\gamma}$ is equal to:
End of explanation
for result in results_9999.values():
if result != results_9999['Policy iteration']:
print(np.array_equal(result.sigma, results_9999['Policy iteration'].sigma))
Explanation: Check that the other methods gave the correct answer:
End of explanation
for label in labels:
diff_pi = \
np.abs(results_9999[label].v - results_9999['Policy iteration'].v).max()
print(diff_pi, '\t' + '(' + label + ')')
df_9999[columns[2]].loc[label] = diff_pi
Explanation: $\lVert v - v_{\mathrm{pi}}\rVert$:
End of explanation
for label in labels:
v = results_9999[label].v
diff_max = \
np.abs(v - ddp.bellman_operator(v)).max()
print(diff_max, '\t' + '(' + label + ')')
df_9999[columns[4]].loc[label] = diff_max
Explanation: $\lVert v - T(v)\rVert$:
End of explanation
for i in range(4):
if labels[i] != 'Policy iteration':
k = 20 if labels[i] == 'Modified policy iteration' else 0
res = ddp.solve(method=methods[i], v_init=v_init, k=k,
max_iter=results_9999[labels[i]].num_iter-1)
diff = ddp.bellman_operator(res.v) - res.v
diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta)
print(diff_span, '\t' + '(' + labels[i] + ')')
df_9999[columns[3]].loc[labels[i]] = diff_span
Explanation: $\overline{b} - \underline{b}$:
End of explanation
label = 'Policy iteration'
v = results_9999[label].v
diff = ddp.bellman_operator(v) - v
diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta)
print(diff_span, '\t' + '(' + label + ')')
df_9999[columns[3]].loc[label] = diff_span
for i in range(4):
k = 20 if labels[i] == 'Modified policy iteration' else 0
print(labels[i])
t = %timeit -o ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k)
df_9999[columns[1]].loc[labels[i]] = t.best
df_9999
df_time = pd.DataFrame(index=labels)
df_time[r'$\beta = 0.95$'] = df[columns[1]]
df_time[r'$\beta = 0.9999$'] = df_9999[columns[1]]
second_max = df_time[r'$\beta = 0.9999$'][1:].max()
for xlim in [None, (0, second_max*1.2)]:
ax = df_time.loc[reversed(labels)][df_time.columns[::-1]].plot(
kind='barh', legend='reverse', xlim=xlim, figsize=(8,5)
)
ax.set_xlabel('Time (second)')
import platform
print(platform.platform())
import sys
print(sys.version)
print(np.__version__)
Explanation: For policy iteration:
End of explanation |
2,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 20
Step1: L is the LENGTH of our box. You can set this to any value you choose however, appropriate scaling of the problem would admit 1 as the length of choice.
nx is the number of grid points we wish to have in our solution. If we have fewer we sacrifice accuracy, if we have more, the computational time increases. You should always check that your solution does not depend on the number of grid points and the grid spacing!
dx is the spacing between grid points. Similar comments as above in #2
timeStepDuration is the amount of 'time' at each step of the solution. Accuracy and stability of the solution depend on choices for the timeStepDuration and the grid point spacing and the diffusion coefficient. We will not discuss stability any further, just know that it is something that needs to be considered.
steps is the number of timesteps in dt you wish to run. You can change this value to observe the solution in different stages.
Initializing the Simulation Domain and Parameters
Step2: Here we set the diffusion coefficient.
Step3: Note
Step4: We assign to 'c' objects that are 'CellVariables'. This is a special type of variable used by FiPy to hold the values for the concentration in our solution. We also create a viewer here so that we can inspect the values of 'c'.
Step5: Note
Step6: Setting Initial Conditions
Step7: Note
Step8: This line defines the diffusion equation
Step9: These lines print out a text file with the final data
Step10: Concentration
Dependent Diffusion
Standard Imports
Step11: The parameters of our system.
L is the LENGTH of our box. You can set this to any value you choose
however, 1 is the easiest.
nx is the number of grid points we wish to have in our solution. If we have fewer we sacrifice accuracy, if we have more, the computational time increases and we are subject to roundoff error. You should always check that your solution does not depend on the number of grid points and the grid spacing!
dx is the spacing between grid points. Similar comments as above.
timeStepDuration is the amount of 'time' at each step of the solution.
Accuracy and stability of the solution depend on choices for the timeStepDuration and the grid point spacing and the diffusion coefficient. We will not discuss stability any further, just know that it is something that needs to be considered.
Step12: Note
Step13: Note
Step14: We assign to 'c' objects that are 'CellVariables'. This is a special type of variable used by FiPy to hold the values for the concentration in our solution. c1 for eqn1 and c2 for eqn2
Step15: Note
Step16: These lines set diffusant in the initial condition. They are set to the same values for easy comparison. Feel free to change these values.
Step17: Boundary conditions can be either fixed flux or fixed value. Here, fixed value is used for simple comparison between the diffusion profiles.
Step18: Note
Step19: Note
Step20: The following is an if loop that waits for user input before executing. Iterates for number of steps stated earlier for each equation. | Python Code:
%matplotlib osx
from fipy import *
%matplotlib
from fipy import *
Explanation: Lecture 20: Introduction to FiPy - Getting to Know the Diffusion Equation
Objectives:
Understand how to create the diffusion equation in FiPy.
Be able to change variables in the equation and observe the effects in the diffusion equation solution.
Understand how to save the results to a data file.
First thing we'll do is pick a backend for interacting with matplotlib (the osx backend here; qt is another common choice). There is documentation in matplotlib about backends. One or more of the available backends may be installed with your python distribution.
Setting up Plotting
End of explanation
L = 1.
nx = 200
dx = L / nx
timeStepDuration = 0.001
steps = 100
Explanation: L is the LENGTH of our box. You can set this to any value you choose; however, appropriate scaling of the problem would admit 1 as the length of choice.
nx is the number of grid points we wish to have in our solution. If we have fewer we sacrifice accuracy, if we have more, the computational time increases. You should always check that your solution does not depend on the number of grid points and the grid spacing!
dx is the spacing between grid points. Similar comments as above in #2
timeStepDuration is the amount of 'time' at each step of the solution. Accuracy and stability of the solution depend on choices for the timeStepDuration and the grid point spacing and the diffusion coefficient. We will not discuss stability any further, just know that it is something that needs to be considered.
steps is the number of timesteps in dt you wish to run. You can change this value to observe the solution in different stages.
Initializing the Simulation Domain and Parameters
End of explanation
D11 = 0.5
Explanation: Here we set the diffusion coefficient.
End of explanation
mesh = Grid1D(dx = dx, nx = nx)
Explanation: Note: The 'mesh' command creates the mesh (gridpoints) on which we will solve the equation. This is specific to FiPy.
At this point, if you are in the IPython notebook I would suggest you try the following in the cell below:
Put your cursor to the right of the "(" and hit TAB. You will get the docstring for the Grid1D function.
Do this again with your cursor to the right of the "d" in Grid1D().
And again after the "G".
This is a powerful way to explore the available functions.
End of explanation
c = CellVariable(name = "c", mesh = mesh)
viewer = MatplotlibViewer(vars=(c,),datamin=-0.1, datamax=1.1, legend=None)
Explanation: We assign to 'c' objects that are 'CellVariables'. This is a special type of variable used by FiPy to hold the values for the concentration in our solution. We also create a viewer here so that we can inspect the values of 'c'.
End of explanation
x = mesh.cellCenters
x
x
Explanation: Note: This command sets 'x' to contain a list of numbers that define the x position of the grid-points.
End of explanation
c.setValue(0.0)
viewer.plot()
c.setValue(0.2, where=x < L/2.)
c.setValue(0.8, where=x > L/2.)
viewer.plot()
Explanation: Setting Initial Conditions
End of explanation
boundaryConditions=(FixedValue(mesh.facesLeft,0.2),
FixedValue(mesh.facesRight,0.8))
Explanation: Note: Boundary conditions come in two types. Fixed flux and fixed value. The syntax is:
FixedValue(mesh.facesLeft, VALUE)
FixedFlux(mesh.facesLeft, FLUX)
where VALUE and FLUX are floats. In the cell above we use fixed-value boundaries on both faces.
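As an illustration (not used in this part of the lecture), a zero-flux, insulating setup would simply swap the fixed values for fixed fluxes:
boundaryConditions = (FixedFlux(mesh.facesLeft, 0.0),
                      FixedFlux(mesh.facesRight, 0.0))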
End of explanation
eqn1 = TransientTerm() == ImplicitDiffusionTerm(D11)
for step in range(1000):
eqn1.solve(c, boundaryConditions = boundaryConditions, dt = timeStepDuration)
viewer.plot()
Explanation: This line defines the diffusion equation:
End of explanation
from fipy.viewers.tsvViewer import TSVViewer
TSVViewer(vars=(c)).plot(filename="output.txt")
!head output.txt
Explanation: These lines print out a text file with the final data
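If you later want to read the saved profile back in (for re-plotting, say), something along these lines should work; the exact column layout depends on the TSV header, so check the first line of output.txt:
import numpy as np
saved = np.loadtxt("output.txt", skiprows=1)
x_saved, c_saved = saved[:, 0], saved[:, 1]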
End of explanation
%matplotlib osx
from fipy import *
Explanation: Concentration
Dependent Diffusion
Standard Imports
End of explanation
L = 1.
nx = 50
dx = L / nx
timeStepDuration = 0.001
steps = 100
Explanation: The parameters of our system.
L is the LENGTH of our box. You can set this to any value you choose
however, 1 is the easiest.
nx is the number of grid points we wish to have in our solution. If we have fewer we sacrifice accuracy, if we have more, the computational time increases and we are subject to roundoff error. You should always check that your solution does not depend on the number of grid points and the grid spacing!
dx is the spacing between grid points. Similar comments as above.
timeStepDuration is the amount of 'time' at each step of the solution.
Accuracy and stability of the solution depend on choices for the timeStepDuration and the grid point spacing and the diffusion coefficient. We will not discuss stability any further, just know that it is something that needs to be considered.
End of explanation
D1 = 3.0
Explanation: Note: In the first equation, the diffusion coefficient is constant, concentration independent.
End of explanation
mesh = Grid1D(dx = dx, nx = nx)
Explanation: Note: You have seen all of the following code before. This time we are solving two simultaneous equations, eqn1 and eqn2.
Note: The 'mesh' command creates the mesh (gridpoints) on which we will solve the equation. This is specific to FiPy.
End of explanation
c1 = CellVariable(
name = "c1",
mesh = mesh,
hasOld = True)
c2 = CellVariable(
name = "c2",
mesh = mesh,
hasOld = True)
Explanation: We assign to 'c' objects that are 'CellVariables'. This is a special type of variable used by FiPy to hold the values for the concentration in our solution. c1 for eqn1 and c2 for eqn2
End of explanation
x = mesh.cellCenters
Explanation: Note: This command sets 'x' to contain a list of numbers that define the x position of the grid-points.
End of explanation
c1.setValue(0.8)
c1.setValue(0.2, where=x > L/3.)
c2.setValue(0.8)
c2.setValue(0.2, where=x > L/3.)
viewer = Matplotlib1DViewer(vars = (c1,c2), limits = {'datamin': 0., 'datamax': 1.})
viewer.plot()
Explanation: These lines set the initial diffusant concentration profiles. Both are set to the same values for easy comparison. Feel free to change these values.
End of explanation
boundaryConditions=(FixedValue(mesh.facesLeft,0.8), FixedValue(mesh.facesRight,0.2))
# alternative: zero-flux boundaries (uncomment to use instead of the fixed values above)
# boundaryConditions=(FixedFlux(mesh.facesLeft,0.0), FixedFlux(mesh.facesRight,0.0))
Explanation: Boundary conditions can be either fixed flux or fixed value. Here, fixed value is used for simple comparison between the diffusion profiles.
End of explanation
D22_0 = 3.0
D22_1 = 0.5
D2 = (D22_1 - D22_0)*c2 + D22_0
Explanation: Note: In the second equation, the diffusion coefficient is non-constant and is a function of concentration in the system. So we use D22_0 and D22_1 as the end points of our function. The function, given by "D" is simply a linear interpolation of the two D values. You are, of course, free to try other functions.
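To see what this interpolation looks like on its own, here is a small sketch evaluating the same linear function with plain numpy (independent of FiPy):
import numpy as np
c_vals = np.linspace(0.0, 1.0, 11)
D_vals = (D22_1 - D22_0) * c_vals + D22_0   # D goes from D22_0 at c=0 to D22_1 at c=1
print(list(zip(c_vals.round(1), D_vals.round(2))))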
End of explanation
eqn1 = TransientTerm() == ImplicitDiffusionTerm(D1)
eqn2 = TransientTerm() == ImplicitDiffusionTerm(D2)
Explanation: Note: These are the two diffusion equations. The first equation is as in previous code, using a concentration independent D1. The second equation uses a non-constant D described above.
End of explanation
for step in range(10):
c1.updateOld()
c2.updateOld()
eqn1.solve(c1, boundaryConditions = boundaryConditions, dt = timeStepDuration)
eqn2.solve(c2, boundaryConditions = boundaryConditions, dt = timeStepDuration)
viewer.plot()
Explanation: The following for loop advances both equations in time, solving each one at every step and updating the plot.
End of explanation |
2,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building Histograms with Bayesian Priors
An Introduction to Bayesian Blocks
========
Version 0.1
By LM Walkowicz 2019 June 14
This notebook makes heavy use of Bayesian block implementations by Jeff Scargle, Jake VanderPlas, Jan Florjanczyk, and the Astropy team.
Before you begin, please download the dataset for this notebook.
Problem 1) Histograms Lie!
One of the most common and useful tools for data visualization can be incredibly misleading. Let's revisit how.
Problem 1a
First, let's make some histograms! Below, I provide some data; please make a histogram of it.
Step1: Hey, nice histogram!
But how do we know we have visualized all the relevant structure in our data?
Play around with the binning and consider
Step2: Problem 1b
What are some issues with histograms?
Take a few min to discuss this with your partner
Solution 1b
write your solution here
Problem 1c
We have previously covered a few ways to make histograms better. What are some ways you could improve your histogram?
Take a few min to discuss this with your partner
Problem 1d
There are lots of ways to improve the previous histogram-- let's implement a KDE representation instead! As you have seen in previous sessions, we will borrow a bit of code from Jake VanderPlas to estimate the KDE.
As a reminder, you have a number of choices of kernel in your KDE-- some we have used in the past
Step4: Problem 1d
Which parameters most affected the shape of the final distribution?
What are some possible issues with using a KDE representation of the data?
Discuss with your partner
Solution 1d
Write your response here
Problem 2) Histograms Episode IV
Step5: Problem 2a
Let's visualize our data again, but this time we will use Bayesian Blocks.
Plot a standard histogram (as above), but now plot the Bayesian Blocks representation of the distribution over it.
Step6: Problem 2b
How is the Bayesian Blocks representation different or similar?
How might your choice of representation affect your scientific conclusions about your data?
Take a few min to discuss this with your partner
If you are using histograms for analysis, you might infer physical meaning from the presence or absence of features in these distributions. As it happens, histograms of time-tagged event data are often used to characterize physical events in time domain astronomy, for example gamma ray bursts or stellar flares.
Problem 3) Bayesian Blocks in the wild
Now we'll apply Bayesian Blocks to some real astronomical data, and explore how our visualization choices may affect our scientific conclusions.
First, let's get some data!
All data from NASA missions is hosted on the Mikulski Archive for Space Telescopes (aka MAST). As an aside, the M in MAST used to stand for "Multimission", but was changed to honor Sen. Barbara Mikulski (D-MD) for her tireless support of science.
Some MAST data (mostly the original data products) can be directly accessed using astroquery (there's an extensive guide to interacting with MAST via astroquery here
Step7: Problem 3c
Let's look at the distribution of the small planet radii in this table, which are given in units of Earth radii. Select the planets whose radii are Neptune-sized or smaller.
Select the planet radii for all planets smaller than Neptune in the table, and visualize the distribution of planet radii using a standard histogram.
Step8: Problem 3d
What features do you see in the histogram of planet radii? Which of these features are important?
Discuss with your partner
Solution 3d
Write your answer here
Problem 3e
Now let's try visualizing these data using Bayesian Blocks. Please recreate the histogram you plotted above, and then plot the Bayesian Blocks version over it.
Step9: Problem 3f
What features do you see in the histogram of planet radii? Which of these features are important?
Discuss with your partner.
Hint
Step10: Problem 3g
Please repeat the previous problem, but this time use the astropy implementation of Bayesian Blocks.
Step11: Putting these results in context
Both standard histograms and KDEs can be useful for quickly visualizing data, and in some cases, getting an intuition for the underlying PDF of your data.
However, keep in mind that they both involve making parameter choices that are largely not motivated in any quantitative way. These choices can create wildly misleading representations of your data.
In particular, your choices may lead you to make a physical interpretation that may or may not be correct (in our example, bear in mind that the observed distribution of exoplanetary radii informs models of planet formation).
Bayesian Blocks is more than just a variable-width histogram
While KDEs often do a better job of visualizing data than standard histograms do, they also create a loss of information. Philosophically speaking, what Bayesian Blocks do is posit that the "change points", also known as the bin edges, contain information that is interesting. When you apply a KDE, you are smoothing your data by creating an approximation, and that can mean you are losing potential insights by removing information.
While Bayesian Blocks are useful as a replacement for histograms in general, their ability to identify change points makes them especially useful for time series analysis.
Problem 4) Bayesian Blocks for Time Series Analysis
While Bayesian Blocks can be very useful as a simple replacement for histograms, one of its great strengths is in finding "change points" in time series data. Finding these change points can be useful for discovering interesting events in time series data you already have, and it can be used in real-time to detect changes that might trigger another action (for example, follow up observations for LSST).
Let's take a look at a few examples of using Bayesian Blocks in the time series context.
First and foremost, it's important to understand the difference between various kinds of time series data.
Event data come from photon counting instruments. In these data, the time series typically consists of photon arrival times, usually in a particular range of energies that the instrument is sensitive to. Event data are univariate, in that the time series is "how many photons at a given time", or "how many photons in a given chunk of time".
Point measurements are measurements of a (typically) continuous source at a given moment in time, often with some uncertainty associated with the measurement. These data are multivariate, as your time series relates time, your measurement (e.g. flux, magnitude, etc) and its associated uncertainty to one another.
Problem 4a
Let's look at some event data from BATSE, a high energy astrophysics experiment that flew on NASA's Compton Gamma-Ray Observatory. BATSE primarily studied gamma ray bursts (GRBs), capturing its detections in four energy channels
Step12: Problem 4b
When you reach this point, you and your partner should pick a number between 1 and 4; your number is the channel whose data you will work with.
Using the data for your channel, please visualize the photon events in both a standard histogram and using Bayesian Blocks.
Step13: Problem 4c
Let's take a moment to reflect on the differences between these two representations of our data.
Please discuss with your partner
Step14: Cool, we have loaded in the FITS file. Let's look at what's in it
Step15: We want the light curve, so let's check out what's in that part of the file!
Step16: Problem 5b
Use a scatter plot to visualize the Kepler lightcurve. I strongly suggest you try displaying it at different scales by zooming in (or playing with the axis limits), so that you can get a better sense of the shape of the lightcurve.
Step17: Problem 5c
These data consist of a variable background, with occasional bright points caused by stellar flares.
Brainstorm possible approaches to find the flare events in this data. Write down your ideas, and discuss their potential advantages, disadvantages, and any pitfalls that you think might arise.
Discuss with your partner
Solution 5c
Write your notes here
There are lots of possible ways to approach this problem. In the literature, a very common traditional approach has been to fit the background variability while ignoring the outliers, then to subtract the background fit, and flag any point beyond some threshold value as belonging to a flare. More sophisticated approaches also exist, but they are often quite time consuming (and in many cases, detailed fits require a good starting estimate for the locations of flare events).
Recall, however, that Bayesian Blocks is particularly effective at identifying change points in our data. Let's see if it can help us in this case!
Problem 5d
Use Bayesian Blocks to visualize the Kepler lightcurve. Note that you are now using data that consists of point measurements, rather than event data (as for the BATSE example).
Step18: Concluding Remarks
As you can see, using Bayesian Blocks allowed us to represent the data, including both quiescent variability and flares, without having to smooth, clip, or otherwise alter the data.
One potential drawback here is that the change points don't identify the flares themselves, or at least don't identify them as being different from the background variability-- the algorithm identifies change points, but does not know anything about what change points might be interesting to you in particular.
Another potential drawback is that it is possible that the Bayesian Block representation may not catch all events, or at least may not, on its own, provide an unambiguous sign that a subtle signal of interest is in the data.
In these cases, it is sometimes instructive to use a hybrid approach, where one combines the bins determined from a traditional histogram with the Bayesian Blocks change points. Alternatively, if one has a good model for the background, one can compare the blocks representation of a background-only simulated data set with one containing both background and model signal.
Further interesting examples (in the area of high energy physics and astrophysics) are provided by this paper | Python Code:
# execute this cell
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
np.random.seed(0)
x = np.concatenate([stats.cauchy(-5, 1.8).rvs(500),
stats.cauchy(-4, 0.8).rvs(2000),
stats.cauchy(-1, 0.3).rvs(500),
stats.cauchy(2, 0.8).rvs(1000),
stats.cauchy(4, 1.5).rvs(500)])
# truncate values to a reasonable range
x = x[(x > -15) & (x < 15)]
# complete
# plt.hist(
Explanation: Building Histograms with Bayesian Priors
An Introduction to Bayesian Blocks
========
Version 0.1
By LM Walkowicz 2019 June 14
This notebook makes heavy use of Bayesian block implementations by Jeff Scargle, Jake VanderPlas, Jan Florjanczyk, and the Astropy team.
Before you begin, please download the dataset for this notebook.
Problem 1) Histograms Lie!
One of the most common and useful tools for data visualization can be incredibly misleading. Let's revisit how.
Problem 1a
First, let's make some histograms! Below, I provide some data; please make a histogram of it.
End of explanation
# complete
# plt.hist
Explanation: Hey, nice histogram!
But how do we know we have visualized all the relevant structure in our data?
Play around with the binning and consider:
What features do you see in this data? Which of these features are important?
End of explanation
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
# complete
# plt.hist(
# grid =
# PDF =
# plt.plot(
Explanation: Problem 1b
What are some issues with histograms?
Take a few min to discuss this with your partner
Solution 1b
write your solution here
Problem 1c
We have previously covered a few ways to make histograms better. What are some ways you could improve your histogram?
Take a few min to discuss this with your partner
Problem 1d
There are lots of ways to improve the previous histogram-- let's implement a KDE representation instead! As you have seen in previous sessions, we will borrow a bit of code from Jake VanderPlas to estimate the KDE.
As a reminder, you have a number of choices of kernel in your KDE-- some we have used in the past: tophat, Epanechnikov, Gaussian. Please plot your original histogram, and then overplot a few example KDEs on top of it.
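For instance, using the kde_sklearn helper above, one possible (purely illustrative) comparison might look like this — the grid range and bandwidth are just starting guesses to tweak:
grid = np.linspace(-15, 15, 1000)
for kern in ['gaussian', 'tophat', 'epanechnikov']:
    pdf = kde_sklearn(x, grid, bandwidth=0.5, kernel=kern)
    plt.plot(grid, pdf, label=kern)
plt.legend()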
End of explanation
# execute this cell
def bayesian_blocks(t):
    """Bayesian Blocks Implementation
By Jake Vanderplas. License: BSD
Based on algorithm outlined in http://adsabs.harvard.edu/abs/2012arXiv1207.5578S
Parameters
----------
t : ndarray, length N
data to be histogrammed
Returns
-------
bins : ndarray
array containing the (N+1) bin edges
Notes
-----
This is an incomplete implementation: it may fail for some
datasets. Alternate fitness functions and prior forms can
    be found in the paper listed above.
    """
# copy and sort the array
t = np.sort(t)
N = t.size
# create length-(N + 1) array of cell edges
edges = np.concatenate([t[:1],
0.5 * (t[1:] + t[:-1]),
t[-1:]])
block_length = t[-1] - edges
# arrays needed for the iteration
nn_vec = np.ones(N)
best = np.zeros(N, dtype=float)
last = np.zeros(N, dtype=int)
#-----------------------------------------------------------------
# Start with first data cell; add one cell at each iteration
#-----------------------------------------------------------------
for K in range(N):
# Compute the width and count of the final bin for all possible
# locations of the K^th changepoint
width = block_length[:K + 1] - block_length[K + 1]
count_vec = np.cumsum(nn_vec[:K + 1][::-1])[::-1]
# evaluate fitness function for these possibilities
fit_vec = count_vec * (np.log(count_vec) - np.log(width))
fit_vec -= 4 # 4 comes from the prior on the number of changepoints
fit_vec[1:] += best[:K]
# find the max of the fitness: this is the K^th changepoint
i_max = np.argmax(fit_vec)
last[K] = i_max
best[K] = fit_vec[i_max]
#-----------------------------------------------------------------
# Recover changepoints by iteratively peeling off the last block
#-----------------------------------------------------------------
change_points = np.zeros(N, dtype=int)
i_cp = N
ind = N
while True:
i_cp -= 1
change_points[i_cp] = ind
if ind == 0:
break
ind = last[ind - 1]
change_points = change_points[i_cp:]
return edges[change_points]
Explanation: Problem 1d
Which parameters most affected the shape of the final distribution?
What are some possible issues with using a KDE representation of the data?
Discuss with your partner
Solution 1d
Write your response here
Problem 2) Histograms Episode IV: A New Hope
How can we create representations of our data that are robust against the known issues with histograms and KDEs?
Introducing: Bayesian Blocks
We want to represent our data in the most general possible way, a method that
* avoids assumptions about smoothness or shape of the signal (which might place limitations on scales and resolution)
* is nonparametric (doesn't fit some model)
* finds and characterizes local structure in our time series (in contrast to periodicities)
* handles arbitrary sampling (i.e. doesn't require evenly spaced samples, doesn't care about sparse samples)
* is as hands-off as possible-- user interventions should be minimal or non-existent
* is applicable to multivariate data
* can both analyze data after they are collected, and in real time
Bayesian Blocks works by creating a super-simple representation of the data, essentially a piecewise fit that segments our time series.
In the implementations we will use today, the model is a piecewise linear fit in time across each individual bin, or "block".
That is, within each block the signal is modeled as linear in time:
$$x(t) = \lambda(1 + a(t - t_{fid}))$$
where $\lambda$ is the signal strength at the fiducial time $t_{fid}$, and the coefficient $a$ determines the rate of change over the block.
Scargle et al. (2012) point out that using a linear fit is good because it makes calculating the fit really easy, but you could potentially use something more complicated (they provide some details for using an exponential model, $x(t) = \lambda e^{a(t - t_{fid})}$, in their Appendix C).
The Fitness Function
The insight in Bayesian Blocks is that you can use a Bayesian likelihood framework to compute a "fitness function" that depends only on the number and size of the blocks.
In every block, you are trying to maximize some goodness-of-fit measure for data in that individual block. This fit depends only on the data contained in its block, and is independent of all other data.
The optimal segmentation of the time series, then, is the segmentation that maximizes fitness-- the total goodness-of-fit over all the blocks (so for example, you could use the sum over all blocks of whatever your quantitative expression is for the goodness of fit in individual blocks).
The entire time series is then represented by a series of segments (blocks) characterized by very few parameters:
$N_{cp}$: the number of change-points
$t_{k}^{cp}$: the change-point starting block k
$X_k$: the signal amplitude in block k
for k = 1, 2, ... $N_{cp}$
When using Bayesian Blocks (particularly on time series data) we often speak of "change points" rather than segmentation, as the block edges essentially tell us the discrete times at which a signal’s statistical properties change discontinuously, though the segments themselves are constant between these points.
You, looking at KDEs right now:
In some cases (such as some of the examples below), the Bayesian Block representation may look kind of clunky. HOWEVER: remember that histograms and KDEs may sometimes look nicer, but can be really misleading! If you want to derive physical insight from these representations of your data, Bayesian Blocks can provide a means of deriving physically interesting quantities (for example, better estimates of event locations, lags, amplitudes, widths, rise and decay times, etc).
On top of that, you can do all of the above without losing or hiding information via smoothing or other model assumptions.
HOW MANY BLOCKS, THO?!
We began this lesson by bemoaning that histograms force us to choose a number of bins, and that KDEs require us to choose a bandwidth. Furthermore, one of the requirements we had for a better way forward was that the user interaction be minimal or non-existent. What to do?
Bayesian Blocks works by defining a prior distribution for the number of blocks, such that a single parameter controls the steepness of this prior (in other words, the relative probability for smaller or larger numbers of blocks.
Once this prior is defined, the size, number, and locations of the blocks are determined solely and uniquely by the data.
So, what does the prior look like?
In most cases, $N_{blocks}$ << N (you are, after all, still binning your data-- if $N_{blocks}$ was close to N, you wouldn't really be doing much). Scargle et al. (2012) adopts a geometric prior (Coram 2002), which assigns smaller probability to a large number of blocks:
$$P(N_{blocks}) = P_{0}\,\gamma^{N_{blocks}}$$
for $0 ≤ N_{blocks} ≤ N$, and zero otherwise since $N_{blocks}$ cannot be negative or larger than the number of data cells.
Substituting in the normalization constant $P_{0}$ gives
$$P(N_{blocks}) = \frac{1−\gamma}{1-\gamma^{N+1}}\gamma^{N_{blocks}}$$
<sub>Essentially, this prior says that finding k + 1 blocks is less likely than finding k blocks by the constant factor $\gamma$. Scargle (2012) also provides a nice intuitive way of thinking about $\gamma$: $\gamma$ is adjusting the amount of structure in the resulting representation.</sub>
The Magic Part
At this point, you may be wondering about how the algorithm is capable of finding an optimal number of blocks. As Scargle et al (2012) admits
the number of possible partitions (i.e. the number of ways N cells can be arranged in blocks) is $2^N$. This number is exponentially large, rendering an explicit exhaustive search of partition space utterly impossible for all but very small N.
In his blog post on Bayesian Blocks, Jake VdP compares the algorithm's use of dynamic programming to mathematical induction. For example: how could you prove that
$$1 + 2 + \cdots + n = \frac{n(n+1)}{2}$$
is true for all positive integers $n$? An inductive proof of this formula proceeds in the following fashion:
Base Case: We can easily show that the formula holds for $n = 1$.
Inductive Step: For some value $k$, assume that $1 + 2 + \cdots + k = \frac{k(k+1)}{2}$ holds.
Adding $(k + 1)$ to each side and rearranging the result yields
$$1 + 2 + \cdots + k + (k + 1) = \frac{(k + 1)(k + 2)}{2}$$
Looking closely at this, we see that we have shown the following: if our formula is true for $k$, then it must be true for $k + 1$.
By 1 and 2, we can show that the formula is true for any positive integer $n$, simply by starting at $n=1$ and repeating the inductive step $n - 1$ times.
In the Bayesian Blocks algorithm, one can find the optimal binning for a single data point; so by analogy with our example above (full details are given in the Appendix of Scargle et al. 2012), if you can find the optimal binning for $k$ points, it's a short step to the optimal binning for $k + 1$ points.
So, rather than performing an exhaustive search of all possible bins, which would scale as $2^N$, the time to find the optimal binning instead scales as $N^2$.
Playing with Blocks
We will begin playing with (Bayesian) blocks with a simple implementation, outlined by Jake VanderPlas in this blog: https://jakevdp.github.io/blog/2012/09/12/dynamic-programming-in-python/
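Once the function above is defined, the basic usage pattern (shown here only as an illustrative sketch — you will apply it yourself in the problems below) is to pass the returned edges straight to plt.hist:
edges = bayesian_blocks(x)
plt.hist(x, bins=edges, density=True, histtype='step')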
End of explanation
# complete
plt.hist(
Explanation: Problem 2a
Let's visualize our data again, but this time we will use Bayesian Blocks.
Plot a standard histogram (as above), but now plot the Bayesian Blocks representation of the distribution over it.
End of explanation
# complete
Explanation: Problem 2b
How is the Bayesian Blocks representation different or similar?
How might your choice of representation affect your scientific conclusions about your data?
Take a few min to discuss this with your partner
If you are using histograms for analysis, you might infer physical meaning from the presence or absence of features in these distributions. As it happens, histograms of time-tagged event data are often used to characterize physical events in time domain astronomy, for example gamma ray bursts or stellar flares.
Problem 3) Bayesian Blocks in the wild
Now we'll apply Bayesian Blocks to some real astronomical data, and explore how our visualization choices may affect our scientific conclusions.
First, let's get some data!
All data from NASA missions is hosted on the Mikulski Archive for Space Telescopes (aka MAST). As an aside, the M in MAST used to stand for "Multimission", but was changed to honor Sen. Barbara Mikulski (D-MD) for her tireless support of science.
Some MAST data (mostly the original data products) can be directly accessed using astroquery (there's an extensive guide to interacting with MAST via astroquery here: https://astroquery.readthedocs.io/en/latest/mast/mast.html).
In addition, MAST also hosts what are called "Higher Level Science Products", or HLSPs, which are data derived by science teams in the course of doing their analyses. You can see a full list of HLSPs here: https://archive.stsci.edu/hlsp/hlsp-table
These data tend to be more heterogeneous, and so are not currently accessible through astroquery (for the most part). They will be added in the future. But never fear! You can also submit SQL queries via MAST's CasJobs interface.
Go to the MAST CasJobs http://mastweb.stsci.edu/mcasjobs/home.aspx
If I have properly remembered to tell you to create a MAST CasJobs login, you can login now (and if not, just go ahead and sign up now, it's fast)!
We will be working with the table of new planet radii by Berger et al. (2019).
If you like, you can check out the paper here! https://arxiv.org/pdf/1805.00231.pdf
From the "Query" tab, select "HLSP_KG_RADII" from the Context drop-down menu.
You can then enter your query. In this example, we are doing a simple query to get all the KG-RADII radii and fluxes from the exoplanets catalog, which you could use to reproduce the first figure, above. For short queries that can execute in less than 60 seconds, you can hit the "Quick" button and the results of your query will be displayed below, where you can export them as needed. For longer queries like this one, you can select into an output table (otherwise a default like MyDB.MyTable will be used), hit the "Submit" button, and when finished your output table will be available in the MyDB tab.
Problem 3a
Write a SQL query to fetch this table from MAST using CasJobs.
Your possible variables are KIC_ID, KOI_ID, Planet_Radius, Planet_Radius_err_upper, Planet_Radius_err_lower, Incident_Flux, Incident_Flux_err_upper, Incident_Flux_err_lower, AO_Binary_Flag
For very short queries you can use "Quick" for your query; this table is large enough that you should use "Submit".
Hint: You will want to SELECT some stuff FROM a table called exoplanet_parameters
Solution 3a
Write your SQL query here
Once your query has completed, you will go to the MyDB tab to see the tables you have generated (in the menu at left). From here, you can click on a table, and select Download. I would recommend downloading your file as a CSV (comma-separated value) file, as CSVs are simple and can easily be read into Python via a variety of methods.
Problem 3b
Time to read in the data! There are several ways of importing a csv into python... choose your favorite and load in the table you downloaded.
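For example, with pandas (the file name below is just a placeholder for whatever you called your download):
import pandas as pd
planets = pd.read_csv('my_kg_radii_table.csv')   # placeholder filename
planets.head()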
End of explanation
# complete
Explanation: Problem 3c
Let's look at the distribution of the small planet radii in this table, which are given in units of Earth radii. Select the planets whose radii are Neptune-sized or smaller.
Select the planet radii for all planets smaller than Neptune in the table, and visualize the distribution of planet radii using a standard histogram.
End of explanation
# complete
Explanation: Problem 3d
What features do you see in the histogram of planet radii? Which of these features are important?
Discuss with your partner
Solution 3d
Write your answer here
Problem 3e
Now let's try visualizing these data using Bayesian Blocks. Please recreate the histogram you plotted above, and then plot the Bayesian Blocks version over it.
End of explanation
import astropy.stats.bayesian_blocks as bb
Explanation: Problem 3f
What features do you see in the histogram of planet radii? Which of these features are important?
Discuss with your partner.
Hint: maybe you should look at some of the comments in the implementation of Bayesian Blocks we are using
Solution 3f
Write your answer here
OK, so in this case, the Bayesian Blocks representation of the data looks fairly different. A couple things might stand out to you:
* There are large spikes in the Bayesian Blocks representation that are not present in the standard histogram
* If we're just looking at the Bayesian Block representation, it's not totally clear whether one should believe that there are two peaks in the distribution.
HMMMMMmmmm....
Wait! Jake VDP told us to watch out for this implementation. Maybe we should use something a little more official instead...
GOOD NEWS~! There is a Bayesian Blocks implementation included in astropy. Let's try that.
http://docs.astropy.org/en/stable/api/astropy.stats.bayesian_blocks.html
Important note
There is a known issue in the astropy implementation of Bayesian Blocks; see: https://github.com/astropy/astropy/issues/8317
It is possible this issue will be fixed in a future release, but in the event this problem arises for you, you will need to edit bayesian_blocks.py to include the following else statement (see issue link above for exact edit):
if self.ncp_prior is None:
ncp_prior = self.compute_ncp_prior(N)
else:
ncp_prior = self.ncp_prior
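For reference, a minimal call to the astropy routine looks roughly like this (imported under a different name so it does not shadow the bayesian_blocks function we wrote earlier; radii stands for whatever array you selected in Problem 3c, and p0 — the false-alarm probability controlling the prior — is just an illustrative choice):
from astropy.stats import bayesian_blocks as astropy_bb
edges = astropy_bb(radii, fitness='events', p0=0.01)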
End of explanation
# complete
Explanation: Problem 3g
Please repeat the previous problem, but this time use the astropy implementation of Bayesian Blocks.
End of explanation
# complete
Explanation: Putting these results in context
Both standard histograms and KDEs can be useful for quickly visualizing data, and in some cases, getting an intuition for the underlying PDF of your data.
However, keep in mind that they both involve making parameter choices that are largely not motivated in any quantitative way. These choices can create wildly misleading representations of your data.
In particular, your choices may lead you to make a physical interpretation that may or may not be correct (in our example, bear in mind that the observed distribution of exoplanetary radii informs models of planet formation).
Bayesian Blocks is more than just a variable-width histogram
While KDEs often do a better job of visualizing data than standard histograms do, they also create a loss of information. Philosophically speaking, what Bayesian Blocks do is posit that the "change points", also known as the bin edges, contain information that is interesting. When you apply a KDE, you are smoothing your data by creating an approximation, and that can mean you are losing potential insights by removing information.
While Bayesian Blocks are useful as a replacement for histograms in general, their ability to identify change points makes them especially useful for time series analysis.
Problem 4) Bayesian Blocks for Time Series Analysis
While Bayesian Blocks can be very useful as a simple replacement for histograms, one of its great strengths is in finding "change points" in time series data. Finding these change points can be useful for discovering interesting events in time series data you already have, and it can be used in real-time to detect changes that might trigger another action (for example, follow up observations for LSST).
Let's take a look at a few examples of using Bayesian Blocks in the time series context.
First and foremost, it's important to understand the difference between various kinds of time series data.
Event data come from photon counting instruments. In these data, the time series typically consists of photon arrival times, usually in a particular range of energies that the instrument is sensitive to. Event data are univariate, in that the time series is "how many photons at a given time", or "how many photons in a given chunk of time".
Point measurements are measurements of a (typically) continuous source at a given moment in time, often with some uncertainty associated with the measurement. These data are multivariate, as your time series relates time, your measurement (e.g. flux, magnitude, etc) and its associated uncertainty to one another.
Problem 4a
Let's look at some event data from BATSE, a high energy astrophysics experiment that flew on NASA's Compton Gamma-Ray Observatory. BATSE primarily studied gamma ray bursts (GRBs), capturing its detections in four energy channels: ~25-55 keV, 55-110 keV, 110-320 keV, and >320 keV.
You have been given four text files that record one of the BATSE GRB detections. Please read these data in.
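A minimal reading sketch, assuming each file is a plain one-column list of photon arrival times (the file names here are placeholders — substitute the actual names of the files you were given):
channels = [np.loadtxt('batse_channel{}.txt'.format(i)) for i in range(1, 5)]   # placeholder names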
End of explanation
# complete
Explanation: Problem 4b
When you reach this point, you and your partner should pick a number between 1 and 4; your number is the channel whose data you will work with.
Using the data for your channel, please visualize the photon events in both a standard histogram and using Bayesian Blocks.
End of explanation
# execute this cell
from astropy.io import fits
kplr_hdul = fits.open('./data/kplr009726699-2011271113734_llc.fits')
Explanation: Problem 4c
Let's take a moment to reflect on the differences between these two representations of our data.
Please discuss with your partner:
* How many bursts are present in these two representations?
* How accurately would you be able to identify the time of the burst(s) from these representations? What about other quantities?
For the groups who worked with Channel 3:
You may have noticed very sharp features in your blocks representation of the data. Are they real?
To quote Jake VdP:
Simply put, there are spikes because the piecewise constant likelihood model says that spikes are favored. By saying that the spikes seem unphysical, you are effectively adding a prior on the model based on your intuition of what it should look like.
To quote Jeff Scargle:
Trust the algorithm!
Problem 5: Finding flares
As we have just seen, Bayesian Blocks can be very useful for finding transient events-- it worked great on our photon counts from BATSE! Let's try it on some slightly more complicated data: lightcurves from NASA's Kepler mission. Kepler's data consists primarily of point measures (rather than events)-- a Kepler lightcurve is just the change in the brightness of the star over time (with associated uncertainties).
Problem 5a
People often speak of transients (like the GRB we worked with above) and variables (like RR Lyrae or Cepheid stars) as being two completely different categories of changeable astronomical objects. However, some objects exhibit both variability and transient events. Magnetically active stars are one example of these objects: many of them have starspots that rotate into and out of view, creating periodic (or semi-periodic) variability, but they also have flares, magnetic reconnection events that create sudden, rapid changes in the stellar brightness.
A challenge in identifying flares is that they often appear against a background that is itself variable. While there are many approaches to fitting both quiescent and flare variability (Gaussian processes, which you saw earlier this week, are often used for exactly this purpose!), they can be very time consuming.
Let's read in some data, and see whether Bayesian Blocks can help us here.
End of explanation
# execute this cell
kplr_hdul.info()
Explanation: Cool, we have loaded in the FITS file. Let's look at what's in it:
End of explanation
# execute this cell
lcdata = kplr_hdul[1].data
lcdata.columns
# execute this cell
t = lcdata['TIME']
f = lcdata['PDCSAP_FLUX']
e = lcdata['PDCSAP_FLUX_ERR']
t = t[~np.isnan(f)]
e = e[~np.isnan(f)]
f = f[~np.isnan(f)]
nf = f / np.median(f)
ne = e / np.median(f)
Explanation: We want the light curve, so let's check out what's in that part of the file!
End of explanation
# complete
Explanation: Problem 5b
Use a scatter plot to visualize the Kepler lightcurve. I strongly suggest you try displaying it at different scales by zooming in (or playing with the axis limits), so that you can get a better sense of the shape of the lightcurve.
End of explanation
# complete
edges =
#
#
#
plt.step(
Explanation: Problem 5c
These data consist of a variable background, with occasional bright points caused by stellar flares.
Brainstorm possible approaches to find the flare events in this data. Write down your ideas, and discuss their potential advantages, disadvantages, and any pitfalls that you think might arise.
Discuss with your partner
Solution 5c
Write your notes here
There are lots of possible ways to approach this problem. In the literature, a very common traditional approach has been to fit the background variability while ignoring the outliers, then to subtract the background fit, and flag any point beyond some threshold value as belonging to a flare. More sophisticated approaches also exist, but they are often quite time consuming (and in many cases, detailed fits require a good starting estimate for the locations of flare events).
Recall, however, that Bayesian Blocks is particularly effective at identifying change points in our data. Let's see if it can help us in this case!
Problem 5d
Use Bayesian Blocks to visualize the Kepler lightcurve. Note that you are now using data that consists of point measurements, rather than event data (as for the BATSE example).
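Since these are point measurements with uncertainties, the astropy implementation uses the 'measures' fitness function; a sketch using the t, nf, and ne arrays defined earlier might look like this:
from astropy.stats import bayesian_blocks as astropy_bb
edges = astropy_bb(t, nf, ne, fitness='measures')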
End of explanation
# complete
your_data =
# complete
def stratified_bayesian_blocks(
Explanation: Concluding Remarks
As you can see, using Bayesian Blocks allowed us to represent the data, including both quiescent variability and flares, without having to smooth, clip, or otherwise alter the data.
One potential drawback here is that the change points don't identify the flares themselves, or at least don't identify them as being different from the background variability-- the algorithm identifies change points, but does not know anything about what change points might be interesting to you in particular.
Another potential drawback is that it is possible that the Bayesian Block representation may not catch all events, or at least may not, on its own, provide an unambiguous sign that a subtle signal of interest is in the data.
In these cases, it is sometimes instructive to use a hybrid approach, where one combines the bins determined from a traditional histogram with the Bayesian Blocks change points. Alternatively, if one has a good model for the background, one can compare the blocks representation of a background-only simulated data set with one containing both background and model signal.
Further interesting examples (in the area of high energy physics and astrophysics) are provided by this paper:
https://arxiv.org/abs/1708.00810
Challenge Problem
Bayesian Blocks is so great! However, there are occasions in which it doesn't perform all that well-- particularly when there are large numbers of repeated values in the data. Jan Florjanczyk, a senior data scientist at Netflix, has written up a description of the problem, and implemented a version of Bayesian Blocks that does a better job on data with repeating values. Read his blog post on "Stratified Bayesian Blocks" and try implementing it!
https://medium.com/@janplus/stratified-bayesian-blocks-2bd77c1e6cc7
You can either apply this to your favorite data set, or you can use it on the stellar parameters table associated with the data set we pulled from MAST earlier (recall that you previously worked with the exoplanet parameters table). You can check out the fields you can search on using MAST CasJobs, or just download a FITS file of the full table, here: https://archive.stsci.edu/prepds/kg-radii/
<sub>This GIF is of Dan Shiffman, who has a Youtube channel called Coding Train</sub>
End of explanation |
2,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
States
A Riemann Problem is specified by the state of the material to the left and right of the interface. In this hydrodynamic problem, the state is fully determined by an equation of state and the variables
$$
{\bf U} = \begin{pmatrix} \rho_0 \\ v_x \\ v_t \\ \epsilon \end{pmatrix},
$$
where $\rho_0$ is the rest-mass density, $v_x$ the velocity normal to the interface, $v_t$ the velocity tangential to the interface, and $\epsilon$ the specific internal energy.
Defining a state
In r3d2 we define a state from an equation of state and the values of the key variables
Step1: Inside the notebook, the state will automatically display the values of the key variables
Step2: Adding a label to the state for output purposes requires an extra keyword
Step3: Reactive states
If the state has energy available for reactions, that information is built into the equation of state. The definition of the equation of state changes
Step4: Additional functions
A state knows its own wavespeeds. Given a wavenumber (the left acoustic wave is 0, the middle contact or advective wave is 1, and the right acoustic wave is 2), we have
Step5: A state will return the key primitive variables ($\rho, v_x, v_t, \epsilon$)
Step6: A state will return all the variables it computes, which is $\rho, v_x, v_t, \epsilon, p, W, h, c_s$ | Python Code:
from r3d2 import eos_defns, State
eos = eos_defns.eos_gamma_law(5.0/3.0)
U = State(1.0, 0.1, 0.0, 2.0, eos)
Explanation: States
A Riemann Problem is specified by the state of the material to the left and right of the interface. In this hydrodynamic problem, the state is fully determined by an equation of state and the variables
$$
{\bf U} = \begin{pmatrix} \rho_0 \\ v_x \\ v_t \\ \epsilon \end{pmatrix},
$$
where $\rho_0$ is the rest-mass density, $v_x$ the velocity normal to the interface, $v_t$ the velocity tangential to the interface, and $\epsilon$ the specific internal energy.
Defining a state
In r3d2 we define a state from an equation of state and the values of the key variables:
End of explanation
U
Explanation: Inside the notebook, the state will automatically display the values of the key variables:
End of explanation
U2 = State(10.0, -0.3, 0.1, 5.0, eos, label="L")
U2
Explanation: Adding a label to the state for output purposes requires an extra keyword:
End of explanation
q_available = 0.1
t_ignition = 10.0
Cv = 1.0
eos_reactive = eos_defns.eos_gamma_law_react(5.0/3.0, q_available, Cv, t_ignition, eos)
U_reactive = State(5.0, 0.1, 0.1, 2.0, eos_reactive, label="Reactive")
U_reactive
Explanation: Reactive states
If the state has energy available for reactions, that information is built into the equation of state. The definition of the equation of state changes: the definition of the state itself does not:
End of explanation
print("Left wavespeed of first state is {}".format(U.wavespeed(0)))
print("Middle wavespeed of second state is {}".format(U2.wavespeed(1)))
print("Right wavespeed of reactive state is {}".format(U.wavespeed(2)))
Explanation: Additional functions
A state knows its own wavespeeds. Given a wavenumber (the left acoustic wave is 0, the middle contact or advective wave is 1, and the right acoustic wave is 2), we have:
End of explanation
print("Primitive variables of first state are {}".format(U.prim()))
Explanation: A state will return the key primitive variables ($\rho, v_x, v_t, \epsilon$):
End of explanation
print("All variables of second state are {}".format(U.state()))
Explanation: A state will return all the variables it computes, which is $\rho, v_x, v_t, \epsilon, p, W, h, c_s$: the primitive variables as above, the pressure $p$, Lorentz factor $W$, specific enthalpy $h$, and speed of sound $c_s$:
End of explanation |
2,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous
Step1: Import section specific modules
Step2: 1.6.1 Synchrotron Emission
Step3: Figure 1.6.1 Example path of a charged particle accelerated in a magnetic field
The frequency of gyration in the non-relativistic case is simply
$$\omega = \frac{qB}{mc} $$
For synchrotron radiation, this gets modified to
$$\omega_{G}= \frac{qB}{\gamma mc} $$
since, in the relativistic case, the mass is modified to $m \rightarrow \gamma m$.
In the non-relativistic case (i.e. cyclotron radiation) the frequency of gyration corresponds to the frequency of the emitted radiation. If this was also the case for the synchrotron radiation then, for magnetic fields typically found in galaxies (a few micro-Gauss or so), the resultant frequency would be less than one Hertz! Fortunately the relativistic beaming and Doppler effects come into play, increasing the frequency of the observed radiation by a factor of about $\gamma^{3}$. This brings the radiation into the radio regime. This frequency, also known as the 'critical frequency', is where most of the emission takes place. It is given by
$$\nu_{c} \propto \gamma^{3}\nu_{G} \propto E^{2}$$
<span style="background-color
Step4: Figure 1.6.2 Cygnus A | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.5 Black body radiation
Next: 1.7 Line emission
Section status: <span style="background-color:orange"> </span>
Import standard modules:
End of explanation
from IPython.display import Image
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
Image(filename='figures/drawing.png', width=300)
Explanation: 1.6.1 Synchrotron Emission:
Synchrotron emission is one of the most commonly encountered forms of radiation found from astronomical radio sources. This type of radiation originates from relativistic particles being accelerated in a magnetic field.
The mechanism by which synchrotron emission occurs depends fundamentally on special relativistic effects. We won't delve into the details here. Instead we will try to explain (in a rather hand wavy way) some of the underlying physics. As we have seen in $\S$ 1.2.1 ➞,
<span style="background-color:cyan"> LB:RF:this is the original link but I don't think it points to the right place. Add a reference to where this is discussed and link to that. See also comment in previous section about where the Larmor formula is first introduced</span>
an accelerating charge emits radiation. The acceleration is a result of the charge moving through an ambient magnetic field. The non-relativistic Larmor formula for the radiated power is:
$$P= \frac{2}{3}\frac{q^{2}a^{2}}{c^{3}}$$
If the acceleration is a result of a magnetic field $B$, we get:
$$P=\frac{2}{3}\frac{q^{2}}{c^{3}}\frac{v_{\perp}^{2}B^{2}q^{2}}{m^{2}} $$
where $v_{\perp}$ is the component of velocity of the particle perpendicular to the magnetic field, $m$ is the mass of the charged particle, $q$ is its charge and $a$ is its acceleration. This is essentially cyclotron radiation. Relativistic effects (i.e. as $v_\perp \rightarrow c$) modify this to:
$$P = \gamma^{2} \frac{2}{3}\frac{q^{2}}{c^{3}}\frac{v_{\perp}^{2}B^{2}q^{2}}{m^{2}c^{2}} = \gamma^{2} \frac{2}{3}\frac{q^{4}}{c^{3}}\frac{v_{\perp}^{2}B^{2}}{m^{2}c^{2}} $$
where $$\gamma = \frac{1}{\sqrt{1-v^{2}/c^{2}}} = \frac{E}{mc^{2}} $$
<span style="background-color:yellow"> LB:IC: This is a very unusual form for the relativistic version of Larmor's formula. I suggest clarifying the derivation. </span>
is a measure of the energy of the particle. Non-relativistic particles have $\gamma \sim 1$ whereas relativistic and ultra-relativistic particles typically have $\gamma \sim 100$ and $\gamma \geq 1000$ respectively. Since $v_{\perp}= v \sin\alpha$, with $\alpha$ being the angle between the magnetic field and the velocity of the particle, the radiated power can be written as:
$$P=\gamma^{2} \frac{2}{3}\frac{q^{4}}{c^{3}}\frac{v^{2}B^{2}\sin^{2}\alpha}{m^{2}c^{2}} $$
From this equation it can be seen that the total power radiated by the particle depends on the strength of the magnetic field and that the higher the energy of the particle, the more power it radiates.
In analogy with the non-relativistic case, there is a frequency of gyration. This refers to the path the charged particle follows while being accelerated in a magnetic field. The figure below illustrates the idea.
End of explanation
Image(filename='figures/cygnusA.png')
Explanation: Figure 1.6.1 Example path of a charged particle accelerated in a magnetic field
The frequency of gyration in the non-relativistic case is simply
$$\omega = \frac{qB}{mc} $$
For synchrotron radiation, this gets modified to
$$\omega_{G}= \frac{qB}{\gamma mc} $$
since, in the relativistic case, the mass is modified to $m \rightarrow \gamma m$.
In the non-relativistic case (i.e. cyclotron radiation) the frequency of gyration corresponds to the frequency of the emitted radiation. If this was also the case for the synchrotron radiation then, for magnetic fields typically found in galaxies (a few micro-Gauss or so), the resultant frequency would be less than one Hertz! Fortunately the relativistic beaming and Doppler effects come into play, increasing the frequency of the observed radiation by a factor of about $\gamma^{3}$. This brings the radiation into the radio regime. This frequency, also known as the 'critical frequency', is where most of the emission takes place. It is given by
$$\nu_{c} \propto \gamma^{3}\nu_{G} \propto E^{2}$$
<span style="background-color:yellow"> LB:IC: The last sentence is not clear. Why is it called the critical frequency? How does it come about? </span>
So far we have discussed a single particle emitting synchrotron radiation. However, what we really want to know is what happens in the case of an ensemble of radiating particles. Since, in an (approximately) uniform magnetic field, the synchrotron emission depends only on the magnetic field and the energy of the particle, all we need is the distribution function of the particles. Denoting the distribution function of the particles as $N(E)$ (i.e. the number of particles at energy $E$ per unit volume per solid angle), the spectrum resulting from an ensemble of particles is:
$$ \epsilon(E) dE = N(E) P(E) dE $$
<span style="background-color:yellow"> LB:IC: Clarify what is $P(E)$. How does the spectrum come about? </span>
The usual assumption made about the distribution $N(E)$ (based also on the observed cosmic ray distribution) is that of a power law, i.e.
$$N(E)\,dE \propto E^{-\alpha}\,dE $$
Plugging in this and remembering that $P(E) \propto \gamma^{2} \propto E^{2}$, we get
$$ \epsilon(E) dE \propto E^{2-\alpha} dE $$
Shifting to the frequency domain
$$\epsilon(\nu) \propto \nu^{(1-\alpha)/2} $$
The usual value for $\alpha$ is 5/2 and since flux $S_{\nu} \propto \epsilon_{\nu}$
$$S_{\nu} \propto \nu^{-0.75} $$
This shows that the synchrotron flux is also a power law, if the underlying distribution of particles is a power law.
<span style="background-color:yellow"> LB:IC: The term spectral index is used below without being properly introduced. Introduce the notion of a spectral index here. </span>
This is approximately valid for 'fresh' collection of radiating particles. However, as mentioned above, the higher energy particles lose energy through radiation much faster than the lower energy particles. This means that the distribution of particles over time gets steeper at higher frequencies (which is where the contribution from the high energy particles comes in). As we will see below, this steepening of the spectral index is a typical feature of older plasma in astrophysical scenarios.
1.6.2 Sources of Synchrotron Emission:
So where do we actually see synchrotron emission? As mentioned above, the prerequisites are magnetic fields and relativistic particles. These conditions are satisfied in a variety of situations. Prime examples are the lobes of radio galaxies. The lobes contain relativistic plasma in magnetic fields of strength ~ $\mu$G. It is believed that these plasmas and magnetic fields ultimately originate from the activity in the center of radio galaxies where a supermassive black hole resides. The figure below shows a radio image of the radio galaxy nearest to us, Cygnus A.
End of explanation
# Data taken from Steenbrugge et al.,2010, MNRAS
freq=(151.0,327.5,1345.0,4525.0,8514.9,14650.0)
flux_L=(4746,2752.7,749.8,189.4,83.4,40.5)
flux_H=(115.7,176.4,69.3,45.2,20.8,13.4)
fig,ax = plt.subplots()
ax.loglog(freq,flux_L,'bo--',label='Lobe Flux')
ax.loglog(freq,flux_H,'g*-',label='Hotspot Flux')
ax.legend()
ax.set_xlabel("Frequency (MHz)")
ax.set_ylabel("Flux (Jy)")
Explanation: Figure 1.6.2 Cygnus A: Example of Synchrotron Emission
The jets, which carry relativistic charged particles or plasma originating from the centre of the host galaxy (marked as 'core' in the figure), collide with the surrounding medium at the places labelled as "hotspots" in the figure. The plasma responsible for the radio emission (the lobes) tends to stream backward from the hotspots. As a result we can expect the youngest plasma to reside in and around the hotspots. On the other hand, we can expect the plasma closest to the core to be the oldest. But is there a way to verify this?
Well, the non-thermal nature of the emission can be verified by measuring the spectrum of the radio emission. A spectral index close to -0.7 suggests, by the reasoning given above, that the radiation results from a synchrotron emission mechanism. The plots below show the spectrum of the lobes of Cygnus A within a frequency range of 150 MHz to 14.65 GHz.
<span style="background-color:cyan"> LB:RF: Add proper citation. </span>
End of explanation |
2,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step6: We need to define some parameters
Step7: We should check if everything is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurement info
All the measurement data is in the d variable. We can print it
Step10: Or check the measurement duration
Step11: Compute background
Compute the background using automatic threshold
Step12: Burst search and selection
Step14: Donor Leakage fit
Half-Sample Mode
Fit peak using the mode computed with the half-sample algorithm (Bickel 2005).
Step15: Gaussian Fit
Fit the histogram with a gaussian
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables | Python Code:
ph_sel_name = "Dex"
data_id = "22d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:36:04 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor channels, excitation period and donor and acceptor excitations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurement info
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurement duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
Explanation: Burst search and selection
End of explanation
def hsm_mode(s):
    """Half-sample mode (HSM) estimator of `s`.

    `s` is a sample from a continuous distribution with a single peak.

    Reference:
        Bickel, Fruehwirth (2005). arXiv:math/0505419
    """
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
Explanation: Donor Leakage fit
Half-Sample Mode
Fit peak using the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
Explanation: Gaussian Fit
Fit the histogram with a gaussian:
End of explanation
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
Explanation: KDE maximum
End of explanation
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
Explanation: Leakage summary
End of explanation
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst size distribution
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
2,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmarking Performance and Scaling of Python Clustering Algorithms
There are a host of different clustering algorithms and implementations thereof for Python. The performance and scaling can depend as much on the implementation as the underlying algorithm. Obviously a well written implementation in C or C++ will beat a naive implementation in pure Python, but there is more to it than just that. The internals and data structures used can have a large impact on performance, and can even significantly change asymptotic performance. All of this means that, given some amount of data that you want to cluster, your options as to algorithm and implementation may be significantly constrained. I'm both lazy and prefer empirical results for this sort of thing, so rather than analyzing the implementations and deriving asymptotic performance numbers for various implementations I'm just going to run everything and see what happens.
To begin with we need to get together all the clustering implementations, along with some plotting libraries so we can see what is going on once we've got data. Obviously this is not an exhaustive collection of clustering implementations, so if I've left off your favourite I apologise, but one has to draw a line somewhere.
The implementations being tested are
Step1: Now we need some benchmarking code at various dataset sizes. Because some clustering algorithms have performance that can vary quite a lot depending on the exact nature of the dataset we'll also need to run several times on randomly generated datasets of each size so as to get a better idea of the average case performance.
We also need to generalise over algorithms which don't necessarily all have the same API. We can resolve that by taking a clustering function, argument tuple and keywords dictionary to let us do semi-arbitrary calls (fortunately all the algorithms do at least take the dataset to cluster as the first parameter).
Finally some algorithms scale poorly, and I don't want to spend forever doing clustering of random datasets so we'll cap the maximum time an algorithm can use; once it has taken longer than max time we'll just abort there and leave the remaining entries in our datasize by samples matrix unfilled.
In the end this all amounts to a fairly straightforward set of nested loops (over datasizes and number of samples) with calls to sklearn to generate mock data and the clustering function inside a timer. Add in some early abort and we're done.
Step2: Comparison of all ten implementations
Now we need a range of dataset sizes to test out our algorithm. Since the scaling performance is wildly different over the ten implementations we're going to look at it will be beneficial to have a number of very small dataset sizes, and increasing spacing as we get larger, spanning out to 32000 datapoints to cluster (to begin with). Numpy provides convenient ways to get this done via arange and vector multiplication. We'll start with step sizes of 500, then shift to steps of 1000 past 3000 datapoints, and finally steps of 2000 past 6000 datapoints.
Step3: Now it is just a matter of running all the clustering algorithms via our benchmark function to collect up all the requisite data. This could be prettier, rolled up into functions appropriately, but sometimes brute force is good enough. More importantly (for me) since this can take a significant amount of compute time, I wanted to be able to comment out algorithms that were slow or I was uninterested in easily. Which brings me to a warning for you the reader and potential user of the notebook
Step4: Now we need to plot the results so we can see what is going on. The catch is that we have several datapoints for each dataset size and ultimately we would like to try and fit a curve through all of it to get the general scaling trend. Fortunately seaborn comes to the rescue here by providing regplot which plots a regression through a dataset, supports higher order regression (we should probably use order two as most algorithms are effectively quadratic) and handles multiple datapoints for each x-value cleanly (using the x_estimator keyword to put a point at the mean and draw an error bar to cover the range of data).
Step5: A few features stand out. First of all there appear to be essentially two classes of implementation, with DeBaCl being an odd case that falls in the middle. The fast implementations tend to be implementations of single linkage agglomerative clustering, K-means, and DBSCAN. The slow cases are largely from sklearn and include agglomerative clustering (in this case using Ward instead of single linkage).
For practical purposes this means that if you have much more than 10000 datapoints your clustering options are significantly constrained
Step6: Again we can use seaborn to do curve fitting and plotting, exactly as before.
Step7: Clearly something has gone woefully wrong with the curve fitting for the scipy single linkage implementation, but what exactly? If we look at the raw data we can see.
Step8: It seems that at around 44000 points we hit a wall and the runtimes spiked. A hint is that I'm running this on a laptop with 8GB of RAM. Both single linkage algorithms use scipy.spatial.pdist to compute pairwise distances between points, which returns an array of shape (n(n-1)/2, 1) of doubles. A quick computation shows that that array of distances is quite large once we have 44000 points
Step9: If we assume that my laptop is keeping much else in RAM besides that distance array, then clearly we are going to spend time paging the distance array out to disk and back, and hence we will see the runtimes increase dramatically as we become disk IO bound. If we just leave off the last element we can get a better idea of the curve, but keep in mind that the scipy single linkage implementation does not scale past a limit set by your available RAM.
Step10: If we're looking for scaling we can write off the scipy single linkage implementation -- even if we didn't hit the RAM limit, the $O(n^2)$ scaling is going to quickly catch up with us. Fastcluster has the same asymptotic scaling, but is heavily optimized to bring the constant down much lower -- at this point it is still keeping close to the faster algorithms. Its asymptotics will still catch up with it eventually, however.
In practice this is going to mean that for larger datasets you are going to be very constrained in what algorithms you can apply
Step11: Now some differences become clear. The asymptotic complexity starts to kick in, with fastcluster failing to keep up. In turn HDBSCAN and DBSCAN, while having sub-$O(n^2)$ complexity, can't achieve $O(n \log(n))$ at this dataset dimension, and start to curve upward precipitously. Finally it demonstrates again how much of a difference implementation can make
Step12: Now we run that for each of our pre-existing datasets to extrapolate out predicted performance on the relevant dataset sizes. A little pandas wrangling later and we've produced a table of roughly how large a dataset you can tackle in each time frame with each implementation. I had to leave out the scipy KMeans timings because the noise in timing results caused the model to be unrealistic at larger data sizes. Note how the $O(n\log n)$ algorithms utterly dominate here. In the meantime, for medium-sized datasets you can still get quite a lot done with HDBSCAN. | Python Code:
import hdbscan
import debacl
import fastcluster
import sklearn.cluster
import scipy.cluster
import sklearn.datasets
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context('poster')
sns.set_palette('Paired', 10)
sns.set_color_codes()
Explanation: Benchmarking Performance and Scaling of Python Clustering Algorithms
There are a host of different clustering algorithms and implementations thereof for Python. The performance and scaling can depend as much on the implementation as the underlying algorithm. Obviously a well written implementation in C or C++ will beat a naive implementation in pure Python, but there is more to it than just that. The internals and data structures used can have a large impact on performance, and can even significantly change asymptotic performance. All of this means that, given some amount of data that you want to cluster, your options as to algorithm and implementation may be significantly constrained. I'm both lazy and prefer empirical results for this sort of thing, so rather than analyzing the implementations and deriving asymptotic performance numbers for various implementations I'm just going to run everything and see what happens.
To begin with we need to get together all the clustering implementations, along with some plotting libraries so we can see what is going on once we've got data. Obviously this is not an exhaustive collection of clustering implementations, so if I've left off your favourite I apologise, but one has to draw a line somewhere.
The implementations being tested are:
Sklearn (which implements several algorithms):
K-Means clustering
DBSCAN clustering
Agglomerative clustering
Spectral clustering
Affinity Propagation
Scipy (which provides basic algorithms):
K-Means clustering
Agglomerative clustering
Fastcluster (which provides very fast agglomerative clustering in C++)
DeBaCl (Density Based Clustering; similar to a mix of DBSCAN and Agglomerative)
HDBSCAN (A robust hierarchical version of DBSCAN)
Obviously a major factor in performance will be the algorithm itself. Some algorithms are simply slower -- often, but not always, because they are doing more work to provide a better clustering.
End of explanation
def benchmark_algorithm(dataset_sizes, cluster_function, function_args, function_kwds,
dataset_dimension=10, dataset_n_clusters=10, max_time=45, sample_size=2):
# Initialize the result with NaNs so that any unfilled entries
# will be considered NULL when we convert to a pandas dataframe at the end
result = np.nan * np.ones((len(dataset_sizes), sample_size))
for index, size in enumerate(dataset_sizes):
for s in range(sample_size):
# Use sklearns make_blobs to generate a random dataset with specified size
# dimension and number of clusters
data, labels = sklearn.datasets.make_blobs(n_samples=size,
n_features=dataset_dimension,
centers=dataset_n_clusters)
# Start the clustering with a timer
start_time = time.time()
cluster_function(data, *function_args, **function_kwds)
time_taken = time.time() - start_time
# If we are taking more than max_time then abort -- we don't
# want to spend excessive time on slow algorithms
if time_taken > max_time:
result[index, s] = time_taken
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
else:
result[index, s] = time_taken
# Return the result as a dataframe for easier handling with seaborn afterwards
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
Explanation: Now we need some benchmarking code at various dataset sizes. Because some clustering algorithms have performance that can vary quite a lot depending on the exact nature of the dataset we'll also need to run several times on randomly generated datasets of each size so as to get a better idea of the average case performance.
We also need to generalise over algorithms which don't necessarily all have the same API. We can resolve that by taking a clustering function, argument tuple and keywords dictionary to let us do semi-arbitrary calls (fortunately all the algorithms do at least take the dataset to cluster as the first parameter).
Finally some algorithms scale poorly, and I don't want to spend forever doing clustering of random datasets so we'll cap the maximum time an algorithm can use; once it has taken longer than max time we'll just abort there and leave the remaining entries in our datasize by samples matrix unfilled.
In the end this all amounts to a fairly straightforward set of nested loops (over datasizes and number of samples) with calls to sklearn to generate mock data and the clustering function inside a timer. Add in some early abort and we're done.
End of explanation
dataset_sizes = np.hstack([np.arange(1, 6) * 500, np.arange(3,7) * 1000, np.arange(4,17) * 2000])
Explanation: Comparison of all ten implementations
Now we need a range of dataset sizes to test out our algorithm. Since the scaling performance is wildly different over the ten implementations we're going to look at it will be beneficial to have a number of very small dataset sizes, and increasing spacing as we get larger, spanning out to 32000 datapoints to cluster (to begin with). Numpy provides convenient ways to get this done via arange and vector multiplication. We'll start with step sizes of 500, then shift to steps of 1000 past 3000 datapoints, and finally steps of 2000 past 6000 datapoints.
End of explanation
k_means = sklearn.cluster.KMeans(10)
k_means_data = benchmark_algorithm(dataset_sizes, k_means.fit, (), {})
dbscan = sklearn.cluster.DBSCAN(eps=1.25)
dbscan_data = benchmark_algorithm(dataset_sizes, dbscan.fit, (), {})
scipy_k_means_data = benchmark_algorithm(dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {})
scipy_single_data = benchmark_algorithm(dataset_sizes,
scipy.cluster.hierarchy.single, (), {})
fastclust_data = benchmark_algorithm(dataset_sizes,
fastcluster.linkage_vector, (), {})
hdbscan_ = hdbscan.HDBSCAN()
hdbscan_data = benchmark_algorithm(dataset_sizes, hdbscan_.fit, (), {})
debacl_data = benchmark_algorithm(dataset_sizes,
debacl.geom_tree.geomTree, (5, 5), {'verbose':False})
agglomerative = sklearn.cluster.AgglomerativeClustering(10)
agg_data = benchmark_algorithm(dataset_sizes,
agglomerative.fit, (), {}, sample_size=4)
spectral = sklearn.cluster.SpectralClustering(10)
spectral_data = benchmark_algorithm(dataset_sizes,
spectral.fit, (), {}, sample_size=6)
affinity_prop = sklearn.cluster.AffinityPropagation()
ap_data = benchmark_algorithm(dataset_sizes,
affinity_prop.fit, (), {}, sample_size=3)
Explanation: Now it is just a matter of running all the clustering algorithms via our benchmark function to collect up all the requisite data. This could be prettier, rolled up into functions appropriately, but sometimes brute force is good enough. More importantly (for me) since this can take a significant amount of compute time, I wanted to be able to comment out algorithms that were slow or I was uninterested in easily. Which brings me to a warning for you the reader and potential user of the notebook: this next step is very expensive. We are running ten different clustering algorithms multiple times each on twenty two different dataset sizes -- and some of the clustering algorithms are slow (we are capping out at forty five seconds per run). That means that the next cell can take an hour or more to run. That doesn't mean "Don't try this at home" (I actually encourage you to try this out yourself and play with dataset parameters and clustering parameters) but it does mean you should be patient if you're going to!
End of explanation
sns.regplot(x='x', y='y', data=k_means_data, order=2,
label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=dbscan_data, order=2,
label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_k_means_data, order=2,
label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=hdbscan_data, order=2,
label='HDBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=fastclust_data, order=2,
label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_single_data, order=2,
label='Scipy Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=debacl_data, order=2,
label='DeBaCl Geom Tree', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=spectral_data, order=2,
label='Sklearn Spectral', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=agg_data, order=2,
label='Sklearn Agglomerative', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=ap_data, order=2,
label='Sklearn Affinity Propagation', x_estimator=np.mean)
plt.gca().axis([0, 34000, 0, 120])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Clustering Implementations')
plt.legend()
Explanation: Now we need to plot the results so we can see what is going on. The catch is that we have several datapoints for each dataset size and ultimately we would like to try and fit a curve through all of it to get the general scaling trend. Fortunately seaborn comes to the rescue here by providing regplot which plots a regression through a dataset, supports higher order regression (we should probably use order two as most algorithms are effectively quadratic) and handles multiple datapoints for each x-value cleanly (using the x_estimator keyword to put a point at the mean and draw an error bar to cover the range of data).
End of explanation
large_dataset_sizes = np.arange(1,16) * 4000
hdbscan_boruvka = hdbscan.HDBSCAN(algorithm='boruvka_kdtree')
large_hdbscan_boruvka_data = benchmark_algorithm(large_dataset_sizes,
hdbscan_boruvka.fit, (), {},
max_time=90, sample_size=1)
k_means = sklearn.cluster.KMeans(10)
large_k_means_data = benchmark_algorithm(large_dataset_sizes,
k_means.fit, (), {},
max_time=90, sample_size=1)
dbscan = sklearn.cluster.DBSCAN(eps=1.25, min_samples=5)
large_dbscan_data = benchmark_algorithm(large_dataset_sizes,
dbscan.fit, (), {},
max_time=90, sample_size=1)
large_fastclust_data = benchmark_algorithm(large_dataset_sizes,
fastcluster.linkage_vector, (), {},
max_time=90, sample_size=1)
large_scipy_k_means_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {},
max_time=90, sample_size=1)
large_scipy_single_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.hierarchy.single, (), {},
max_time=90, sample_size=1)
Explanation: A few features stand out. First of all there appear to be essentially two classes of implementation, with DeBaCl being an odd case that falls in the middle. The fast implementations tend to be implementations of single linkage agglomerative clustering, K-means, and DBSCAN. The slow cases are largely from sklearn and include agglomerative clustering (in this case using Ward instead of single linkage).
For practical purposes this means that if you have much more than 10000 datapoints your clustering options are significantly constrained: sklearn spectral, agglomerative and affinity propagation are going to take far too long. DeBaCl may still be an option, but given that the hdbscan library provides "robust single linkage clustering" equivalent to what DeBaCl is doing (and with effectively the same runtime as hdbscan as it is a subset of that algorithm) it is probably not the best choice for large dataset sizes.
So let's drop out those slow algorithms so we can scale out a little further and get a closer look at the various algorithms that managed 32000 points in under thirty seconds. There is almost undoubtedly more to learn as we get ever larger dataset sizes.
Comparison of fast implementations
Let's compare the six fastest implementations now. We can scale out a little further as well; based on the curves above it looks like we should be able to comfortably get to 60000 data points without taking much more than a minute per run. We can also note that most of these implementations weren't that noisy so we can get away with a single run per dataset size.
End of explanation
sns.regplot(x='x', y='y', data=large_k_means_data, order=2,
label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2,
label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2,
label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_boruvka_data, order=2,
label='HDBSCAN Boruvka', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data, order=2,
label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data, order=2,
label='Scipy Single Linkage', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
Explanation: Again we can use seaborn to do curve fitting and plotting, exactly as before.
End of explanation
large_scipy_single_data.tail(10)
Explanation: Clearly something has gone woefully wrong with the curve fitting for the scipy single linkage implementation, but what exactly? If we look at the raw data we can see.
End of explanation
size_of_array = 44000 * (44000 - 1) / 2 # from pdist documentation
bytes_in_array = size_of_array * 8 # Since doubles use 8 bytes
gigabytes_used = bytes_in_array / (1024.0 ** 3) # divide out to get the number of GB
gigabytes_used
Explanation: It seems that at around 44000 points we hit a wall and the runtimes spiked. A hint is that I'm running this on a laptop with 8GB of RAM. Both single linkage algorithms use scipy.spatial.pdist to compute pairwise distances between points, which returns an array of shape (n(n-1)/2, 1) of doubles. A quick computation shows that that array of distances is quite large once we have 44000 points:
End of explanation
sns.regplot(x='x', y='y', data=large_k_means_data, order=2,
label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2,
label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2,
label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_boruvka_data, order=2,
label='HDBSCAN Boruvka', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data, order=2,
label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data[:8], order=2,
label='Scipy Single Linkage', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
Explanation: If we assume that my laptop is keeping much else in RAM besides that distance array, then clearly we are going to spend time paging the distance array out to disk and back, and hence we will see the runtimes increase dramatically as we become disk IO bound. If we just leave off the last element we can get a better idea of the curve, but keep in mind that the scipy single linkage implementation does not scale past a limit set by your available RAM.
End of explanation
huge_dataset_sizes = np.arange(1,11) * 20000
k_means = sklearn.cluster.KMeans(10)
huge_k_means_data = benchmark_algorithm(huge_dataset_sizes,
k_means.fit, (), {},
max_time=120, sample_size=2, dataset_dimension=10)
dbscan = sklearn.cluster.DBSCAN(eps=1.5)
huge_dbscan_data = benchmark_algorithm(huge_dataset_sizes,
dbscan.fit, (), {},
max_time=120, sample_size=2, dataset_dimension=10)
huge_scipy_k_means_data = benchmark_algorithm(huge_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {},
max_time=120, sample_size=2, dataset_dimension=10)
hdbscan_boruvka = hdbscan.HDBSCAN(algorithm='boruvka_kdtree')
huge_hdbscan_data = benchmark_algorithm(huge_dataset_sizes,
hdbscan_boruvka.fit, (), {},
max_time=240, sample_size=4, dataset_dimension=10)
huge_fastcluster_data = benchmark_algorithm(huge_dataset_sizes,
fastcluster.linkage_vector, (), {},
max_time=240, sample_size=2, dataset_dimension=10)
sns.regplot(x='x', y='y', data=huge_k_means_data, order=2,
label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_dbscan_data, order=2,
label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_scipy_k_means_data, order=2,
label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_hdbscan_data, order=2,
label='HDBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_fastcluster_data, order=2,
label='Fastcluster', x_estimator=np.mean)
plt.gca().axis([0, 200000, 0, 240])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of K-Means and DBSCAN')
plt.legend()
Explanation: If we're looking for scaling we can write off the scipy single linkage implementation -- even if we didn't hit the RAM limit, the $O(n^2)$ scaling is going to quickly catch up with us. Fastcluster has the same asymptotic scaling, but is heavily optimized to bring the constant down much lower -- at this point it is still keeping close to the faster algorithms. Its asymptotics will still catch up with it eventually, however.
In practice this is going to mean that for larger datasets you are going to be very constrained in what algorithms you can apply: if you get enough datapoints only K-Means, DBSCAN, and HDBSCAN will be left. This is somewhat disappointing, particularly as K-Means is not a very good clustering algorithm, especially for exploratory data analysis.
With this in mind it is worth looking at how these last several implementations perform at much larger sizes, to see, for example, when fastcluster's asymptotic complexity starts to pull it away.
Comparison of high performance implementations
At this point we can scale out to 200000 datapoints easily enough, so let's push things at least that far so we can start to really see scaling effects.
End of explanation
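# Back-of-the-envelope check (sketch) of why the asymptotics dominate at this
# scale: going from 50000 to 200000 points multiplies an O(n^2) cost by 16x,
# but an O(n log n) cost by only about 4.5x.
n1, n2 = 50000.0, 200000.0
print("O(n^2) growth factor:     %.1fx" % ((n2 / n1) ** 2))
print("O(n log n) growth factor: %.1fx" % ((n2 * np.log(n2)) / (n1 * np.log(n1))))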
import statsmodels.formula.api as sm
time_samples = [1000, 2000, 5000, 10000, 25000, 50000, 75000, 100000, 250000, 500000, 750000,
1000000, 2500000, 5000000, 10000000, 50000000, 100000000, 500000000, 1000000000]
def get_timing_series(data, quadratic=True):
if quadratic:
data['x_squared'] = data.x**2
model = sm.ols('y ~ x + x_squared', data=data).fit()
predictions = [model.params.dot([1.0, i, i**2]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
else: # assume n log(n)
data['xlogx'] = data.x * np.log(data.x)
model = sm.ols('y ~ x + xlogx', data=data).fit()
predictions = [model.params.dot([1.0, i, i*np.log(i)]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
Explanation: Now some differences become clear. The asymptotic complexity starts to kick in, with fastcluster failing to keep up. In turn HDBSCAN and DBSCAN, while having sub-$O(n^2)$ complexity, can't achieve $O(n \log(n))$ at this dataset dimension, and start to curve upward precipitously. Finally it demonstrates again how much of a difference implementation can make: the sklearn implementation of K-Means is far better than the scipy implementation. Since HDBSCAN clustering is a lot better than K-Means (unless you have good reasons to assume that the clusters partition your data and are all drawn from Gaussian distributions) and the scaling is still pretty good I would suggest that unless you have a truly stupendous amount of data you wish to cluster then the HDBSCAN implementation is a good choice.
But should I get a coffee?
So we know which implementations scale and which don't; a more useful thing to know in practice is, given a dataset, what can I run interactively? What can I run while I go and grab some coffee? How about a run over lunch? What if I'm willing to wait until I get in tomorrow morning? Each of these represent significant breaks in productivity -- once you aren't working interactively anymore your productivity drops measurably, and so on.
We can build a table for this. To start we'll need to be able to approximate how long a given clustering implementation will take to run. Fortunately we already gathered a lot of that data; if we load up the statsmodels package we can fit the data (with a quadratic or $n\log n$ fit depending on the implementation; DBSCAN and HDBSCAN get caught here, since while they are under $O(n^2)$ scaling, they don't have an easily described model, so I'll model them as $n^2$ for now) and use the resulting model to make our predictions. Obviously this has some caveats: if you fill your RAM with a distance matrix your runtime isn't going to fit the curve.
I've hand built a time_samples list to give a reasonable set of potential data sizes that are nice and human readable. After that we just need a function to fit and build the curves.
End of explanation
ap_timings = get_timing_series(ap_data)
spectral_timings = get_timing_series(spectral_data)
agg_timings = get_timing_series(agg_data)
debacl_timings = get_timing_series(debacl_data)
fastclust_timings = get_timing_series(large_fastclust_data.ix[:10,:].copy())
scipy_single_timings = get_timing_series(large_scipy_single_data.ix[:10,:].copy())
hdbscan_boruvka = get_timing_series(huge_hdbscan_data, quadratic=True)
#scipy_k_means_timings = get_timing_series(huge_scipy_k_means_data, quadratic=False)
dbscan_timings = get_timing_series(huge_dbscan_data, quadratic=True)
k_means_timings = get_timing_series(huge_k_means_data, quadratic=False)
timing_data = pd.concat([ap_timings, spectral_timings, agg_timings, debacl_timings,
scipy_single_timings, fastclust_timings, hdbscan_boruvka,
dbscan_timings, k_means_timings
], axis=1)
timing_data.columns=['AffinityPropagation', 'Spectral', 'Agglomerative',
'DeBaCl', 'ScipySingleLinkage', 'Fastcluster',
'HDBSCAN', 'DBSCAN', 'SKLearn KMeans'
]
def get_size(series, max_time):
return series.index[series < max_time].max()
datasize_table = pd.concat([
timing_data.apply(get_size, max_time=30),
timing_data.apply(get_size, max_time=300),
timing_data.apply(get_size, max_time=3600),
timing_data.apply(get_size, max_time=8*3600)
], axis=1)
datasize_table.columns=('Interactive', 'Get Coffee', 'Over Lunch', 'Overnight')
datasize_table
Explanation: Now we run that for each of our pre-existing datasets to extrapolate out predicted performance on the relevant dataset sizes. A little pandas wrangling later and we've produced a table of roughly how large a dataset you can tackle in each time frame with each implementation. I had to leave out the scipy KMeans timings because the noise in timing results caused the model to be unrealistic at larger data sizes. Note how the $O(n\log n)$ algorithms utterly dominate here. In the meantime, for medium-sized datasets you can still get quite a lot done with HDBSCAN.
End of explanation |
2,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculates and plots the NIWA SOI
The NIWA SOI is calculated using the Troup method, where the climatological period is taken to be 1941-2010
Step1: imports
Step2: defines a function to get the BoM SLP data for Tahiti or Darwin
Step3: set up the paths
Step4: set up the plotting parameters
Step5: set up proxies
Step6: preliminary
Step7: Get the data for Tahiti
Step8: Get the data for Darwin
Step9: defines climatological period here
Step10: calculates the climatology
Step11: Calculates the SOI
Step12: writes the CSV file
Step13: stacks everything and sets a Datetime index
Step14: choose the period of display
Step15: 3 months rolling mean, and some data munging
Step16: plots the SOI, lots of boilerplate here
Step17: saves the figure | Python Code:
%matplotlib inline
Explanation: Calculates and plots the NIWA SOI
The NIWA SOI is calculated using the Troup method, where the climatological period is taken to be 1941-2010:
Thus, if T and D are the monthly pressures at Tahiti and Darwin, respectively, and Tc and Dc the climatological monthly pressures, then:
SOI = [ (T – Tc) – (D – Dc) ] / [ StDev (T – D) ]
So the numerator is the anomalous Tahiti-Darwin difference for the month in question, and the denominator is the standard deviation of
the Tahiti-Darwin differences for that month over the 1941-2010 climatological period. I then round the answer to the nearest tenth
(ie, 1 decimal place).
End of explanation
import os
import sys
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from numpy import ma
import urllib2
import requests
from matplotlib.dates import YearLocator, MonthLocator, DateFormatter
from dateutil import parser as dparser
from datetime import datetime, timedelta
import subprocess
Explanation: imports
End of explanation
def get_BOM_MSLP(station='tahiti'):
url = "ftp://ftp.bom.gov.au/anon/home/ncc/www/sco/soi/{}mslp.html".format(station)
r = urllib2.urlopen(url)
if r.code == 200:
print("streaming MSLP data for {} successful\n".format(station))
else:
print("!!! unable to stream MSLP data for {}\n".format(station))
sys.exit(1)
data = r.readlines()
r.close()
fout = open('./{}_text'.format(station), 'w')
if station == 'tahiti':
data = data[15:-3]
else:
data = data[14:-3]
fout.writelines(data)
fout.close()
data = pd.read_table('./{}_text'.format(station),sep='\s*', \
engine='python', na_values='*', index_col=['Year'])
subprocess.Popen(["rm {}*".format(station)], shell=True, stdout=True).communicate()
return data
Explanation: defines a function to get the BoM SLP data for Tahiti or Darwin
End of explanation
# figure
fpath = os.path.join(os.environ['HOME'], 'operational/ICU/indices/figures')
# csv file
opath = os.path.join(os.environ['HOME'], 'operational/ICU/indices/data')
Explanation: set up the paths
End of explanation
years = YearLocator()
months = MonthLocator()
mFMT = DateFormatter('%b')
yFMT = DateFormatter('\n\n%Y')
mpl.rcParams['xtick.labelsize'] = 12
mpl.rcParams['ytick.labelsize'] = 12
mpl.rcParams['axes.titlesize'] = 14
mpl.rcParams['xtick.direction'] = 'out'
mpl.rcParams['ytick.direction'] = 'out'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['xtick.minor.size'] = 2
Explanation: set up the plotting parameters
End of explanation
proxies = {}
#proxies['http'] = 'url:port'
#proxies['https'] = 'url:port'
#proxies['ftp'] = 'url:port'
### use urllib2 to open remote http files
urllib2proxy = urllib2.ProxyHandler(proxies)
opener = urllib2.build_opener(urllib2proxy)
urllib2.install_opener(opener)
Explanation: set up proxies
End of explanation
url = "http://www.bom.gov.au/climate/current/soihtm1.shtml"
r = requests.get(url, proxies=proxies)
urlcontent = r.content
date_update = urlcontent[urlcontent.find("Next SOI update expected:"):\
urlcontent.find("Next SOI update expected:")+60]
date_update = date_update.split("\n")[0]
print date_update
print(10*'='+'\n')
Explanation: preliminary: we get the date for which the next update is likely to be made available
End of explanation
tahitidf = get_BOM_MSLP(station='tahiti')
Explanation: Get the data for Tahiti
End of explanation
darwindf = get_BOM_MSLP(station='darwin')
Explanation: Get the data for Darwin
End of explanation
clim_start = 1941
clim_end = 2010
clim = "{}_{}".format(clim_start, clim_end)
Explanation: defines climatological period here
End of explanation
tahiti_cli = tahitidf.loc[clim_start:clim_end,:]
darwin_cli = darwindf.loc[clim_start:clim_end,:]
tahiti_mean = tahiti_cli.mean(0)
darwin_mean = darwin_cli.mean(0)
Explanation: calculates the climatology
End of explanation
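# Troup SOI: anomalous (Tahiti - Darwin) pressure difference, normalised by the
# climatological standard deviation of the (Tahiti - Darwin) difference.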
soi = ((tahitidf - tahiti_mean) - (darwindf - darwin_mean)) / ((tahiti_cli - darwin_cli).std(0))
soi = np.round(soi, 1)
soi.tail()
Explanation: Calculates the SOI
End of explanation
soi.to_csv(os.path.join(opath, "NICO_NIWA_SOI_{}.csv".format(clim)))
Explanation: writes the CSV file
End of explanation
ts_soi = pd.DataFrame(soi.stack())
dates = []
for i in xrange(len(ts_soi)):
dates.append(dparser.parse("{}-{}-1".format(ts_soi.index.get_level_values(0)[i], ts_soi.index.get_level_values(1)[i])))
ts_soi.index = dates
ts_soi.columns = [['soi']]
ts_soi.tail()
Explanation: stacks everything and sets a Datetime index
End of explanation
ts_soi = ts_soi.truncate(before="2012/1/1")
Explanation: choose the period of display
End of explanation
ts_soi[['soirm']] = pd.rolling_mean(ts_soi, 3, center=True)
dates = np.array(ts_soi.index.to_pydatetime())
widths=np.array([(dates[j+1]-dates[j]).days for j in range(len(dates)-1)] + [30])
### middle of the month for the 3 month running mean plot
datesrm = np.array([x + timedelta(days=15) for x in dates])
soi = ts_soi['soi'].values
soim = ts_soi['soirm'].values
Explanation: 3 months rolling mean, and some data munging
End of explanation
fig, ax = plt.subplots(figsize=(14,7))
fig.subplots_adjust(bottom=0.15)
ax.bar(dates[soi>=0],soi[soi>=0], width=widths[soi>=0], facecolor='steelblue', \
alpha=.8, edgecolor='steelblue', lw=2)
ax.bar(dates[soi<0],soi[soi<0], width=widths[soi<0], facecolor='coral', \
alpha=.8, edgecolor='coral', lw=2)
ax.plot(datesrm,soim, lw=3, color='k', label='3-mth mean')
ax.xaxis.set_minor_locator(months)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_minor_formatter(mFMT)
ax.xaxis.set_major_formatter(yFMT)
ax.axhline(0, color='k')
#ax.set_frame_on(False)
labels = ax.get_xminorticklabels()
for label in labels:
label.set_fontsize(14)
label.set_rotation(90)
labels = ax.get_xmajorticklabels()
for label in labels:
label.set_fontsize(18)
labels = ax.get_yticklabels()
for label in labels:
label.set_fontsize(18)
ax.grid(linestyle='--')
ax.xaxis.grid(True, which='both')
ax.legend(loc=3, fancybox=True)
ax.set_ylim(-3., 3.)
ax.set_ylabel('Monthly SOI (NIWA)', fontsize=14, backgroundcolor="w")
ax.text(dates[0],3.2,"NIWA SOI", fontsize=24, fontweight='bold')
ax.text(dates[-5], 2.8, "%s NIWA Ltd." % (u'\N{Copyright Sign}'))
textBm = "%s = %+4.1f" % (dates[-1].strftime("%B %Y"), soi[-1])
textBs = "%s to %s = %+4.1f" % (dates[-3].strftime("%b %Y"), dates[-1].strftime("%b %Y"), soi[-3:].mean())
ax.text(datesrm[8],3.2,"Latest values: %s, %s" % (textBm, textBs), fontsize=16)
ax.text(datesrm[0],2.8,date_update, fontsize=14)
Explanation: plots the SOI, lots of boilerplate here
End of explanation
fig.savefig(os.path.join(fpath, "NICO_NIWA_SOI_{}clim.png".format(clim)), dpi=200)
Explanation: saves the figure
End of explanation |
2,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Import packages
Importing the necessary packages, including the standard TFX component classes
Step2: Palmer Penguins example pipeline
Download Example Data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Palmer Penguins dataset which is also used in other
TFX examples.
There are four numeric features in this dataset
Step3: Run TFX Components
In the cells that follow, we create TFX components one-by-one and generate examples using the ExampleGen component.
Step4: As seen above, .selected_features contains the features selected after running the component with the specified parameters.
To get the info about updated Example artifact, one can view it as follows | Python Code:
!pip install -U tfx
# getting the code directly from the repo
x = !pwd
if 'feature_selection' not in str(x):
!git clone -b main https://github.com/tensorflow/tfx-addons.git
%cd tfx-addons/tfx_addons/feature_selection
Explanation: <a href="https://colab.research.google.com/github/deutranium/tfx-addons/blob/main/tfx_addons/feature_selection/example/Palmer_Penguins_example_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
TFX Feature Selection Component
You may find the source code for the same here
This example demonstrates the use of the feature selection component. This project allows the user to select different algorithms for performing feature selection on dataset artifacts in TFX pipelines
Base code taken from: https://github.com/tensorflow/tfx/blob/master/docs/tutorials/tfx/components_keras.ipynb
Setup
Install TFX
Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).
End of explanation
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
import importlib
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
import importlib
from tfx.components import CsvExampleGen
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
# importing the feature selection component
from component import FeatureSelection
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
Explanation: Import packages
Importing the necessary packages, including the standard TFX component classes
End of explanation
# getting the dataset
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
Explanation: Palmer Penguins example pipeline
Download Example Data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Palmer Penguins dataset which is also used in other
TFX examples.
There are four numeric features in this dataset:
culmen_length_mm
culmen_depth_mm
flipper_length_mm
body_mass_g
All features were already normalized to have range [0,1]. We will build a pipeline that selects 2 features to be eliminated from the dataset in order to improve the performance of the model in predicting the species of penguins.
End of explanation
context = InteractiveContext()
#create and run exampleGen component
example_gen = CsvExampleGen(input_base=_data_root )
context.run(example_gen)
#create and run statisticsGen component
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
# using the feature selection component
#feature selection component
feature_selector = FeatureSelection(orig_examples = example_gen.outputs['examples'],
module_file='example.modules.penguins_module')
context.run(feature_selector)
# Display Selected Features
context.show(feature_selector.outputs['feature_selection']._artifacts[0])
Explanation: Run TFX Components
In the cells that follow, we create TFX components one-by-one and generates example using exampleGen component.
End of explanation
context.show(feature_selector.outputs['updated_data']._artifacts[0])
Explanation: As seen above, .selected_features contains the features selected after running the component with the speified parameters.
To get the info about updated Example artifact, one can view it as follows:
End of explanation |
2,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
There are many talks tomorrow at the CSV Conf. I want to cluster the talks
Step1: Document representation
Step2: Preprocess text
Step3: Cluster the talks
I refer to Jörn Hees (2015) to generate the hierarchical clustering and dendrogram using scipy.cluster.hierarchy.dendrogram. | Python Code:
from bs4 import BeautifulSoup
import requests
import pandas as pd
website_to_parse = "https://csvconf.com/speakers/"
# Save HTML to soup
html_data = requests.get(website_to_parse).text
soup = BeautifulSoup(html_data, "html5lib")
doc = soup.find_all("table", attrs={"class": "speakers"})[1]
names = doc.find_all("span", attrs={"class": "name"})
names = [t.getText().strip() for t in names]
titles = doc.find_all("p", attrs={"class": "title"})
titles = [t.getText().strip() for t in titles]
abstracts = doc.find_all("p", attrs={"class": "abstract"})
abstracts = [t.getText().strip() for t in abstracts]
print(len(names), len(titles), len(abstracts))
Explanation: There are many talks tomorrow at the CSV Conf. I want to cluster the talks:
Get html
Get talk titles
Match titles with description (to get more text)
Model with TF-IDF
Find clusters
Get HTML
End of explanation
df = pd.DataFrame.from_dict({
'names':names,
'titles':titles,
'abstracts':abstracts})
# Combine text of title and abstract
df['document'] = df['titles'] + " " + df['abstracts']
# Add index
df['index'] = df.index
Explanation: Document representation
End of explanation
import sys
sys.path.append("/Users/csiu/repo/kick/src/python")
import sim_doc as sim_doc
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.utils.extmath import randomized_svd
## Preprocess
_ = sim_doc.preprocess_data(df)
## TF-IDF
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(df['doc_processed'])
Explanation: Preprocess text
End of explanation
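As an optional sanity check before clustering, we can peek at the highest-weighted TF-IDF terms per talk (get_feature_names_out needs scikit-learn 1.0+; older versions use get_feature_names):
import numpy as np

terms = vectorizer.get_feature_names_out()
weights = X.toarray()
for i in range(3):  # first three talks
    top = np.argsort(weights[i])[::-1][:5]
    print(df['titles'][i], '->', [terms[j] for j in top])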
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
# generate the linkage matrix
Z = linkage(X.toarray(), 'ward')
# calculate full dendrogram
plt.figure(figsize=(25, 4))
plt.title('Hierarchical Clustering of CSV,Conf,V3 Non-Keynote talks')
plt.xlabel('')
plt.ylabel('Distance')
dn = dendrogram(
Z,
leaf_rotation=270, # rotates the x axis labels
leaf_font_size=12, # font size for the x axis labels
labels = df["titles"].tolist(),
color_threshold=1.45, # where to cut for clusters
above_threshold_color='#bcbddc'
)
plt.show()
Explanation: Cluster the talks
I refer to Jörn Hees (2015) to generate the hierarchical clustering and dendrogram using scipy.cluster.hierarchy.dendrogram.
End of explanation |
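A possible follow-up, not in the original post: cut the linkage at the same threshold used to color the dendrogram and attach flat cluster labels to each talk.
from scipy.cluster.hierarchy import fcluster

labels = fcluster(Z, t=1.45, criterion='distance')
df['cluster'] = labels
for cluster_id, titles in df.groupby('cluster')['titles']:
    print(cluster_id, list(titles)[:3])  # peek at a few titles per cluster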
2,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 33. Nonparametric permutation testing
Step1: Figure 33.1
Step2: 33.3
Using the same fig/data as 33.1
Step3: 33.5/6
These are generated in chap 34.
33.8
Step5: 33.9
Rather than do perm testing on the spectrogram I'll just write the code below using the data we generated above. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy.stats import norm
from scipy.signal import convolve2d
import skimage.measure
Explanation: Chapter 33. Nonparametric permutation testing
End of explanation
x = np.arange(-5,5, .01)
pdf = norm.pdf(x)
data = np.random.randn(1000)
fig, ax = plt.subplots(1,2, sharex='all')
ax[0].plot(x, pdf)
ax[0].set(ylabel='PDF', xlabel='Statistical value')
ax[1].hist(data, bins=50)
ax[1].set(ylabel='counts')
fig.tight_layout()
Explanation: Figure 33.1
End of explanation
print(f'p_n = {sum(data>2)/1000:.3f}')
print(f'p_z = {1-norm.cdf(2):.3f}')
Explanation: 33.3
Using the same fig/data as 33.1
End of explanation
np.random.seed(1)
# create random smoothed map
xi, yi = np.meshgrid(np.arange(-10, 11), np.arange(-10, 11))
zi = xi**2 + yi**2
zi = 1 - (zi/np.max(zi))
map = convolve2d(np.random.randn(100,100), zi,'same')
# threshold at arb value
mapt = map.copy()
mapt[(np.abs(map)<map.flatten().std()*2)] = 0
# turn binary
bw_map = mapt!=0
conn_comp = skimage.measure.label(bw_map)
fig, ax = plt.subplots(1,2,sharex='all',sharey='all')
ax[0].imshow(mapt)
ax[1].imshow(conn_comp)
print(f'There are {len(np.unique(conn_comp))} unique blobs')
Explanation: 33.5/6
These are generated in chap 34.
33.8
End of explanation
def max_blob_size(img):
"""helper function to compute max blob size"""
bw_img = img != 0
blobbed = skimage.measure.label(bw_img)
num_blobs = len(np.unique(blobbed))
max_size = max([np.sum(blobbed==i) for i in range(1, num_blobs)])
return max_size
n_perms = 1000
max_sizes = []
for _ in range(n_perms):
mapt_flat = mapt.flatten()
rand_flat = np.random.permutation(mapt_flat)
mapt_permuted = rand_flat.reshape(mapt.shape)
max_sizes.append(max_blob_size(mapt_permuted))
plt.hist(max_sizes, label='null')
plt.vlines(max_blob_size(mapt), 0, 200, label='true', color='red')
plt.legend()
Explanation: 33.9
Rather than do perm testing on the spectrogram I'll just write the code below using the data we generated above.
End of explanation |
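A small addition to the original: the permutation p-value for the observed max blob size, read off the null distribution built above (the +1 correction avoids a p-value of exactly zero).
true_size = max_blob_size(mapt)
pval = (np.sum(np.array(max_sizes) >= true_size) + 1) / (n_perms + 1)
print(f'permutation p-value for the max blob size: {pval:.4f}')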
2,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading file
Step1: Syntax
python
str.split(str=" ", num=string.count(str)).
Parameters
str -- This is any delimeter, by default it is space.
num -- this is number of lines to be made
Return Value
This method returns a list of lines.
Step2: Writing file
Step3: Array to string | Python Code:
filename='LittleRedRidingHood.txt'
with open(filename) as f:
print f.read()
filename='LittleRedRidingHood.txt'
with open(filename) as f:
for line in f:
print line
Explanation: Reading file
End of explanation
line
line.split(" ")
Explanation: Syntax
python
str.split(str=" ", num=string.count(str)).
Parameters
str -- This is any delimeter, by default it is space.
num -- this is number of lines to be made
Return Value
This method returns a list of lines.
End of explanation
filename='hello.txt'
with open(filename,'w') as f:
f.write('Hello World!');
ls
Explanation: Writing file
End of explanation
A=[5,6,7]
s=""
for i in A:
s+= "{} ".format(i)
print s
s
s=""
for i in A:
s+= "%d "%(i)
print s
B=[[1,2,3],[4,5,6],[7,8,9]]
print B
s=""
for row in B:
for column in row:
s+= "%d "%(column)
s+= "\n"
print s
Explanation: Array to string
End of explanation |
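The loops above build the string by repeated concatenation; an idiomatic alternative (added here as a small sketch) is str.join:
A = [5, 6, 7]
print(" ".join(str(i) for i in A))          # 5 6 7

B = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print("\n".join(" ".join(str(c) for c in row) for row in B))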
2,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutions in JAX
JAX provides a number of interfaces to compute convolutions across data, including
Step1: The mode parameter controls how boundary conditions are treated; here we use mode='same' to ensure that the output is the same size as the input.
For more information, see the {func}jax.numpy.convolve documentation, or the documentation associated with the original {func}numpy.convolve function.
Basic N-dimensional Convolution
For N-dimensional convolution, {func}jax.scipy.signal.convolve provides a similar interface to that of {func}jax.numpy.convolve, generalized to N dimensions.
For example, here is a simple approach to de-noising an image based on convolution with a Gaussian filter
Step2: Like in the one-dimensional case, we use mode='same' to specify how we would like edges to be handled. For more information on available options in N-dimensional convolutions, see the {func}jax.scipy.signal.convolve documentation.
General Convolutions
For the more general types of batched convolutions often useful in the context of building deep neural networks, JAX and XLA offer the very general N-dimensional conv_general_dilated function, but it's not very obvious how to use it. We'll give some examples of the common use-cases.
A survey of the family of convolutional operators, a guide to convolutional arithmetic, is highly recommended reading!
Let's define a simple diagonal edge kernel
Step3: And we'll make a simple synthetic image
Step4: lax.conv and lax.conv_with_general_padding
These are the simple convenience functions for convolutions
️⚠️ The convenience lax.conv, lax.conv_with_general_padding helper function assume NCHW images and OIHW kernels.
Step5: Dimension Numbers define dimensional layout for conv_general_dilated
The important argument is the 3-tuple of axis layout arguments
Step6: SAME padding, no stride, no dilation
Step7: VALID padding, no stride, no dilation
Step8: SAME padding, 2,2 stride, no dilation
Step9: VALID padding, no stride, rhs kernel dilation ~ Atrous convolution (excessive to illustrate)
Step10: VALID padding, no stride, lhs=input dilation ~ Transposed Convolution
Step11: We can use the last to, for instance, implement transposed convolutions
Step12: 1D Convolutions
You aren't limited to 2D convolutions, a simple 1D demo is below
Step13: 3D Convolutions | Python Code:
import matplotlib.pyplot as plt
from jax import random
import jax.numpy as jnp
import numpy as np
key = random.PRNGKey(1701)
x = jnp.linspace(0, 10, 500)
y = jnp.sin(x) + 0.2 * random.normal(key, shape=(500,))
window = jnp.ones(10) / 10
y_smooth = jnp.convolve(y, window, mode='same')
plt.plot(x, y, 'lightgray')
plt.plot(x, y_smooth, 'black');
Explanation: Convolutions in JAX
JAX provides a number of interfaces to compute convolutions across data, including:
{func}jax.numpy.convolve (also {func}jax.numpy.correlate)
{func}jax.scipy.signal.convolve (also {func}~jax.scipy.signal.correlate)
{func}jax.scipy.signal.convolve2d (also {func}~jax.scipy.signal.correlate2d)
{func}jax.lax.conv_general_dilated
For basic convolution operations, the jax.numpy and jax.scipy operations are usually sufficient. If you want to do more general batched multi-dimensional convolution, the jax.lax function is where you should start.
Basic One-dimensional Convolution
Basic one-dimensional convolution is implemented by {func}jax.numpy.convolve, which provides a JAX interface for {func}numpy.convolve. Here is a simple example of 1D smoothing implemented via a convolution:
End of explanation
from scipy import misc
import jax.scipy as jsp
fig, ax = plt.subplots(1, 3, figsize=(12, 5))
# Load a sample image; compute mean() to convert from RGB to grayscale.
image = jnp.array(misc.face().mean(-1))
ax[0].imshow(image, cmap='binary_r')
ax[0].set_title('original')
# Create a noisy version by adding random Gaussian noise
key = random.PRNGKey(1701)
noisy_image = image + 50 * random.normal(key, image.shape)
ax[1].imshow(noisy_image, cmap='binary_r')
ax[1].set_title('noisy')
# Smooth the noisy image with a 2D Gaussian smoothing kernel.
x = jnp.linspace(-3, 3, 7)
window = jsp.stats.norm.pdf(x) * jsp.stats.norm.pdf(x[:, None])
smooth_image = jsp.signal.convolve(noisy_image, window, mode='same')
ax[2].imshow(smooth_image, cmap='binary_r')
ax[2].set_title('smoothed');
Explanation: The mode parameter controls how boundary conditions are treated; here we use mode='same' to ensure that the output is the same size as the input.
For more information, see the {func}jax.numpy.convolve documentation, or the documentation associated with the original {func}numpy.convolve function.
Basic N-dimensional Convolution
For N-dimensional convolution, {func}jax.scipy.signal.convolve provides a similar interface to that of {func}jax.numpy.convolve, generalized to N dimensions.
For example, here is a simple approach to de-noising an image based on convolution with a Gaussian filter:
End of explanation
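As a quick illustration (not in the original notebook) of how the mode options differ, compare the output lengths for a length-10 signal and a length-3 window:
sig = jnp.ones(10)
win = jnp.ones(3)
for mode in ('full', 'same', 'valid'):
    print(mode, jnp.convolve(sig, win, mode=mode).shape)
# full -> (12,), same -> (10,), valid -> (8,)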
# 2D kernel - HWIO layout
kernel = jnp.zeros((3, 3, 3, 3), dtype=jnp.float32)
kernel += jnp.array([[1, 1, 0],
[1, 0,-1],
[0,-1,-1]])[:, :, jnp.newaxis, jnp.newaxis]
print("Edge Conv kernel:")
plt.imshow(kernel[:, :, 0, 0]);
Explanation: Like in the one-dimensional case, we use mode='same' to specify how we would like edges to be handled. For more information on available options in N-dimensional convolutions, see the {func}jax.scipy.signal.convolve documentation.
General Convolutions
For the more general types of batched convolutions often useful in the context of building deep neural networks, JAX and XLA offer the very general N-dimensional conv_general_dilated function, but it's not very obvious how to use it. We'll give some examples of the common use-cases.
A survey of the family of convolutional operators, a guide to convolutional arithmetic, is highly recommended reading!
Let's define a simple diagonal edge kernel:
End of explanation
# NHWC layout
img = jnp.zeros((1, 200, 198, 3), dtype=jnp.float32)
for k in range(3):
x = 30 + 60*k
y = 20 + 60*k
img = img.at[0, x:x+10, y:y+10, k].set(1.0)
print("Original Image:")
plt.imshow(img[0]);
Explanation: And we'll make a simple synthetic image:
End of explanation
from jax import lax
out = lax.conv(jnp.transpose(img,[0,3,1,2]), # lhs = NCHW image tensor
jnp.transpose(kernel,[3,2,0,1]), # rhs = OIHW conv kernel tensor
(1, 1), # window strides
'SAME') # padding mode
print("out shape: ", out.shape)
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,0,:,:]);
out = lax.conv_with_general_padding(
jnp.transpose(img,[0,3,1,2]), # lhs = NCHW image tensor
jnp.transpose(kernel,[2,3,0,1]), # rhs = IOHW conv kernel tensor
(1, 1), # window strides
((2,2),(2,2)), # general padding 2x2
(1,1), # lhs/image dilation
(1,1)) # rhs/kernel dilation
print("out shape: ", out.shape)
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,0,:,:]);
Explanation: lax.conv and lax.conv_with_general_padding
These are the simple convenience functions for convolutions
️⚠️ The convenience lax.conv, lax.conv_with_general_padding helper function assume NCHW images and OIHW kernels.
End of explanation
dn = lax.conv_dimension_numbers(img.shape, # only ndim matters, not shape
kernel.shape, # only ndim matters, not shape
('NHWC', 'HWIO', 'NHWC')) # the important bit
print(dn)
Explanation: Dimension Numbers define dimensional layout for conv_general_dilated
The important argument is the 3-tuple of axis layout arguments:
(Input Layout, Kernel Layout, Output Layout)
- N - batch dimension
- H - spatial height
- W - spatial width
- C - channel dimension
- I - kernel input channel dimension
- O - kernel output channel dimension
⚠️ To demonstrate the flexibility of dimension numbers we choose a NHWC image and HWIO kernel convention for lax.conv_general_dilated below.
End of explanation
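For contrast, a small added sketch: the same helper can describe the NCHW/OIHW convention assumed by the lax.conv wrappers shown earlier.
dn_nchw = lax.conv_dimension_numbers(
    jnp.transpose(img, [0, 3, 1, 2]).shape,    # NCHW image tensor
    jnp.transpose(kernel, [3, 2, 0, 1]).shape, # OIHW kernel tensor
    ('NCHW', 'OIHW', 'NCHW'))
print(dn_nchw)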
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
'SAME', # padding mode
(1,1), # lhs/image dilation
(1,1), # rhs/kernel dilation
dn) # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape)
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,:,:,0]);
Explanation: SAME padding, no stride, no dilation
End of explanation
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
'VALID', # padding mode
(1,1), # lhs/image dilation
(1,1), # rhs/kernel dilation
dn) # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, "DIFFERENT from above!")
print("First output channel:")
plt.figure(figsize=(10,10))
plt.imshow(np.array(out)[0,:,:,0]);
Explanation: VALID padding, no stride, no dilation
End of explanation
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(2,2), # window strides
'SAME', # padding mode
(1,1), # lhs/image dilation
(1,1), # rhs/kernel dilation
dn) # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, " <-- half the size of above")
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
Explanation: SAME padding, 2,2 stride, no dilation
End of explanation
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
'VALID', # padding mode
(1,1), # lhs/image dilation
(12,12), # rhs/kernel dilation
dn) # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape)
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
Explanation: VALID padding, no stride, rhs kernel dilation ~ Atrous convolution (excessive to illustrate)
End of explanation
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1), # window strides
((0, 0), (0, 0)), # padding mode
(2,2), # lhs/image dilation
(1,1), # rhs/kernel dilation
dn) # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, "<-- larger than original!")
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
Explanation: VALID padding, no stride, lhs=input dilation ~ Transposed Convolution
End of explanation
# The following is equivalent to tensorflow:
# N,H,W,C = img.shape
# out = tf.nn.conv2d_transpose(img, kernel, (N,2*H,2*W,C), (1,2,2,1))
# transposed conv = 180deg kernel roation plus LHS dilation
# rotate kernel 180deg:
kernel_rot = jnp.rot90(jnp.rot90(kernel, axes=(0,1)), axes=(0,1))
# need a custom output padding:
padding = ((2, 1), (2, 1))
out = lax.conv_general_dilated(img, # lhs = image tensor
kernel_rot, # rhs = conv kernel tensor
(1,1), # window strides
padding, # padding mode
(2,2), # lhs/image dilation
(1,1), # rhs/kernel dilation
dn) # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape, "<-- transposed_conv")
plt.figure(figsize=(10,10))
print("First output channel:")
plt.imshow(np.array(out)[0,:,:,0]);
Explanation: We can use the last to, for instance, implement transposed convolutions:
End of explanation
# 1D kernel - WIO layout
kernel = jnp.array([[[1, 0, -1], [-1, 0, 1]],
[[1, 1, 1], [-1, -1, -1]]],
dtype=jnp.float32).transpose([2,1,0])
# 1D data - NWC layout
data = np.zeros((1, 200, 2), dtype=jnp.float32)
for i in range(2):
for k in range(2):
x = 35*i + 30 + 60*k
data[0, x:x+30, k] = 1.0
print("in shapes:", data.shape, kernel.shape)
plt.figure(figsize=(10,5))
plt.plot(data[0]);
dn = lax.conv_dimension_numbers(data.shape, kernel.shape,
('NWC', 'WIO', 'NWC'))
print(dn)
out = lax.conv_general_dilated(data, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,), # window strides
'SAME', # padding mode
(1,), # lhs/image dilation
(1,), # rhs/kernel dilation
dn) # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape)
plt.figure(figsize=(10,5))
plt.plot(out[0]);
Explanation: 1D Convolutions
You aren't limited to 2D convolutions, a simple 1D demo is below:
End of explanation
import matplotlib as mpl
# 3D kernel (a fixed Laplacian-like filter) - HWDIO layout
kernel = jnp.array([
[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
[[0, -1, 0], [-1, 0, -1], [0, -1, 0]],
[[0, 0, 0], [0, 1, 0], [0, 0, 0]]],
dtype=jnp.float32)[:, :, :, jnp.newaxis, jnp.newaxis]
# 3D data - NHWDC layout
data = jnp.zeros((1, 30, 30, 30, 1), dtype=jnp.float32)
x, y, z = np.mgrid[0:1:30j, 0:1:30j, 0:1:30j]
data += (jnp.sin(2*x*jnp.pi)*jnp.cos(2*y*jnp.pi)*jnp.cos(2*z*jnp.pi))[None,:,:,:,None]
print("in shapes:", data.shape, kernel.shape)
dn = lax.conv_dimension_numbers(data.shape, kernel.shape,
('NHWDC', 'HWDIO', 'NHWDC'))
print(dn)
out = lax.conv_general_dilated(data, # lhs = image tensor
kernel, # rhs = conv kernel tensor
(1,1,1), # window strides
'SAME', # padding mode
(1,1,1), # lhs/image dilation
(1,1,1), # rhs/kernel dilation
dn) # dimension_numbers
print("out shape: ", out.shape)
# Make some simple 3d density plots:
from mpl_toolkits.mplot3d import Axes3D
def make_alpha(cmap):
my_cmap = cmap(jnp.arange(cmap.N))
my_cmap[:,-1] = jnp.linspace(0, 1, cmap.N)**3
return mpl.colors.ListedColormap(my_cmap)
my_cmap = make_alpha(plt.cm.viridis)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.scatter(x.ravel(), y.ravel(), z.ravel(), c=data.ravel(), cmap=my_cmap)
ax.axis('off')
ax.set_title('input')
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.scatter(x.ravel(), y.ravel(), z.ravel(), c=out.ravel(), cmap=my_cmap)
ax.axis('off')
ax.set_title('3D conv output');
Explanation: 3D Convolutions
End of explanation |
2,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gene tree estimation error in sliding windows
What size window is too big such that concatenation washes away the differences among genealogies for MSC-based analyses (i.e., ASTRAL, SNAQ).
Step1: Set up a phylogenetic model
Step2: Simulate a chromosome
Step3: Add missing data as spacers between loci and allele dropout
Step4: Write data to SEQS HDF5 format
Step5: Reformat all genealogies for comparisons with inferred gene trees
The true gene trees will not distinguish among haplotypes, so we will drop one haplotype from each tip, and we will also multiply branch lengths by the mutation rate so that edge lengths are in units of mutations.
Step6: Save record of the TRUE genealogy at each position
Step7: Visualize tree variation
Step8: Infer gene trees in sliding windows along the chromosome
Step9: Infer a species tree from inferred gene trees
Step10: Infer a species tree from TRUE gene trees
Step11: Measure RF distance between trees
The normalized RF distance. Larger value means trees are more different.
Step12: Visualize gene tree error
Some kind of sliding plot ... | Python Code:
import toytree
import ipcoal
import numpy as np
import ipyrad.analysis as ipa
Explanation: Gene tree estimation error in sliding windows
What size window is too big such that concatenation washes away the differences among genealogies for MSC-based analyses (i.e., ASTRAL, SNAQ).
End of explanation
tree = toytree.rtree.unittree(ntips=12, treeheight=12e6, seed=123)
tree.draw(ts='p');
Explanation: Set up a phylogenetic model
End of explanation
model = ipcoal.Model(
tree=tree,
Ne=1e6,
nsamples=2,
mut=1e-08,
recomb=1e-09,
)
model.sim_loci(nloci=1, nsites=1e5)
Explanation: Simulate a chromosome
End of explanation
# assumed space between RAD tags
SPACER = 5000
CUTLEN = 5
# iterate over each RAD tag
for i in range(0, model.seqs.shape[2], SPACER):
# mask. [0-300=DATA][300-5300=SPACER]
model.seqs[:, :, i+300: i+SPACER] = 9
# allele dropout
cseqs = model.seqs[:, :, i:i+CUTLEN]
aseqs = model.ancestral_seq[0, i:i+CUTLEN]
mask = np.any(cseqs != aseqs, axis=2)[0]
model.seqs[:, mask, i:i+300] = 9
# check that data looks right
model.draw_seqview(0, 250, 350, height=800);
Explanation: Add missing data as spacers between loci and allele dropout
End of explanation
model.write_loci_to_hdf5(name="test", outdir="/tmp", diploid=True)
Explanation: Write data to SEQS HDF5 format
End of explanation
def convert_genealogy_to_gene_tree(gtree, mu=1e-8):
# multiply by mutation rate
gtree = gtree.set_node_values(
feature="dist",
values={i: j.dist * mu for (i, j) in gtree.idx_dict.items()},
)
# drop the -1 haplotype from each
gtree = gtree.drop_tips([i for i in gtree.get_tip_labels() if "-1" in i])
# drop -0 from names of remaining samples
gtree = gtree.set_node_values(
feature="name",
values={i: j.name[:-2] for (i, j) in gtree.idx_dict.items()},
)
return gtree
# convert genealogies to be gene-tree-like
model.df.genealogy = [
convert_genealogy_to_gene_tree(toytree.tree(i)).write()
for i in model.df.genealogy
]
Explanation: Reformat all genealogies for comparisons with inferred gene trees
The true gene trees will not distinguish among haplotypes, so we will drop one haplotype from each tip, and we will also multiply branch lengths by the mutation rate so that edge lengths are in units of mutations.
End of explanation
model.df.to_csv("/tmp/test.csv")
Explanation: Save record of the TRUE genealogy at each position
End of explanation
# show the first few trees
toytree.mtree(model.df.genealogy[:10]).draw(2, 4, height=500);
Explanation: Visualize tree variation
End of explanation
# raxml inference in sliding windows
ts = ipa.treeslider("/tmp/test.seqs.hdf5", window_size=5e4, slide_size=5e4)
ts.run(auto=True, force=True)
# inferred tree is unrooted
toytree.tree(ts.tree_table.tree[0]).draw();
Explanation: Infer gene trees in sliding windows along the chromosome
End of explanation
# infer sptree from inferred gene trees from windows
ast = ipa.astral(ts.tree_table)
ast.run()
toytree.tree(ast.tree).draw();
Explanation: Infer a species tree from inferred gene trees
End of explanation
# sample one tree every 5000bp
gtrees = []
# select gtree every SPACER LEN bp (THIS IS THE SIZE OF WINDOWS)
for point in range(0, model.df.end.max(), 5000):
# get first tree with start > point
gtree = model.df.loc[model.df.start >= point, "genealogy"].iloc[0]
gtrees.append(gtree)
import ipyrad.analysis as ipa
ast = ipa.astral(gtrees)
ast.run()
ast.tree.draw();
Explanation: Infer a species tree from TRUE gene trees
End of explanation
# get two toytrees to compare
tree1 = toytree.tree(model.df.genealogy[0])
tree2 = toytree.tree(model.df.genealogy[100])
# calculate normalized RF distance
rf, rfmax, _, _, _, _, _ = tree1.treenode.robinson_foulds(tree2.treenode)
print(rf, rfmax, rf / rfmax)
# unresolved tree example RF calc
unresolved = tree1.collapse_nodes(min_dist=5e6)
rf, rfmax, _, _, _, _, _ = unresolved.treenode.robinson_foulds(tree2.treenode, unrooted_trees=True)
print(rf, rfmax, rf / rfmax)
Explanation: Measure RF distance between trees
The normalized RF distance. Larger value means trees are more different.
End of explanation
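A hedged sketch of the per-window comparison this sets up, assuming ts.tree_table exposes start, end and tree columns as used above: compute the normalized RF distance between each inferred window tree and the true genealogy at the window midpoint.
rf_per_window = []
for idx in ts.tree_table.index:
    mid = (ts.tree_table.start[idx] + ts.tree_table.end[idx]) / 2
    # true genealogy spanning the window midpoint
    sel = (model.df.start <= mid) & (model.df.end > mid)
    true_tree = toytree.tree(model.df.loc[sel, "genealogy"].iloc[0])
    inferred = toytree.tree(ts.tree_table.tree[idx])
    rf, rfmax, _, _, _, _, _ = true_tree.treenode.robinson_foulds(
        inferred.treenode, unrooted_trees=True)
    rf_per_window.append(rf / rfmax)
print(rf_per_window[:5])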
chrom ----------------------------------------------------------------
windows --------- ---------- ------------
RAD loc - - - - - -
gt error --- --- --- ---
# separate figure
windowsize x spptree error (astral)
Explanation: Visualize gene tree error
Some kind of sliding plot ...
End of explanation |
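One possible version of that sliding plot, reusing rf_per_window from the sketch above (still under the same column-name assumptions):
import matplotlib.pyplot as plt

mids = (ts.tree_table.start + ts.tree_table.end) / 2
plt.figure(figsize=(10, 3))
plt.plot(mids, rf_per_window, marker='o')
plt.xlabel('chromosome position (bp)')
plt.ylabel('normalized RF (inferred vs. true)')
plt.title('gene tree estimation error along the chromosome')
plt.show()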
2,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Python-für-Fortgeschrittene-2" data-toc-modified-id="Python-für-Fortgeschrittene-2-1"><span class="toc-item-num">1 </span>Python für Fortgeschrittene 2</a></div><div class="lev2 toc-item"><a href="#Funktionales-Programmieren-I" data-toc-modified-id="Funktionales-Programmieren-I-11"><span class="toc-item-num">1.1 </span>Funktionales Programmieren I</a></div><div class="lev3 toc-item"><a href="#Typen-von-Programmiersprachen
Step1: Python erwartet in bestimmten Kontexten ein iterierbares Objekt, z.B. in der for-Schleife
Step2: Das ist äquivalent zu
Step3: Man kann sich die vollständige Ausgabe eines Iterators ausgeben lassen, wenn man ihn als Parameter der list()- oder tuple() Funktion übergibt.
Step4: Frage
Step5: Und hier die Version mit List Comprehension
Step6: Natürlich kann man den Rückgabewert von List Comprehensions auch in einer Variablen abspeichern.
Step7: Geschachtelte Schleifen
Man kann in list comprehensions auch mehrere geschachtelte for-Schleifen aufrufen
Step8: Und nun als List Comprehension
Step9: <h4>Aufgabe 1</h4>
<p>Ersetzen Sie eine Reihe von Worten durch eine Reihe von Zahlen, die die Anzahl der Vokale anzeigen. Z.B.
Step10: prozedurale Schreibweise
Step11: Aufgabe 2
Verwenden Sie map() um in einer Liste von Worten jedes Wort in Großbuchstaben auszugeben. Diskutieren Sie evtl. Probleme mit einem Nachbarn.
Aufgabe 3 (optional)
Lösen Sie Aufgabe 1 mit map()
filter()
filter(FunktionX, Liste)<br/>
Die Funktion FunktionX wird auf jedes Element der Liste angewandt. Konstruiert einen neuen Iterator, in den die Elemente der Liste aufgenommen werden, für die die FunktionX den Ausgabewert True hat.
<br/>Bsp.
Step12: Aufgabe 4
Verwenden Sie filter, um aus dem folgenden Text eine Wortliste zu erstellen, in der alle Pronomina, Artikel und die Worte "dass", "ist", "nicht", "auch", "und" nicht enthalten sind
Step13: itertools.repeat(iterator, [n]) wiederholt die Elemente in iterator n mal.
Step14: itertools.chain(iterator_1, iterator_2, ...) Erzeugt einen neuen Iterator, in dem die Elemente von iterator_1, _2 usw. aneinander gehängt sind.
Step15: Aufgabe 5
Verknüpfen Sie den Inhalt dreier Dateien zu einem Iterator
Teile der Ausgabe eines Iterators auswählen.
itertools.filterfalse(Prädikat, iterator) ist das Gegenstück zu filter(). Ausgabe enthält alle Elemente, für die das Prädikat falsch ist.
itertools.takewhile(Prädikat, iterator) - gibt solange Elemente aus, wie das Prädikat wahr ist
itertools.dropwhile(Prädikat, iter)entfernt alle Elemente, solange das Prädikat wahr ist. Gibt dann den Rest aus.
itertools.compress(Daten, Selektoren) Nimmt zweei Iteratoren un dgibt nur die Elemente des ersten (Daten) zurück, für die das entsprechende Element im zweiten (Selektoren) wahr ist. Stoppt, wenn einer der Iteratoren erschöpft ist.
Iteratoren kombinieren
itertools.combinations(Iterator, r) gibt alle r-Tuple Kombinationen der Elemente des Iterators wieder. Beispiel
Step16: itertools.permutations(iterator, r) gibt alle Permutationen aller Elemente unabhängig von der Reihenfolge in Iterator wieder
Step17: Aufgabe 7
Wieviele Zweier-Permutationen sind mit den graden Zahlen zwischen 1 und 101 möglich?
The operator module
Mathematische Operationen
Step18: Lambda-Funktionen
lambda erlaubt es, kleine Funktionen anonym zu definieren. Nehmen wir an, wir wollen in einer List von Zahlen alle Zahlen durch 100 teilen und mit 13 multiplizieren. Dann könnten wir das so machen
Step19: Diese Funktion können wir mit Lambda nun direkt einsetzen
Step20: Allerdings gibt es sehr unterschiedliche Meinungen darüber, ob auf diese Weise guter Code entsteht. Ich finde diesen Ratschlag anz gut
Step21: <br/>
<br/><br/><br/><br/><br/>
Aufgabe 2
Step22: <br/><br/><br/><br/><br/><br/><br/><br/>
Aufgabe 3
Step24: <br/><br/><br/><br/><br/><br/><br/><br/>
Aufgabe 4 | Python Code:
#beispiel
a = [1, 2, 3,]
my_iterator = iter(a)
my_iterator.__next__()
my_iterator.__next__()
Explanation: Table of Contents
1 Python für Fortgeschrittene 2
  1.1 Funktionales Programmieren I (1.1.1 Typen von Programmiersprachen, 1.1.2 Weitere Merkmale des funktionalen Programmierens, 1.1.3 Vorteile des funktionalen Programmierens)
  1.2 Iteratoren
  1.3 List Comprehension
  1.4 Geschachtelte Schleifen (1.4.1 Aufgabe 1)
  1.5 Die Funktionen map(), filter() (1.5.1 map(), 1.5.2 Aufgabe 2, 1.5.3 Aufgabe 3 (optional), 1.5.4 filter(), 1.5.5 Aufgabe 4)
  1.6 Das itertools-Modul (1.6.1 Neuen Iterator erzeugen, 1.6.2 Aufgabe 5, 1.6.3 Teile der Ausgabe eines Iterators auswählen, 1.6.4 Iteratoren kombinieren, 1.6.5 Aufgabe 7, 1.6.6 The operator module)
  1.7 Lambda-Funktionen
  1.8 Hausaufgabe
  1.9 Lösungen (1.9.1 Aufgabe 1, 1.9.2 Aufgabe 2, 1.9.3 Aufgabe 4)
## Python für Fortgeschrittene 2
### Funktionales Programmieren I
#### Typen von Programmiersprachen:
<ul>
<li>Prozedural<br/>
Programm besteht aus einer Liste von Anweisungen, die sequentiell abgearbeitet werden. Die meisten Programmiersprachen sind prozedural, z.B. C.</li>
<li>Deklarativ<br/>
Im Programm wird nur spezifiziert, welches Problem gelöst werden soll, der Interpreter setzt dies dann in Anweisungen um, z.B. SQL</li>
<li>Objekt-orientiert<br/>
Programme erzeugen und verwenden Objekte und manipulieren diese Objekte. Objekte haben interne Zustände, die durch Methoden gesetzt werden, z.B. Java, C++. </li>
<li>Funktional<br/>
Zerlegen ein Problem in eine Reihe von Funktionen (vergleichbar mit mathematischen Funktionen, z.B. f(x) = y. Die Funktionen haben einen definierten Input und Output, aber keine internen Zustand, der die Ausgabe eines bestimmten Input beeinflusst, z.B. Lisp oder Haskell.</li>
</ul>
#### Weitere Merkmale des funktionalen Programmierens:
<ul>
<li>Funktionen können wie Daten behandelt werden, d.h. man kann einer Funktion als Parameter eine Funktion geben bzw. die Ausgabe einer Funktion kann eine Funktion sein.</li>
<li>Rekursion ist die primäre Form der Ablaufkontrolle, etwa um Schleifen zu erzeugen.</li>
<li>Im Zentrum steht die Manipulation von Listen. </li>
<li>'Reine' funktionale Programmiersprachen vermeiden Nebeneffekte, z.B. einer Variablen erst einen Wert und dann einen anderen zuzuweisen, um so den internen Zustand des Programms zu verfolgen. Einige Funktionen werden aber nur wegen ihrer 'Nebeneffekte' aufgerufen, z.B. print() oder time.sleep() und nicht für die Rückgabewerte der Funktion. </li>
<li>Funktionale Programmiersprachen vermeiden Zuweisungen und arbeiten stattdessen mit Ausdrücken, also mit Funktionen, die Parameter haben und eine Ausgabe. Im Idealfall besteht das ganze Programm aus einer Folge von Funktionen, wobei die Ausgabe der einen Funktion zum Parameter der nächsten wird usw., z.B.:<br/>
a = 3<br/>
func3(func2(func1(a)))<br/>
<li>Funktionale Programmiersprachen verwenden vor allem Funktionen, die auf anderen Funktionen arbeiten, die auf anderen Funktionen arbeiten.
</ul>
#### Vorteile des funktionalen Programmierens:
<ul>
<li>Formale Beweisbarkeit (eher von akademischem Interesse</li>
<li>Modularität<br/>
Funktionales Programmieren erzwingt das Schreiben von sehr kleinen Funktionen, die leichter wiederzuverwenden und modular einzusetzen sind.</li>
<li>Einfachheit der Fehlersuche und des Testens<br/>
Da Ein- und Ausgabe stets klar definiert sind, sind Fehlersuche und das Erstellen von Unittests einfacher</li>
</ul>
Wie immer gilt in Python auch hier: Python ermöglicht die Verwendung des funktionalen Paradigmas, erzwingt es aber nicht durch Einschränkungen, wie es reine funktionale Programmiersprachen tun. Typischerweise verwendet man in Python prozedurale, objekt-orientierte und funktionale Verfahren, z.B. kann man objekt-orientiertes und funktionales Programmieren verwenden, indem man Funktionen definiert, die als Ein- und Ausgabe Objekte verwenden.
In Python wird das funktionale Programmieren u.a. durch folgende Komponenten realisiert:
<ul>
<li>Iteratoren</li>
<li>List Comprehension, Generator Expressions</li>
<li>Die Funktionen map(), filter()</li>
<li>Das itertools Modul </li>
</ul>
### Iteratoren
Die Methode iter() versucht für ein beliebiges Objekt einen Iterator zurückzugeben. Der Iterator gibt bei jedem Aufruf ein Objekt der Liste zurück und setzt den Pointer der Liste um eines höher. Objekte sind iterierbar (iterable) wenn sie die Methode iter() unterstützen, z.B. Listen, Dictionaries, Dateihandles usw.
End of explanation
for i in a:
print(str(i))
Explanation: Python erwartet in bestimmten Kontexten ein iterierbares Objekt, z.B. in der for-Schleife:
End of explanation
for i in iter(a):
print(str(i))
Explanation: Das ist äquivalent zu
End of explanation
#beispiel
a = [1, 2, 3,]
my_iterator = iter(a)
list(my_iterator)
my_iterator = iter(a)
tuple(my_iterator)
Explanation: Man kann sich die vollständige Ausgabe eines Iterators ausgeben lassen, wenn man ihn als Parameter der list()- oder tuple() Funktion übergibt.
End of explanation
#eine traditionelle for-Schleife:
squared = []
for x in range(10):
squared.append(x**2)
squared
Explanation: Frage: Warum habe ich im letzten Beispiel den Iterator neu erzeugt? Kann man das weglassen?
<h3>List Comprehension</h3>
<p>List Comprehension sind ein Element (von vielen) des funktionalen Programmierens in Python. Der wichtigste Vorteil ist das Vermeiden von Nebeneffekten. Was heißt das? Anstelle des Verändern des Zustands einer Datenstruktur (z.B. eines Objekts), sind funktionale Ausdrücke wie mathematische Funktionen aufgebaut, die nur aus einem klaren Input und einen ebenso eindeutig definierten Output bestehen.</p>
<p>Prinzipielle Schreibweise: <br/>
<code>[<expression> for <variable> in <iterable> <<if <condition> >>]</code>
<p>Im folgenden Beispiel ist es das Ziel, die Zahlen von 0 bis 9 ins Quadrat zu setzen. Zuerst die traditionelle Lösung mit einer for-Schleife, in deren Körper eine neue Datenstruktur aufgebaut wird.</p>
End of explanation
[x**2 for x in range(10)]
#a + bx
#2 + 0.5x
#x = 5 bis x = 10
[x*0.5 + 2 for x in range(5, 11)]
Explanation: Und hier die Version mit List Comprehension:
End of explanation
squared = [x**2 for x in range(10)]
squared
Explanation: Natürlich kann man den Rückgabewert von List Comprehensions auch in einer Variablen abspeichern.
End of explanation
#Aufgabe: vergleiche zwei Zahlenlisten und gebe alle Zahlenkombinationen aus, die ungleich sind
#Erst einmal die traditionelle Lösung mit geschachtelten Schleifen:
combs = []
for x in [1,2,3 ]:
for y in [3,1,4]:
if x != y:
combs.append((x, y))
combs
Explanation: Geschachtelte Schleifen
Man kann in list comprehensions auch mehrere geschachtelte for-Schleifen aufrufen:
End of explanation
[(x,y) for x in [1,2,3] for y in [3,1,4] if x != y]
Explanation: Und nun als List Comprehension:
End of explanation
a = ["ein Haus", "eine Tasse", "ein Kind"]
list(map(len, a))
Explanation: <h4>Aufgabe 1</h4>
<p>Ersetzen Sie eine Reihe von Worten durch eine Reihe von Zahlen, die die Anzahl der Vokale anzeigen. Z.B.: "Dies ist ein Satz" -> "2 1 2 1". </p>
Die Funktionen map(), filter()
map()
map(FunktionX, Liste)<br/>
Die Funktion FunktionX wird auf jedes Element der Liste angewandt. Ausgabe ist ein Iterator über eine neue Liste mit den Ergebnissen
End of explanation
for i in a:
print(len(i))
Explanation: prozedurale Schreibweise:
End of explanation
#returns True if x is an even number
def is_even(x):
return (x % 2) == 0
b = [2,3,4,5,6]
list(filter(is_even, b))
Explanation: Aufgabe 2
Verwenden Sie map() um in einer Liste von Worten jedes Wort in Großbuchstaben auszugeben. Diskutieren Sie evtl. Probleme mit einem Nachbarn.
Aufgabe 3 (optional)
Lösen Sie Aufgabe 1 mit map()
filter()
filter(FunktionX, Liste)<br/>
Die Funktion FunktionX wird auf jedes Element der Liste angewandt. Konstruiert einen neuen Iterator, in den die Elemente der Liste aufgenommen werden, für die die FunktionX den Ausgabewert True hat.
<br/>Bsp.:
End of explanation
import itertools
#don't try this at home:
#list(itertools.cycle([1,2,3,4,5]))
Explanation: Aufgabe 4
Verwenden Sie filter, um aus dem folgenden Text eine Wortliste zu erstellen, in der alle Pronomina, Artikel und die Worte "dass", "ist", "nicht", "auch", "und" nicht enthalten sind: <br/>
"Ich denke auch, dass ist nicht schlimm. Er hat es nicht gemerkt und das ist gut. Und überhaupt: es ist auch seine Schuld. Ehrlich, das ist wahr."
Das itertools-Modul
Die Funktionen des itertools-Moduls lassen sich einteilen in Funktionen, die:
<ul>
<li>die einen neuen Iterator auf der Basis eines existierenden Iterators erzeugen. </li>
<li>die Teile der Ausgabe eines Iterators auswählen. </li>
<li>die die Ausgabe eines Iterators gruppieren.</li>
<li>die Iteratoren kombinieren</li>
</ul>
Neuen Iterator erzeugen
Diese Funktionen erzeugen einen neuen Iterator auf der Basis eines existierenden: <br/>
itertools.count(),itertools.cycle(), itertools.repeat(), itertools.chain(), itertools.isslice(), itertools.tee()
itertools.cycle(iterator) Gibt die Liste der Elemente in iterator in einer unendlichen Schleife zurück
End of explanation
import itertools
list(itertools.repeat([1,2,3,4], 3))
Explanation: itertools.repeat(iterator, [n]) wiederholt die Elemente in iterator n mal.
End of explanation
a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]
list(itertools.chain(a, b, c))
Explanation: itertools.chain(iterator_1, iterator_2, ...) Erzeugt einen neuen Iterator, in dem die Elemente von iterator_1, _2 usw. aneinander gehängt sind.
End of explanation
tuple(itertools.combinations([1, 2, 3, 4], 2))
Explanation: Aufgabe 5
Verknüpfen Sie den Inhalt dreier Dateien zu einem Iterator
Teile der Ausgabe eines Iterators auswählen.
itertools.filterfalse(Prädikat, iterator) ist das Gegenstück zu filter(). Ausgabe enthält alle Elemente, für die das Prädikat falsch ist.
itertools.takewhile(Prädikat, iterator) - gibt solange Elemente aus, wie das Prädikat wahr ist
itertools.dropwhile(Prädikat, iter) entfernt alle Elemente, solange das Prädikat wahr ist. Gibt dann den Rest aus.
itertools.compress(Daten, Selektoren) Nimmt zwei Iteratoren und gibt nur die Elemente des ersten (Daten) zurück, für die das entsprechende Element im zweiten (Selektoren) wahr ist. Stoppt, wenn einer der Iteratoren erschöpft ist.
Iteratoren kombinieren
itertools.combinations(Iterator, r) gibt alle r-Tuple Kombinationen der Elemente des Iterators wieder. Beispiel:
End of explanation
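The selection helpers described above (filterfalse, takewhile, dropwhile, compress) have no example in the original notebook; a minimal added demo, reusing the is_even predicate defined earlier:
def less_than_four(x):
    return x < 4

nums = [1, 2, 3, 4, 5, 6]
print(list(itertools.filterfalse(is_even, nums)))         # [1, 3, 5]
print(list(itertools.takewhile(less_than_four, nums)))    # [1, 2, 3]
print(list(itertools.dropwhile(less_than_four, nums)))    # [4, 5, 6]
print(list(itertools.compress(nums, [1, 0, 1, 0, 1, 0]))) # [1, 3, 5]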
tuple(itertools.permutations([1, 2, 3, 4], 2))
Explanation: itertools.permutations(iterator, r) gibt alle Permutationen aller Elemente unabhängig von der Reihenfolge in Iterator wieder:
End of explanation
a = [2, -3, 8, 12, -22, -1]
list(map(abs, a))
Explanation: Aufgabe 7
Wieviele Zweier-Permutationen sind mit den graden Zahlen zwischen 1 und 101 möglich?
The operator module
Mathematische Operationen: add(), sub(), mul(), floordiv(), abs(), ... <br/>
Logische Operationen: not_(), truth()<br/>
Bit Operationen: and_(), or_(), invert()<br/>
Vergleiche: eq(), ne(), lt(), le(), gt(), and ge()<br/>
Objektidentität: is_(), is_not()<br/>
End of explanation
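A short added demo of the operator functions listed above:
import operator

print(operator.add(2, 3))                                # 5
print(list(map(operator.mul, [1, 2, 3], [10, 20, 30])))  # [10, 40, 90]
print(operator.itemgetter(1)(('a', 'b', 'c')))           # 'b'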
def calc(n):
return (n * 13) / 100
a = [1, 2, 5, 7]
list(map(calc, a))
Explanation: Lambda-Funktionen
lambda erlaubt es, kleine Funktionen anonym zu definieren. Nehmen wir an, wir wollen in einer List von Zahlen alle Zahlen durch 100 teilen und mit 13 multiplizieren. Dann könnten wir das so machen:
End of explanation
list(map(lambda x: (x * 13)/100, a))
Explanation: Diese Funktion können wir mit Lambda nun direkt einsetzen:
End of explanation
#zählt die Vokale eines strings
def cv(word):
return sum([1 for a in word if a in "aeiouAEIOUÄÖÜäöü"])
a = "Dies ist eine Lüge, oder nicht?"
[cv(w) for w in a.split()]
Explanation: Allerdings gibt es sehr unterschiedliche Meinungen darüber, ob auf diese Weise guter Code entsteht. Ich finde diesen Ratschlag anz gut:
<ul>
<li>Write a lambda function.</li>
<li>Write a comment explaining what the heck that lambda does. </li>
<li>Study the comment for a while, and think of a name that captures the essence of the comment. </li>
<li>Convert the lambda to a def statement, using that name. </li>
<li>Remove the comment. </li>
</ul>
Hausaufgabe
1) Geben Sie alle Unicode-Zeichen zwischen 34 und 250 aus und geben Sie alle aus, die keine Buchstaben oder Zahlen sind
2) Wie könnte man alle Dateien mit der Endung *.txt in einem Unterverzeichnis hintereinander ausgeben?
3) Schauen Sie sich in der Python-Dokumentation die Funktionen sort und itemgetter an. Wie kann man diese so kombinieren, dass man damit ein Dictionary nach dem value sortieren kann. (no stackoverflow :-)
<br/><br/><br/><br/><br/><br/><br/><br/>
Lösungen
Aufgabe 1
End of explanation
# uppercases the string word
def upper(word):
return word.upper()
a = ["dies", "ist", "Ein", "satz"]
list(map(upper, a))
Explanation: <br/>
<br/><br/><br/><br/><br/>
Aufgabe 2
End of explanation
def cv(word):
return sum([1 for a in word if a in "aeiouAEIOUÄÖÜäöü"])
a = "Dies ist eine Lüge, oder nicht?"
list(map(cv, a.split()))
Explanation: <br/><br/><br/><br/><br/><br/><br/><br/>
Aufgabe 3
End of explanation
import re
#returns True if word is a function word
def is_no_function_word(word):
f_words = ["der", "die", "das", "ich", "du", "er", "sie", "es", "wir", "ihr", "dass", "ist", "hat", "auch", "und", "nicht"]
if word.lower() in f_words:
return False
else:
return True
text = Ich denke auch, dass ist nicht schlimm. Er hat es nicht gemerkt und das ist gut.
Und überhaupt: es ist auch seine Schuld. Ehrlich, das ist wahr.
list(filter(is_no_function_word, re.findall("\w+", text)))
Explanation: <br/><br/><br/><br/><br/><br/><br/><br/>
Aufgabe 4
End of explanation |
2,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习纳米学位
监督学习
项目2
Step1: 准备数据
在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。
获得特征和标签
income 列是我们需要的标签,记录一个人的年收入是否高于50K。 因此我们应该把他从数据中剥离出来,单独存放。
Step2: 转换倾斜的连续特征
一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。
运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。
Step3: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https
Step4: 规一化数字特征
除了对于高度倾斜的特征施加转换,对数值特征施加一些形式的缩放通常会是一个好的习惯。在数据上面施加一个缩放并不会改变数据分布的形式(比如上面说的'capital-gain' or 'capital-loss');但是,规一化保证了每一个特征在使用监督学习器的时候能够被平等的对待。注意一旦使用了缩放,观察数据的原始形式不再具有它本来的意义了,就像下面的例子展示的。
运行下面的代码单元来规一化每一个数字特征。我们将使用sklearn.preprocessing.MinMaxScaler来完成这个任务。
Step5: 练习:数据预处理
从上面的数据探索中的表中,我们可以看到有几个属性的每一条记录都是非数字的。通常情况下,学习算法期望输入是数字的,这要求非数字的特征(称为类别变量)被转换。转换类别变量的一种流行的方法是使用独热编码方案。独热编码为每一个非数字特征的每一个可能的类别创建一个_“虚拟”_变量。例如,假设someFeature有三个可能的取值A,B或者C,。我们将把这个特征编码成someFeature_A, someFeature_B和someFeature_C.
| 特征X | | 特征X_A | 特征X_B | 特征X_C |
|
Step6: 混洗和切分数据
现在所有的 类别变量 已被转换成数值特征,而且所有的数值特征已被规一化。和我们一般情况下做的一样,我们现在将数据(包括特征和它们的标签)切分成训练和测试集。其中80%的数据将用于训练和20%的数据用于测试。然后再进一步把训练数据分为训练集和验证集,用来选择和优化模型。
运行下面的代码单元来完成切分。
Step7: 评价模型性能
在这一部分中,我们将尝试四种不同的算法,并确定哪一个能够最好地建模数据。四种算法包含一个天真的预测器 和三个你选择的监督学习器。
评价方法和朴素的预测器
CharityML通过他们的研究人员知道被调查者的年收入大于\$50,000最有可能向他们捐款。因为这个原因CharityML对于准确预测谁能够获得\$50,000以上收入尤其有兴趣。这样看起来使用准确率作为评价模型的标准是合适的。另外,把没有收入大于\$50,000的人识别成年收入大于\$50,000对于CharityML来说是有害的,因为他想要找到的是有意愿捐款的用户。这样,我们期望的模型具有准确预测那些能够年收入大于\$50,000的能力比模型去查全这些被调查者更重要。我们能够使用F-beta score作为评价指标,这样能够同时考虑查准率和查全率:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
尤其是,当 $\beta = 0.5$ 的时候更多的强调查准率,这叫做F$_{0.5}$ score (或者为了简单叫做F-score)。
Step8: 问题 1 - 天真的预测器的性能
通过查看收入超过和不超过 \$50,000 的人数,我们能发现多数被调查者年收入没有超过 \$50,000。如果我们简单地预测说“这个人的收入没有超过 \$50,000”,我们就可以得到一个 准确率超过 50% 的预测。这样我们甚至不用看数据就能做到一个准确率超过 50%。这样一个预测被称作是天真的。通常对数据使用一个天真的预测器是十分重要的,这样能够帮助建立一个模型表现是否好的基准。 使用下面的代码单元计算天真的预测器的相关性能。将你的计算结果赋值给'accuracy', ‘precision’, ‘recall’ 和 'fscore',这些值会在后面被使用,请注意这里不能使用scikit-learn,你需要根据公式自己实现相关计算。
如果我们选择一个无论什么情况都预测被调查者年收入大于 \$50,000 的模型,那么这个模型在验证集上的准确率,查准率,查全率和 F-score是多少?
监督学习模型
问题 2 - 模型应用
你能够在 scikit-learn 中选择以下监督学习模型
- 高斯朴素贝叶斯 (GaussianNB)
- 决策树 (DecisionTree)
- 集成方法 (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K近邻 (K Nearest Neighbors)
- 随机梯度下降分类器 (SGDC)
- 支撑向量机 (SVM)
- Logistic回归(LogisticRegression)
从上面的监督学习模型中选择三个适合我们这个问题的模型,并回答相应问题。
模型1
模型名称
回答:决策树
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
回答:学生录取资格(来自机器学习课程(决策树))
这个模型的优势是什么?他什么情况下表现最好?
回答:优势:1、决策树易于实现和理解;2、计算复杂度相对较低,结果的输出易于理解。
当目标函数具有离散的输出值值表现最好
这个模型的缺点是什么?什么条件下它表现很差?
回答:可能出现过拟合问题。当过于依赖数据或参数设置不好时,它的表现很差。
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答:1、该问题是非线性问题,决策树能够很好地解决非线性问题;2、我们的数据中有大量布尔型特征且它的一些特征对于我们的目标可能相关程度并不高
模型2
模型名称
回答:高斯朴素贝叶斯
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
回答:过滤垃圾邮件,可以把文档中的词作为特征进行分类(来自机器学习课程(朴素贝叶斯)))。
这个模型的优势是什么?他什么情况下表现最好?
回答:优势是在数据较少的情况下仍然有效,对缺失数据不敏感。适合小规模数据
这个模型的缺点是什么?什么条件下它表现很差?
回答:朴素贝叶斯模型假设各属性相互独立。但在实际应用中,属性之间往往有一定关联性,导致分类效果受到影响。
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答:数据集各属性关联性相对较小,且为小规模数据
模型3
模型名称
回答:AdaBoost
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
回答:预测患有疝病的马是否存活
这个模型的优势是什么?他什么情况下表现最好?
回答:优势是泛化错误低,易编码,可以应用在大部分分类器上,无参数调整。对于基于错误提升分类器性能它的表现最好
这个模型的缺点是什么?什么条件下它表现很差?
回答:缺点是对离群点敏感。当输入数据有不少极端值时,它的表现很差
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答:我们的数据集特征很多,较为复杂,在后续迭代中,出现错误的数据权重可能增大,而针对这种错误的调节能力正是AdaBoost的长处
练习 - 创建一个训练和预测的流水线
为了正确评估你选择的每一个模型的性能,创建一个能够帮助你快速有效地使用不同大小的训练集并在验证集上做预测的训练和验证的流水线是十分重要的。
你在这里实现的功能将会在接下来的部分中被用到。在下面的代码单元中,你将实现以下功能:
从sklearn.metrics中导入fbeta_score和accuracy_score。
用训练集拟合学习器,并记录训练时间。
对训练集的前300个数据点和验证集进行预测并记录预测时间。
计算预测训练集的前300个数据点的准确率和F-score。
计算预测验证集的准确率和F-score。
Step9: 练习:初始模型的评估
在下面的代码单元中,您将需要实现以下功能:
- 导入你在前面讨论的三个监督学习模型。
- 初始化三个模型并存储在'clf_A','clf_B'和'clf_C'中。
- 使用模型的默认参数值,在接下来的部分中你将需要对某一个模型的参数进行调整。
- 设置random_state (如果有这个参数)。
- 计算1%, 10%, 100%的训练数据分别对应多少个数据点,并将这些值存储在'samples_1', 'samples_10', 'samples_100'中
注意:取决于你选择的算法,下面实现的代码可能需要一些时间来运行!
Step10: 提高效果
在这最后一节中,您将从三个有监督的学习模型中选择 最好的 模型来使用学生数据。你将在整个训练集(X_train和y_train)上使用网格搜索优化至少调节一个参数以获得一个比没有调节之前更好的 F-score。
问题 3 - 选择最佳的模型
基于你前面做的评价,用一到两段话向 CharityML 解释这三个模型中哪一个对于判断被调查者的年收入大于 \$50,000 是最合适的。
提示:你的答案应该包括评价指标,预测/训练时间,以及该算法是否适合这里的数据。
回答:DecisionTree在训练集上的accuracy score和F-score在三个模型中是最好的,虽然DecisionTree在测试集上的表现没这么好,在无参数调整的情况下出现了轻度的过拟合,但调整参数后应该可以消除这个问题,虽然对完整数据它的训练时间较长,但比AdaBoost快多了,且考虑到它的预测时间短,也就是查询时间短,我们一旦把模型训练出来,之后的主要任务就只有查询了,并不会过多消耗资源和开支,所以我还是决定使用DecisionTree.
问题 4 - 用通俗的话解释模型
用一到两段话,向 CharityML 用外行也听得懂的话来解释最终模型是如何工作的。你需要解释所选模型的主要特点。例如,这个模型是怎样被训练的,它又是如何做出预测的。避免使用高级的数学或技术术语,不要使用公式或特定的算法名词。
回答: 根据训练集中输入的特征进行逐步分类,并形成相应的树状结构,输入预测值的特征,根据特征的值寻找树的响应节点,知道最后的节点,就是预测的结果
练习:模型调优
调节选择的模型的参数。使用网格搜索(GridSearchCV)来至少调整模型的重要参数(至少调整一个),这个参数至少需尝试3个不同的值。你要使用整个训练集来完成这个过程。在接下来的代码单元中,你需要实现以下功能:
导入sklearn.model_selection.GridSearchCV 和 sklearn.metrics.make_scorer.
初始化你选择的分类器,并将其存储在clf中。
设置random_state (如果有这个参数)。
创建一个对于这个模型你希望调整参数的字典。
例如
Step11: 问题 5 - 最终模型评估
你的最优模型在测试数据上的准确率和 F-score 是多少?这些分数比没有优化的模型好还是差?
注意:请在下面的表格中填写你的结果,然后在答案框中提供讨论。
结果
Step12: 问题 7 - 提取特征重要性
观察上面创建的展示五个用于预测被调查者年收入是否大于\$50,000最相关的特征的可视化图像。
这五个特征的权重加起来是否超过了0.5?<br>
这五个特征和你在问题 6中讨论的特征比较怎么样?<br>
如果说你的答案和这里的相近,那么这个可视化怎样佐证了你的想法?<br>
如果你的选择不相近,那么为什么你觉得这些特征更加相关?
回答:
特征选择
如果我们只是用可用特征的一个子集的话模型表现会怎么样?通过使用更少的特征来训练,在评价指标的角度来看我们的期望是训练和预测的时间会更少。从上面的可视化来看,我们可以看到前五个最重要的特征贡献了数据中所有特征中超过一半的重要性。这提示我们可以尝试去减小特征空间,简化模型需要学习的信息。下面代码单元将使用你前面发现的优化模型,并只使用五个最重要的特征在相同的训练集上训练模型。
Step13: 问题 8 - 特征选择的影响
最终模型在只是用五个特征的数据上和使用所有的特征数据上的 F-score 和准确率相比怎么样?
如果训练时间是一个要考虑的因素,你会考虑使用部分特征的数据作为你的训练集吗?
回答:
问题 9 - 在测试集上测试你的模型
终于到了测试的时候,记住,测试集只能用一次。
使用你最有信心的模型,在测试集上测试,计算出准确率和 F-score。
简述你选择这个模型的原因,并分析测试结果 | Python Code:
# TODO:总的记录数
n_records = len(data)
# # TODO:被调查者 的收入大于$50,000的人数
n_greater_50k = len(data[data.income.str.contains('>50K')])
# # TODO:被调查者的收入最多为$50,000的人数
n_at_most_50k = len(data[data.income.str.contains('<=50K')])
# # TODO:被调查者收入大于$50,000所占的比例
greater_percent = (n_greater_50k / n_records) * 100
# 打印结果
print ("Total number of records: {}".format(n_records))
print ("Individuals making more than $50,000: {}".format(n_greater_50k))
print ("Individuals making at most $50,000: {}".format(n_at_most_50k))
print ("Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent))
Explanation: 机器学习纳米学位
监督学习
项目2: 为CharityML寻找捐献者
欢迎来到机器学习工程师纳米学位的第二个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。
提示:Code 和 Markdown 区域可通过Shift + Enter快捷键运行。此外,Markdown可以通过双击进入编辑模式。
开始
在这个项目中,你将使用1994年美国人口普查收集的数据,选用几个监督学习算法以准确地建模被调查者的收入。然后,你将根据初步结果从中选择出最佳的候选算法,并进一步优化该算法以最好地建模这些数据。你的目标是建立一个能够准确地预测被调查者年收入是否超过50000美元的模型。这种类型的任务会出现在那些依赖于捐款而存在的非营利性组织。了解人群的收入情况可以帮助一个非营利性的机构更好地了解他们要多大的捐赠,或是否他们应该接触这些人。虽然我们很难直接从公开的资源中推断出一个人的一般收入阶层,但是我们可以(也正是我们将要做的)从其他的一些公开的可获得的资源中获得一些特征从而推断出该值。
这个项目的数据集来自UCI机器学习知识库。这个数据集是由Ron Kohavi和Barry Becker在发表文章_"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_之后捐赠的,你可以在Ron Kohavi提供的在线版本中找到这个文章。我们在这里探索的数据集相比于原有的数据集有一些小小的改变,比如说移除了特征'fnlwgt' 以及一些遗失的或者是格式不正确的记录。
探索数据
运行下面的代码单元以载入需要的Python库并导入人口普查数据。注意数据集的最后一列'income'将是我们需要预测的列(表示被调查者的年收入会大于或者是最多50,000美元),人口普查数据中的每一列都将是关于被调查者的特征。
练习:数据探索
首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量:
总的记录数量,'n_records'
年收入大于50,000美元的人数,'n_greater_50k'.
年收入最多为50,000美元的人数 'n_at_most_50k'.
年收入大于50,000美元的人所占的比例, 'greater_percent'.
提示: 您可能需要查看上面的生成的表,以了解'income'条目的格式是什么样的。
End of explanation
# 为这个项目导入需要的库
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # 允许为DataFrame使用display()
# 导入附加的可视化代码visuals.py
import visuals as vs
# 为notebook提供更加漂亮的可视化
%matplotlib inline
# 导入人口普查数据
data = pd.read_csv("census.csv")
# 成功 - 显示第一条记录
display(data.head(n=1))
# 将数据切分成特征和对应的标签
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
Explanation: 准备数据
在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。
获得特征和标签
income 列是我们需要的标签,记录一个人的年收入是否高于50K。 因此我们应该把他从数据中剥离出来,单独存放。
End of explanation
# 可视化 'capital-gain'和'capital-loss' 两个特征
vs.distribution(features_raw)
Explanation: 转换倾斜的连续特征
一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。
运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。
End of explanation
# 对于倾斜的数据使用Log转换
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# 可视化对数转换后 'capital-gain'和'capital-loss' 两个特征
vs.distribution(features_raw, transformed = True)
Explanation: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">对数转换</a>,将数据转换成对数,这样非常大和非常小的值不会对学习算法产生负面的影响。并且使用对数变换显著降低了由于异常值所造成的数据范围异常。但是在应用这个变换时必须小心:因为0的对数是没有定义的,所以我们必须先将数据处理成一个比0稍微大一点的数以成功完成对数转换。
运行下面的代码单元来执行数据的转换和可视化结果。再次,注意值的范围和它们是如何分布的。
End of explanation
from sklearn.preprocessing import MinMaxScaler
# 初始化一个 scaler,并将它施加到特征上
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# 显示一个经过缩放的样例记录
display(features_raw.head(n = 1))
Explanation: Normalizing Numerical Features
In addition to transforming highly skewed features, it is usually good practice to apply some form of scaling to the numerical features. Scaling does not change the shape of each feature's distribution (such as 'capital-gain' or 'capital-loss' above), but normalization ensures that every feature is treated equally by a supervised learner. Note that once scaling is applied, the data in its raw form no longer has its original meaning, as the example below shows.
Run the code cell below to normalize each numerical feature. We will use sklearn.preprocessing.MinMaxScaler for this task.
End of explanation
# TODO:使用pandas.get_dummies()对'features_raw'数据进行独热编码
features = pd.get_dummies(features_raw)
# TODO:将'income_raw'编码成数字值
income = income_raw.replace(['>50K', '<=50K'], [1, 0])
# 打印经过独热编码之后的特征数量
encoded = list(features.columns)
print ("{} total features after one-hot encoding.".format(len(encoded)))
# 移除下面一行的注释以观察编码的特征名字
#print encoded
Explanation: Exercise: Data Preprocessing
From the table in the data exploration above, we can see that several attributes are non-numeric for every record. Learning algorithms usually expect numeric input, which requires that non-numeric features (called categorical variables) be converted. A popular way to convert categorical variables is the one-hot encoding scheme. One-hot encoding creates a "dummy" variable for every possible category of each non-numeric feature. For example, assume someFeature has three possible values A, B or C. We then encode this feature as someFeature_A, someFeature_B and someFeature_C.
| Feature X | | Feature X_A | Feature X_B | Feature X_C |
| :-: | | :-: | :-: | :-: |
| B | | 0 | 1 | 0 |
| C | ----> one-hot encode ----> | 0 | 0 | 1 |
| A | | 1 | 0 | 0 |
In addition, the non-numeric label 'income' must be converted to numeric values so the learning algorithm can work with it. Since this label has only two possible categories ("<=50K" and ">50K"), we can avoid one-hot encoding and simply encode them as the two classes 0 and 1. In the code cell below you will implement the following:
- Use pandas.get_dummies() to one-hot encode the 'features_raw' data.
- Convert the target label 'income_raw' to numerical entries.
- Convert "<=50K" to 0 and ">50K" to 1.
End of explanation
# 导入 train_test_split
from sklearn.model_selection import train_test_split
# 将'features'和'income'数据切分成训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0,
stratify = income)
# 将'X_train'和'y_train'进一步切分为训练集和验证集
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0,
stratify = y_train)
# 显示切分的结果
print ("Training set has {} samples.".format(X_train.shape[0]))
print ("Validation set has {} samples.".format(X_val.shape[0]))
print ("Testing set has {} samples.".format(X_test.shape[0]))
Explanation: Shuffle and Split Data
Now all categorical variables have been converted to numerical features and all numerical features have been normalized. As usual, we split the data (features and labels) into training and test sets, with 80% of the data used for training and 20% for testing. The training data is then further split into a training set and a validation set used for model selection and optimization.
Run the code cell below to perform the split.
End of explanation
# scikit-learn may not be used here; the calculations are implemented from the formulas.
# TODO: Calculate accuracy
accuracy = np.divide(n_greater_50k, float(n_records))
# TODO: Calculate precision
precision = np.divide(n_greater_50k, float(n_records))
# TODO: Calculate recall
recall = np.divide(n_greater_50k, n_greater_50k)
# TODO: Using the formula above with beta = 0.5, calculate the F-score
fscore = (1 + np.power(0.5, 2)) * np.multiply(precision, recall) / (np.power(0.5, 2) * precision + recall)
# Print the results
print ("Naive Predictor on validation data: \n \
Accuracy score: {:.4f} \n \
Precision: {:.4f} \n \
Recall: {:.4f} \n \
F-score: {:.4f}".format(accuracy, precision, recall, fscore))
Explanation: Evaluating Model Performance
In this section we will try four different algorithms and determine which one models the data best. The four algorithms are a naive predictor and three supervised learners of your choice.
Metrics and the Naive Predictor
CharityML knows from its research that individuals making more than \$50,000 are the most likely to donate. For this reason CharityML is particularly interested in accurately predicting who makes more than \$50,000, so accuracy seems like an appropriate metric. In addition, labelling someone who does not make more than \$50,000 as someone who does would be harmful to CharityML, because they are looking for people who are willing to donate. Therefore a model's ability to precisely predict those making more than \$50,000 matters more than its ability to recall all of them. We can use the F-beta score as a metric that takes both precision and recall into account:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
In particular, when $\beta = 0.5$ more emphasis is placed on precision; this is called the F$_{0.5}$ score (or simply F-score).
End of explanation
# TODO:从sklearn中导入两个评价指标 - fbeta_score和accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_val, y_val):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_val: features validation set
- y_val: income validation set
'''
results = {}
# TODO:使用sample_size大小的训练数据来拟合学习器
# TODO: Fit the learner to the training data using slicing with 'sample_size'
start = time() # 获得程序开始时间
learner = learner.fit(X_train[:sample_size],y_train[:sample_size])
end = time() # 获得程序结束时间
# TODO:计算训练时间
results['train_time'] = end - start
print(results['train_time'])
# TODO: 得到在验证集上的预测值
# 然后得到对前300个训练数据的预测结果
start = time() # 获得程序开始时间
predictions_val = learner.predict(X_val)
predictions_train = learner.predict(X_train[:300])
end = time() # 获得程序结束时间
# TODO:计算预测用时
results['pred_time'] = end - start
# TODO:计算在最前面的300个训练数据的准确率
results['acc_train'] = accuracy_score(y_train[:300],predictions_train)
# TODO:计算在验证上的准确率
results['acc_test'] = accuracy_score( y_val, predictions_val)
# TODO:计算在最前面300个训练数据上的F-score
results['f_train'] = fbeta_score(y_train[:300], predictions_train, 0.5)
# TODO:计算验证集上的F-score
results['f_test'] = fbeta_score(y_val,predictions_val,0.5)
# 成功
print ("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# 返回结果
return results
Explanation: Question 1 - Naive Predictor Performance
Looking at the number of individuals above and below \$50,000, most respondents do not make more than \$50,000. If we simply predicted "this person does not make more than \$50,000" we would already get better than 50% accuracy without even looking at the data. Such a prediction is called naive. It is important to apply a naive predictor to the data, because it establishes a benchmark for whether a model performs well. Use the code cell below to compute the naive predictor's performance, assigning the results to 'accuracy', 'precision', 'recall' and 'fscore'; these values are used later. Note that scikit-learn may not be used here; you must implement the calculations yourself from the formulas.
If we chose a model that always predicts an individual makes more than \$50,000, what would its accuracy, precision, recall and F-score be on the validation set?
Supervised Learning Models
Question 2 - Model Application
You can choose from the following supervised learning models available in scikit-learn:
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees (DecisionTree)
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (K Nearest Neighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression (LogisticRegression)
From the supervised learning models above, pick three that suit this problem and answer the corresponding questions.
Model 1
Model name
Answer: Decision Tree
Describe one real-world application where this model can be applied. (You may need to do a little research; cite your source.)
Answer: Student admission decisions (from the Machine Learning course, decision trees lesson).
What are the strengths of the model? When does it perform best?
Answer: Strengths: 1) decision trees are easy to implement and to understand; 2) their computational complexity is comparatively low and the output is easy to interpret.
It performs best when the target function has discrete output values.
What are the weaknesses of the model? Under what conditions does it perform poorly?
Answer: It can overfit. It performs poorly when it depends too heavily on the data or when its parameters are poorly chosen.
Given the characteristics of our dataset, why is this model suitable for the problem?
Answer: 1) The problem is non-linear and decision trees handle non-linear problems well; 2) our data contains many boolean features, and some features may be only weakly related to the target.
Model 2
Model name
Answer: Gaussian Naive Bayes
Describe one real-world application where this model can be applied. (You may need to do a little research; cite your source.)
Answer: Spam filtering, where the words in a document are used as features for classification (from the Machine Learning course, Naive Bayes lesson).
What are the strengths of the model? When does it perform best?
Answer: It remains effective with little data and is insensitive to missing data. It is well suited to small datasets.
What are the weaknesses of the model? Under what conditions does it perform poorly?
Answer: Naive Bayes assumes that the attributes are mutually independent. In practice attributes are often correlated, which degrades the classification performance.
Given the characteristics of our dataset, why is this model suitable for the problem?
Answer: The attributes of this dataset are only weakly correlated with each other, and the dataset is small.
Model 3
Model name
Answer: AdaBoost
Describe one real-world application where this model can be applied. (You may need to do a little research; cite your source.)
Answer: Predicting whether a horse suffering from colic will survive.
What are the strengths of the model? When does it perform best?
Answer: It has low generalization error, is easy to code, can be applied on top of most classifiers and needs little parameter tuning. It performs best when boosting classifier performance based on previous errors.
What are the weaknesses of the model? Under what conditions does it perform poorly?
Answer: It is sensitive to outliers. It performs poorly when the input data contains many extreme values.
Given the characteristics of our dataset, why is this model suitable for the problem?
Answer: Our dataset has many features and is fairly complex. In later iterations the weights of misclassified samples increase, and this ability to adapt to errors is exactly AdaBoost's strength.
Exercise - Creating a Training and Predicting Pipeline
To properly evaluate the performance of each model you chose, it is important to create a training and prediction pipeline that lets you quickly and efficiently train models on different training-set sizes and make predictions on the validation set.
Your implementation here will be used in the next section. In the code cell below you will implement the following:
Import fbeta_score and accuracy_score from sklearn.metrics.
Fit the learner to the training data and record the training time.
Predict on the first 300 training points and on the validation set, and record the prediction time.
Compute the accuracy and F-score on the first 300 training points.
Compute the accuracy and F-score on the validation set.
End of explanation
# TODO:从sklearn中导入三个监督学习模型
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
# TODO:初始化三个模型
clf_A = DecisionTreeClassifier()
clf_B = GaussianNB()
clf_C = AdaBoostClassifier()
# TODO:计算1%, 10%, 100%的训练数据分别对应多少点
samples_1 = int(len(X_train)*0.01)
samples_10 = int(len(X_train)*0.1)
samples_100 = int(len(X_train))
# 收集学习器的结果
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = train_predict(clf, samples, X_train, y_train, X_val, y_val)
# 对选择的三个模型得到的评价结果进行可视化
vs.evaluate(results, accuracy, fscore)
Explanation: Exercise: Initial Model Evaluation
In the code cell below, you will need to implement the following:
- Import the three supervised learning models you discussed above.
- Initialize the three models and store them in 'clf_A', 'clf_B' and 'clf_C'.
- Use the models' default parameter values; you will tune one model's parameters in a later section.
- Set random_state if the model has that parameter.
- Calculate how many data points correspond to 1%, 10% and 100% of the training data, and store these values in 'samples_1', 'samples_10' and 'samples_100'.
Note: Depending on the algorithms you chose, the implementation below may take some time to run!
End of explanation
# TODO:导入'GridSearchCV', 'make_scorer'和其他一些需要的库
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import fbeta_score,make_scorer
# TODO:初始化分类器
clf = AdaBoostClassifier(random_state=0)
# TODO:创建你希望调节的参数列表
parameters = {'n_estimators': [50, 100, 200]}
# TODO:创建一个fbeta_score打分对象
scorer = make_scorer(fbeta_score, beta=0.5)
# TODO:在分类器上使用网格搜索,使用'scorer'作为评价函数
grid_obj = GridSearchCV(clf, parameters,scorer)
# TODO:用训练数据拟合网格搜索对象并找到最佳参数
grid_obj = grid_obj.fit(X_train, y_train)
# 得到estimator
best_clf = grid_obj.best_estimator_
# 使用没有调优的模型做预测
predictions = (clf.fit(X_train, y_train)).predict(X_val)
best_predictions = best_clf.predict(X_val)
# 汇报调优后的模型
print ("best_clf\n------")
print (best_clf)
# 汇报调参前和调参后的分数
print ("\nUnoptimized model\n------")
print ("Accuracy score on validation data: {:.4f}".format(accuracy_score(y_val, predictions)))
print ("F-score on validation data: {:.4f}".format(fbeta_score(y_val, predictions, beta = 0.5)))
print ("\nOptimized Model\n------")
print ("Final accuracy score on the validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)))
print ("Final F-score on the validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)))
Explanation: Improving Results
In this final section, you will pick the best of the three supervised learning models to use on the census data. You will then use grid search on the entire training set (X_train and y_train), tuning at least one parameter, to obtain an F-score better than the untuned model.
Question 3 - Choosing the Best Model
Based on the evaluation you performed earlier, explain to CharityML in one or two paragraphs which of the three models is most appropriate for judging whether a respondent makes more than \$50,000.
Hint: Your answer should cover the evaluation metrics, prediction/training time, and whether the algorithm is suitable for this data.
Answer: DecisionTree has the best accuracy score and F-score of the three models on the training set. Although its performance on the test set is not as good and it shows mild overfitting with default parameters, tuning the parameters should remove that problem. Its training time on the full data is longer, but still much faster than AdaBoost, and its prediction (query) time is short. Once the model is trained the main task is querying, which does not consume many resources, so I decided to use DecisionTree.
Question 4 - Describing the Model in Layman's Terms
In one or two paragraphs, explain to CharityML in plain language how the final model works. You need to describe the main characteristics of the chosen model, for example how it is trained and how it makes predictions. Avoid advanced mathematical or technical jargon, formulas, or specific algorithm names.
Answer: The model repeatedly splits the training data according to the input features, forming a tree-like structure. To make a prediction, the features of a new sample are fed in and followed down the matching branches of the tree until a final node is reached; that node gives the prediction.
Exercise: Model Tuning
Tune the parameters of the chosen model. Use grid search (GridSearchCV) to tune at least one important model parameter, trying at least 3 different values. Use the entire training set for this. In the code cell below you will implement the following:
Import sklearn.model_selection.GridSearchCV and sklearn.metrics.make_scorer.
Initialize the classifier you chose and store it in clf.
Set random_state if the model has that parameter.
Create a dictionary of the parameters you want to tune for this model.
Example: parameters = {'parameter' : [list of values]}.
Note: If your learner has a max_features parameter, do not tune it!
Use make_scorer to create an fbeta_score scoring object (with $\beta = 0.5$).
Run grid search on the classifier clf using 'scorer' as the evaluation function, and store it in grid_obj.
Fit the grid search object to the training data (X_train, y_train) and store the result in grid_fit.
Note: Depending on the parameter list you chose, the implementation below may take some time to run!
End of explanation
# TODO: Import a supervised learning model that has 'feature_importances_'
# (AdaBoost is used here as one possible choice; any estimator exposing
#  'feature_importances_', such as a random forest, would also work)
from sklearn.ensemble import AdaBoostClassifier
# TODO: Train the supervised model on the full training set
model = AdaBoostClassifier(random_state=0).fit(X_train, y_train)
# TODO: Extract the feature importances
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
Explanation: Question 5 - Final Model Evaluation
What are your optimized model's accuracy and F-score on the test data? Are these scores better or worse than the unoptimized model?
Note: Fill in your results in the table below, then provide discussion in the answer box.
Results:
| Metric | Unoptimized Model | Optimized Model |
| :------------: | :---------------: | :-------------: |
| Accuracy | | |
| F-score | | |
Answer:
Feature Importance
An important task when applying supervised learning to data such as the census data used here is deciding which features provide the strongest predictive power. Focusing on the relationship between a few effective features and the label simplifies our understanding of the phenomenon, which is useful in many situations. In the context of this project, that means we want to pick a small number of features that strongly predict whether a respondent makes more than \$50,000.
Choose a scikit-learn classifier that has a 'feature_importance_' attribute (for example AdaBoost or random forests). This attribute ranks the importance of the features. In the next code cell, fit this classifier to the training data and use the attribute to determine the five most important features of the census data.
Question 6 - Feature Relevance Observation
When exploring the data, it was shown that each record in this census dataset has thirteen available features.
Of these thirteen features, which five do you think are the most important for prediction, and why did you choose each one? How would you rank them?
Answer:
- Feature 1:
- Feature 2:
- Feature 3:
- Feature 4:
- Feature 5:
Exercise - Extracting Feature Importances
Choose a scikit-learn supervised learning classifier that has a feature_importance_ attribute. This attribute ranks the importance of each feature when making predictions with the chosen algorithm.
In the code cell below, you will implement the following:
- Import a supervised learning model from sklearn if it differs from the three models used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using 'feature_importances_'.
End of explanation
# 导入克隆模型的功能
from sklearn.base import clone
# 减小特征空间
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_val_reduced = X_val[X_val.columns.values[(np.argsort(importances)[::-1])[:5]]]
# 在前面的网格搜索的基础上训练一个“最好的”模型
clf_on_reduced = (clone(best_clf)).fit(X_train_reduced, y_train)
# 做一个新的预测
reduced_predictions = clf_on_reduced.predict(X_val_reduced)
# 对于每一个版本的数据汇报最终模型的分数
print ("Final Model trained on full data\n------")
print ("Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)))
print ("F-score on validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)))
print ("\nFinal Model trained on reduced data\n------")
print ("Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, reduced_predictions)))
print ("F-score on validation data: {:.4f}".format(fbeta_score(y_val, reduced_predictions, beta = 0.5)))
Explanation: Question 7 - Extracting Feature Importances
Observe the visualization created above, which shows the five features considered most relevant for predicting whether a respondent makes more than \$50,000.
Do the weights of these five features add up to more than 0.5?<br>
How do these five features compare with the features you discussed in Question 6?<br>
If your answers are similar, how does this visualization confirm your thoughts?<br>
If your choices differ, why do you think these features are more relevant?
Answer:
Feature Selection
How would the model perform if we only used a subset of the available features? Training on fewer features should, from the viewpoint of the evaluation metrics, reduce training and prediction time. From the visualization above we can see that the five most important features contribute more than half of the total importance of all features in the data. This suggests we can try to reduce the feature space and simplify the information the model has to learn. The code cell below will take the optimized model you found earlier and train it on the same training set using only the five most important features.
End of explanation
#TODO test your model on testing data and report accuracy and F score
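# A possible completion (sketch): evaluate the tuned model from the grid search
# above on the held-out test set. Assumes best_clf, X_test, y_test, accuracy_score
# and fbeta_score are already defined earlier in this notebook.
best_test_predictions = best_clf.predict(X_test)
print ("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_test_predictions)))
print ("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_test_predictions, beta = 0.5)))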
Explanation: Question 8 - Effects of Feature Selection
How do the final model's F-score and accuracy on the data with only five features compare with the scores obtained using all features?
If training time were a factor, would you consider using the reduced-feature data as your training set?
Answer:
Question 9 - Testing your Model on the Test Set
It is finally time to test. Remember, the test set may only be used once.
Using the model you are most confident in, evaluate it on the test set and compute the accuracy and F-score.
Briefly explain why you chose this model, and discuss the test results.
End of explanation |
2,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DefinedAEpTandZ0 media example
Step1: Measurement of two CPWG lines with different lengths
The measurements were performed on 21 March 2017 on an Anritsu MS46524B 20 GHz Vector Network Analyser. The setup is a linear frequency sweep from 1 MHz to 10 GHz with 10'000 points. Output power is 0 dBm, the IF bandwidth is 1 kHz, and neither averaging nor smoothing is used.
CPWGxxx is an L-long, W-wide coplanar waveguide on ground with a G-wide gap to the top ground, made of T-thick copper on an H-high substrate with top and bottom ground planes. A closely spaced via wall is placed on both sides of the line, and the top and bottom ground planes are connected by many vias.
| Name | L (mm) | W (mm) | G (mm) | H (mm) | T (um) | Substrate |
| :--- | ---: | ---: | ---: | ---: | ---: | :--- |
| CPWG100 | 100 | 1.70 | 0.50 | 1.55 | 50 | FR-4 |
| CPWG200 | 200 | 1.70 | 0.50 | 1.55 | 50 | FR-4 |
Step2: Impedance from the line and from the connector section may be estimated on the step response.
The line section is not flat, there is some variation in the impedance which may be induced by manufacturing tolerances and dielectric inhomogeneity.
Note that the delays on the reflection plot are twice the effective section delays because the wave travels back and forth on the line.
Connector discontinuity is about 50 ps long. TL100 line plateau (flat impedance part) is about 450 ps long.
Step3: Dielectric effective relative permittivity extraction by multiline method
Step4: Calibration results show a very low residual noise floor. The error model is well fitted.
Step5: Relative permittivity $\epsilon_{r,eff}$ and attenuation $A$ show reasonable agreement.
A better agreement could be achieved by implementing the Kirschning and Jansen microstripline dispersion model or using a linear correction.
Connectors effects estimation
Step6: Connector + thru plots show reasonable agreement between the calibration results and the model.
Final check | Python Code:
%load_ext autoreload
%autoreload 2
import skrf as rf
import skrf.mathFunctions as mf
import numpy as np
from numpy import real, log, log10, sum, absolute, pi, sqrt
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator
from scipy.optimize import minimize
rf.stylely()
Explanation: DefinedAEpTandZ0 media example
End of explanation
# Load raw measurements
TL100 = rf.Network('CPWG100.s2p')
TL200 = rf.Network('CPWG200.s2p')
TL100_dc = TL100.extrapolate_to_dc(kind='linear')
TL200_dc = TL200.extrapolate_to_dc(kind='linear')
plt.figure()
plt.suptitle('Raw measurement')
TL100.plot_s_db()
TL200.plot_s_db()
plt.figure()
t0 = -2
t1 = 4
plt.suptitle('Time domain reflexion step response (DC extrapolation)')
ax = plt.subplot(1, 1, 1)
TL100_dc.s11.plot_z_time_step(pad=2000, window='hamming', z0=50, label='TL100', ax=ax, color='0.0')
TL200_dc.s11.plot_z_time_step(pad=2000, window='hamming', z0=50, label='TL200', ax=ax, color='0.2')
ax.set_xlim(t0, t1)
ax.xaxis.set_minor_locator(AutoMinorLocator(10))
ax.yaxis.set_minor_locator(AutoMinorLocator(5))
ax.patch.set_facecolor('1.0')
ax.grid(True, color='0.8', which='minor')
ax.grid(True, color='0.4', which='major')
plt.show()
Explanation: Measurement of two CPWG lines with different lengths
The measurements were performed on 21 March 2017 on an Anritsu MS46524B 20 GHz Vector Network Analyser. The setup is a linear frequency sweep from 1 MHz to 10 GHz with 10'000 points. Output power is 0 dBm, the IF bandwidth is 1 kHz, and neither averaging nor smoothing is used.
CPWGxxx is an L-long, W-wide coplanar waveguide on ground with a G-wide gap to the top ground, made of T-thick copper on an H-high substrate with top and bottom ground planes. A closely spaced via wall is placed on both sides of the line, and the top and bottom ground planes are connected by many vias.
| Name | L (mm) | W (mm) | G (mm) | H (mm) | T (um) | Substrate |
| :--- | ---: | ---: | ---: | ---: | ---: | :--- |
| CPWG100 | 100 | 1.70 | 0.50 | 1.55 | 50 | FR-4 |
| CPWG200 | 200 | 1.70 | 0.50 | 1.55 | 50 | FR-4 |
The milling of the artwork is performed mechanically with a lateral wall of 45°.
The relative permittivity of the dielectric was assumed to be approximately 4.5 for design purposes.
End of explanation
Z_conn = 53.2 # ohm, connector impedance
Z_line = 51.4 # ohm, line plateau impedance
d_conn = 0.05e-9 # s, connector discontinuity delay
d_line = 0.45e-9 # s, line plateau delay, without connectors
Explanation: Impedance from the line and from the connector section may be estimated on the step response.
The line section is not flat, there is some variation in the impedance which may be induced by manufacturing tolerances and dielectric inhomogeneity.
Note that the delays on the reflection plot are twice the effective section delays because the wave travels back and forth on the line.
Connector discontinuity is about 50 ps long. TL100 line plateau (flat impedance part) is about 450 ps long.
End of explanation
#Make the missing reflect measurement
#This is possible because we already have existing calibration
#and know what the open measurement would look like at the reference plane
#'refl_offset' needs to be set to -half_thru - connector_length.
reflect = TL100.copy()
reflect.s[:,0,0] = 1
reflect.s[:,1,1] = 1
reflect.s[:,1,0] = 0
reflect.s[:,0,1] = 0
# Perform NISTMultilineTRL algorithm. Reference plane is at the center of the thru.
cal = rf.NISTMultilineTRL([TL100, reflect, TL200], [1], [0, 100e-3], er_est=3.0, refl_offset=[-56e-3])
plt.figure()
plt.title('Corrected lines')
cal.apply_cal(TL100).plot_s_db()
cal.apply_cal(TL200).plot_s_db()
plt.show()
Explanation: Dielectric effective relative permittivity extraction by multiline method
End of explanation
from skrf.media import DefinedAEpTandZ0
freq = TL100.frequency
f = TL100.frequency.f
f_ghz = TL100.frequency.f/1e9
L = 0.1
A = 0.0
f_A = 1e9
ep_r0 = 2.0
tanD0 = 0.001
f_ep = 1e9
x0 = [ep_r0, tanD0]
ep_r_mea = cal.er_eff.real
A_mea = 20/log(10)*cal.gamma.real
def model(x, freq, ep_r_mea, A_mea, f_ep):
ep_r, tanD = x[0], x[1]
m = DefinedAEpTandZ0(frequency=freq, ep_r=ep_r, tanD=tanD, Z0=50,
f_low=1e3, f_high=1e18, f_ep=f_ep, model='djordjevicsvensson')
ep_r_mod = m.ep_r_f.real
A_mod = m.alpha * log(10)/20
return sum((ep_r_mod - ep_r_mea)**2) + 0.001*sum((20/log(10)*A_mod - A_mea)**2)
res = minimize(model, x0, args=(TL100.frequency, ep_r_mea, A_mea, f_ep),
bounds=[(2, 4), (0.001, 0.013)])
ep_r, tanD = res.x[0], res.x[1]
print('epr={:.3f}, tand={:.4f} at {:.1f} GHz.'.format(ep_r, tanD, f_ep * 1e-9))
m = DefinedAEpTandZ0(frequency=freq, ep_r=ep_r, tanD=tanD, Z0=50,
f_low=1e3, f_high=1e18, f_ep=f_ep, model='djordjevicsvensson')
plt.figure()
plt.suptitle('Effective relative permittivity and attenuation')
plt.subplot(2,1,1)
plt.ylabel('$\epsilon_{r,eff}$')
plt.plot(f_ghz, ep_r_mea, label='measured')
plt.plot(f_ghz, m.ep_r_f.real, label='model')
plt.legend()
plt.subplot(2,1,2)
plt.xlabel('Frequency [GHz]')
plt.ylabel('A (dB/m)')
plt.plot(f_ghz, A_mea, label='measured')
plt.plot(f_ghz, 20/log(10)*m.alpha, label='model')
plt.legend()
plt.show()
Explanation: Calibration results show a very low residual noise floor. The error model is well fitted.
End of explanation
# note: a half line is embedded in connector network
coefs = cal.coefs
r = mf.sqrt_phase_unwrap(coefs['forward reflection tracking'])
s1 = np.array([[coefs['forward directivity'],r],
[r, coefs['forward source match']]]).transpose()
conn = TL100.copy()
conn.name = 'Connector'
conn.s = s1
# delay estimation,
phi_conn = (np.angle(conn.s[:500,1,0]))
z = np.polyfit(f[:500], phi_conn, 1)
p = np.poly1d(z)
delay = -z[0]/(2*np.pi)
print('Connector + half thru delay: {:.0f} ps'.format(delay * 1e12))
print('TDR readed half thru delay: {:.0f} ps'.format(d_line/2 * 1e12))
d_conn_p = delay - d_line/2
print('Connector delay: {:.0f} ps'.format(d_conn_p * 1e12))
# connector model with guessed loss
half = m.line(d_line/2, 's', z0=Z_line)
mc = DefinedAEpTandZ0(m.frequency, ep_r=1, tanD=0.025, Z0=50,
f_low=1e3, f_high=1e18, f_ep=f_ep, model='djordjevicsvensson')
left = mc.line(d_conn_p, 's', z0=Z_conn)
right = left.flipped()
check = mc.thru() ** left ** half ** mc.thru()
plt.figure()
plt.suptitle('Connector + half thru comparison')
plt.subplot(2,1,1)
conn.plot_s_deg(1, 0, label='measured')
check.plot_s_deg(1, 0, label='model')
plt.ylabel('phase (rad)')
plt.legend()
plt.subplot(2,1,2)
conn.plot_s_db(1, 0, label='Measured')
check.plot_s_db(1, 0, label='Model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
Explanation: Relative permittivity $\epsilon_{r,eff}$ and attenuation $A$ show reasonable agreement.
A better agreement could be achieved by implementing the Kirschning and Jansen microstripline dispersion model or using a linear correction.
Connectors effects estimation
End of explanation
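As a rough illustration of the "linear correction" mentioned above, one could fit a straight line to the residual between the measured and modelled effective permittivity and add it back to the model. The sketch below is only an illustration under that assumption; it uses quantities already defined earlier (f, f_ghz, ep_r_mea and m) and is not part of the original analysis.
# Sketch of a simple linear (first-order in frequency) correction of the modelled permittivity
residual = ep_r_mea - m.ep_r_f.real
lin = np.poly1d(np.polyfit(f, residual, 1))
ep_r_corrected = m.ep_r_f.real + lin(f)
plt.figure()
plt.plot(f_ghz, ep_r_mea, label='measured')
plt.plot(f_ghz, ep_r_corrected, label='model + linear correction')
plt.xlabel('Frequency (GHz)')
plt.ylabel('$\epsilon_{r,eff}$')
plt.legend()
plt.show()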
DUT = m.line(d_line, 's', Z_line)
DUT.name = 'model'
Check = m.thru() ** left ** DUT ** right ** m.thru()
Check.name = 'model with connectors'
plt.figure()
TL100.plot_s_db()
Check.plot_s_db(1,0, color='k')
Check.plot_s_db(0,0, color='k')
plt.show()
Check_dc = Check.extrapolate_to_dc(kind='linear')
plt.figure()
plt.suptitle('Time domain step-response')
ax = plt.subplot(1,1,1)
TL100_dc.s11.plot_z_time_step(pad=2000, window='hamming', label='Measured', ax=ax, color='k')
Check_dc.s11.plot_z_time_step(pad=2000, window='hamming', label='Model', ax=ax, color='b')
t0 = -2
t1 = 4
ax.set_xlim(t0, t1)
ax.xaxis.set_minor_locator(AutoMinorLocator(10))
ax.yaxis.set_minor_locator(AutoMinorLocator(5))
ax.patch.set_facecolor('1.0')
ax.grid(True, color='0.8', which='minor')
ax.grid(True, color='0.5', which='major')
Explanation: Connector + thru plots show reasonable agreement between the calibration results and the model.
Final check
End of explanation |
2,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigO, Complexity, Time Complexity, Space Complexity, Algorithm Analysis
cf. pp. 40 McDowell, 6th Ed. VI BigO
cf. 2.2. What Is Algorithm Analysis?
Step1: A good basic unit of computation for comparing the summation algorithms might be to count the number of assignment statements performed.
Step3: cf. 2.4. An Anagram Detection Example
Step4: 2.4.2. Sort and Compare Solution 2 | Python Code:
def sumOfN(n):
theSum = 0
for i in range(1,n+1):
theSum = theSum + i
return theSum
print(sumOfN(10))
def foo(tom):
fred = 0
for bill in range(1,tom+1):
barney = bill
fred = fred + barney
return fred
print(foo(10))
import time
def sumOfN2(n):
start = time.time()
theSum = 0 # 1 assignment
for i in range(1,n+1):
theSum = theSum + i # n assignments
end = time.time()
return theSum, end-start # (1 + n) assignements
for i in range(5):
print("Sum is %d required %10.7f seconds " % sumOfN2(10000) )
for i in range(5):
print("Sum is %d required %10.7f seconds " % sumOfN2(100000) )
for i in range(5):
print("Sum is %d required %10.7f seconds " % sumOfN2(1000000) )
def sumOfN3(n):
start=time.time()
theSum = (n*(n+1))/2
end=time.time()
return theSum, end-start
print(sumOfN3(10))
for i in range(5):
print("Sum is %d required %10.7f seconds " % sumOfN3(10000*10**(i)) )
Explanation: BigO, Complexity, Time Complexity, Space Complexity, Algorithm Analysis
cf. pp. 40 McDowell, 6th Ed. VI BigO
cf. 2.2. What Is Algorithm Analysis?
End of explanation
def findmin(X):
start=time.time()
minval= X[0]
for ele in X:
if minval > ele:
minval = ele
end=time.time()
return minval, end-start
def findmin2(X):
start=time.time()
L = len(X)
overallmin = X[0]
for i in range(L):
minval_i = X[i]
for j in range(L):
if minval_i > X[j]:
minval_i = X[j]
if overallmin > minval_i:
overallmin = minval_i
end=time.time()
return overallmin, end-start
import random
for i in range(5):
print("findmin is %d required %10.7f seconds" % findmin( [random.randrange(1000000) for _ in range(10000*10**i)] ) )
for i in range(5):
print("findmin2 is %d required %10.7f seconds" % findmin2( [random.randrange(1000000) for _ in range(10000*10**i)] ) )
Explanation: A good basic unit of computation for comparing the summation algorithms might be to count the number of assignment statements performed.
End of explanation
def anagramSolution(s1,s2):
    """
    @fn anagramSolution
    @details 1 string is an anagram of another if the 2nd is simply a rearrangement of the 1st
    'heart' and 'earth' are anagrams
    'python' and 'typhon' are anagrams
    """
A = list(s2) # Python strings are immutable, so make a list
pos1 = 0
stillOK = True
while pos1 < len(s1) and stillOK:
pos2 = 0
found = False
while pos2 < len(A) and not found:
if s1[pos1] == A[pos2]: # given s1[pos1], try to find it in A, changing pos2
found = True
else:
pos2 = pos2+1
if found:
A[pos2] = None
else:
stillOK = False
pos1 = pos1 + 1
return stillOK
anagramSolution("heart","earth")
anagramSolution("python","typhon")
anagramSolution("anagram","example")
Explanation: cf. 2.4. An Anagram Detection Example
End of explanation
def anagramSolution2(s1,s2):
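    # A possible completion (sketch, not from the original notebook): the
    # "sort and compare" approach - two strings are anagrams exactly when
    # their sorted character lists are equal.
    alist1 = sorted(s1)
    alist2 = sorted(s2)
    return alist1 == alist2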
Explanation: 2.4.2. Sort and Compare Solution 2
End of explanation |
2,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Set 12
First the exercises
Step1: Let us load up a sample dataset.
Step6: Now construct a KNN classifier
Step7: Calculate accuracy on this very small subset.
Step8: Let's time these different methods to see if the "faster_preds" is actually faster
Step11: Okay now, let us try the clustering algorithm.
Step12: Let us load the credit card dataset and extract a small dataframe of numerical features to test on.
Step14: Now let us write our transformation function.
Step15: Now let us build some simple loss functions for 1d labels.
Step17: Now let us define the find split function.
Step18: One hot encode our dataset
Step20: Test this to see if it is reasonable
Step21: Test this out.
Step22: The naive option | Python Code:
import numpy as np
import pandas as pd
import keras
from keras.datasets import mnist
Explanation: Problem Set 12
First the exercises:
* Let $\mu=\frac{1}{|S|}\sum_{x_i\in S} x_i$ let us expand
\begin{align}
\sum_{x_i\in S} ||x_i-\mu||^2 &=\sum_{x_i\in S}(x_i-\mu)^T(x_i-\mu)\
&= |S|\mu^T\mu+\sum_{x_i\in S}\left( x_i^Tx_i-2\mu^T x_i \right) \
&= \frac{1}{|S|}\left(\sum_{(x_i,x_j)\in S\times S} x_i^T x_j\right) + \sum_{x_i\in S} \left( x_i^Tx_i-\frac{2}{|S|}\left(\sum_{x_j\in S} x_j^T x_i\right)\right)\
&= \sum_{x_i\in S} x_i^Tx_i-\frac{1}{|S|}\sum_{(x_i,x_j)\in S\times S} x_j^T x_i\
&= \frac{1}{2}\left(\sum_{x_i\in S} x_i^Tx_i-\frac{2}{|S|}\sum_{(x_i,x_j)\in S\times S} x_j^T x_i+\sum_{x_j\in S} x_j^Tx_j \right)\
&= \frac{1}{2|S|}\left(\sum_{(x_i,x_j)\in S\times S} x_i^Tx_i-2\sum_{(x_i,x_j)\in S\times S} x_j^T x_i+\sum_{(x_i,x_j)\in S\times S} x_j^Tx_j \right)\
&= \frac{1}{2|S|}\sum_{(x_i,x_j)\in S\times S} (x_i-x_j)^T(x_i-x_j)\
&= \frac{1}{2|S|}\sum_{(x_i,x_j)\in S\times S} ||x_i-x_j||^2
\end{align}
as desired.
* So the $K$-means algorithm consists of iterations of two steps, we will show that either the algorithm has stabilized or that each of these steps decreases
[ T=\sum_{c=1}^K \sum_{x_i\in S_c}||x_i-\mu_c||^2,] where $S_c$ is the $c$th cluster and $\mu_c$ is the previously defined mean over that cluster. The sequence defined by these sums is therefore monotonically decreasing and bounded below so it will eventually approach the maximal lower bound.
The value of $\mu$ that minimizes $\sum_{x_i\in S_c}||x_i-\mu||^2$ is $\frac{1}{|S_c|}\sum_{x_i\in S_c} x_i$ (we can check this by setting the derivative with respect to $\mu$ to zero). So updating the mean estimates will never increase $T$. If we do not update the mean estimates than the cluster assignments will not change on the next step.
The next step maps samples to their closest mean which can only decrease the sum $T$.
Python Lab
Now let us load our standard libraries.
End of explanation
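A quick numerical sanity check of the first identity (an added sketch, not part of the original write-up): for a random set of points, the sum of squared distances to the mean should equal the pairwise form derived above.
# Numerical check of: sum_i ||x_i - mu||^2 == (1/(2|S|)) * sum_{i,j} ||x_i - x_j||^2
import numpy as np
S = np.random.randn(50, 3)
mu = S.mean(axis=0)
lhs = np.sum((S - mu) ** 2)
diffs = S[:, None, :] - S[None, :, :]
rhs = np.sum(diffs ** 2) / (2 * len(S))
print(np.allclose(lhs, rhs))  # expected: True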
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train.shape
import matplotlib.pyplot as plt
%matplotlib inline
randix = np.random.randint(0,60000)
plt.imshow(x_train[randix])
print("Label is {}.".format(y_train[randix]))
x_train_f = x_train.reshape(60000,-1)
x_train_f.shape
x_test_f = x_test.reshape(-1, 28**2)
x_test_f.shape
from sklearn.preprocessing import OneHotEncoder as OHE
ohe = OHE(sparse = False)
y_train_ohe = ohe.fit_transform(y_train.reshape(-1,1))
y_test_ohe = ohe.fit_transform(y_test.reshape(-1,1))
np.argmax(y_train_ohe[randix]) == y_train[randix]
Explanation: Let us load up a sample dataset.
End of explanation
from scipy.spatial.distance import cdist
from sklearn.neighbors import KDTree
class KNNClassifier(object):
def fit(self,x,y,k=1,fun=lambda x: np.mean(x,axis=0)):
Fits a KNN regressor.
Args:
x (numpy array) Array of samples indexed along first axis.
y (numpy array) Array of corresponding labels.
k (int) the number of neighbors
fun (function numpy array --> desired output) Function to be applied to k-nearest
neighbors for predictions
self.x = x[:]
self.y = y[:]
self.k = k
self.f = fun
self.tree = KDTree(self.x)
def predict_one(self, sample):
Run prediction on sample
Args:
new_x (numpy array) sample
dists = cdist(sample.reshape(1,-1),self.x)
ix = np.argpartition(dists,self.k-1)[0,0:self.k]
return self.f(self.y[ix])
def predict(self, samples):
Run predictions on list.
Args:
samples (numpy array) samples
return np.array([self.predict_one(x) for x in samples])
def faster_predict(self,samples):
Run faster predictions on list.
Args:
samples (numpy array) samples
_, ixs = self.tree.query(samples, k=self.k)
#print(ixs)
return np.array([self.f(self.y[ix]) for ix in ixs])
classifier = KNNClassifier()
classifier.fit(x_train_f, y_train_ohe, k=1)
preds=classifier.predict(x_test_f[:500])
Explanation: Now construct a KNN classifier
End of explanation
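As a usage note (an illustrative sketch, not in the original notebook): with the default fun averaging the one-hot labels, choosing k > 1 turns the prediction into a majority vote over the neighbours. The example below reuses the arrays already defined above.
# Sketch: 5-nearest-neighbour majority vote with the classifier defined above
knn5 = KNNClassifier()
knn5.fit(x_train_f, y_train_ohe, k=5)
preds5 = knn5.faster_predict(x_test_f[:500])
print(np.mean(np.argmax(preds5, axis=1) == y_test[:500]))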
np.mean(np.argmax(preds,axis=1)==y_test[:500])
faster_preds = classifier.faster_predict(x_test_f[:500])
np.mean(np.argmax(faster_preds,axis=1)==y_test[:500])
Explanation: Calculate accuracy on this very small subset.
End of explanation
from timeit import default_timer as timer
start = timer()
classifier.predict(x_test_f[:500])
end = timer()
print(end-start)
start = timer()
classifier.faster_predict(x_test_f[:500])
end = timer()
print(end-start)
Explanation: Let's time these different methods to see if the "faster_preds" is actually faster:
End of explanation
def cluster_means(x,cluster_assignments,k):
    """Return the new cluster means and the within-cluster squared distance given the cluster assignments."""
cluster_counter = np.zeros((k,1))
cluster_means = np.zeros((k, x.shape[1]))
for cluster, pt in zip(cluster_assignments, x):
#print(x)
cluster_means[cluster] += pt
cluster_counter[cluster]+=1
cluster_means = cluster_means/cluster_counter
wcss = 0.
for cluster, pt in zip(cluster_assignments, x):
wcss+=np.sum((pt-cluster_means[cluster])**2)
return cluster_means, wcss
class KMeansCluster(object):
#Fit a clustering object on a dataset x consisting of samples on each row
#by the K-means algorithm into k clusters
def fit(self,x,k):
        """
        Fit k-means clusterer
        Args:
            x (numpy array) samples
            k (int) number of clusters
        """
num_samples, num_features = x.shape[0], x.shape[1]
#Randomly assign clusters
cluster_assignments = np.random.randint(0,k,num_samples)
#initialize
cluster_mus = np.zeros((k,num_features))
#update
new_cluster_mus, wcss = cluster_means(x,cluster_assignments,k)
count = 1
while (cluster_mus!=new_cluster_mus).any() and count < 10**3:
count += 1
print("Iteration {:3d}, WCSS = {:10f}".format(count,wcss),end="\r")
cluster_mus = new_cluster_mus
#calculate distances
distances = cdist(x,cluster_mus, metric = 'sqeuclidean')
np.argmin(distances, axis = 1, out = cluster_assignments)
new_cluster_mus, wcss = cluster_means(x,cluster_assignments,k)
        self.cluster_means = new_cluster_mus
self.cluster_assignments = cluster_assignments
self.x = x[:]
self.wcss = wcss
clusterer = KMeansCluster()
clusterer.fit(x_train_f,10)
clusterer2 = KMeansCluster()
clusterer2.fit(x_train_f,10)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train, clusterer2.cluster_assignments)
cluster_samples = clusterer2.x[clusterer2.cluster_assignments == 0]
plt.imshow(cluster_samples[0].reshape(28,28))
plt.imshow(cluster_samples[1].reshape(28,28))
plt.imshow(cluster_samples[23].reshape(28,28))
plt.imshow(cluster_samples[50].reshape(28,28))
np.mean(classifier.faster_predict(cluster_samples),axis=0)
Explanation: Okay now, let us try the clustering algorithm.
End of explanation
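One way to pick the number of clusters (again only a sketch, and slow if run on the full dataset) is to compare the final within-cluster sum of squares for a few values of k on a subsample, using the KMeansCluster class defined above.
# Sketch: compare WCSS for a few cluster counts on a small subsample
for k in (5, 10, 20):
    c = KMeansCluster()
    c.fit(x_train_f[:2000], k)
    print(k, c.wcss)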
big_df = pd.read_csv("UCI_Credit_Card.csv")
big_df.head()
len(big_df)
len(big_df.dropna())
df = big_df.drop(labels = ['ID'], axis = 1)
labels = df['default.payment.next.month']
df.drop('default.payment.next.month', axis = 1, inplace = True)
num_samples = 25000
train_x, train_y = df[0:num_samples], labels[0:num_samples]
test_x, test_y = df[num_samples:], labels[num_samples:]
test_x.head()
train_y.head()
Explanation: Let us load the credit card dataset and extract a small dataframe of numerical features to test on.
End of explanation
class bin_transformer(object):
def __init__(self, df, num_quantiles = 2):
#identify list of quantiles
self.quantiles = df.quantile(np.linspace(1./num_quantiles, 1.-1./num_quantiles,num_quantiles-1))
def transform(self, df):
Args:
df (pandas dataframe) : dataframe to transform
Returns:
new (pandas dataframe) : new dataframe where for every feature of the original there will be
num_quantiles-1 features corresponding to whether or not the original values where greater
than or equal to the corresponding quantile.
fns (dictionary (string,float)) returns dictionary of quantiles
new = pd.DataFrame()
fns = {}
for col_name in df.axes[1]:
for ix, q in self.quantiles.iterrows():
quart = q[col_name]
new[col_name+str(ix)] = (df[col_name] >= quart)
fn = quart
fns[col_name+str(ix)] = [col_name, fn]
return new, fns
transformer = bin_transformer(train_x,2)
train_x_t, tr_fns = transformer.transform(train_x)
test_x_t, test_fns = transformer.transform(test_x)
train_x_t.head()
Explanation: Now let us write our transformation function.
End of explanation
def bdd_cross_entropy(pred, label):
return np.mean(-np.sum(label*np.log(pred+10**(-8)),axis=1))
def MSE(pred,label):
return np.mean(np.sum((pred-label)**2, axis=1))
def acc(pred,label):
return np.mean(np.argmax(pred,axis=1)==np.argmax(label, axis=1))
def SSE(x,y):
return np.sum((x-y)**2)
def gini(x,y):
return 1-np.sum(np.mean(y,axis=0)**2)
Explanation: Now let us build some simple loss functions for 1d labels.
End of explanation
def find_split(x, y, loss, verbose = False):
Args:
x (dataframe) : dataframe of boolean values
y (dataframe (1 column)) : dataframe of labeled values
loss (function: (yvalue, dataframe of labels)-->float) : calculates loss for prediction of yvalue
for a dataframe of true values.
verbose (bool) : whether or not to include debugging info
min_ax = None
N = x.shape[0]
base_loss = loss(np.mean(y,axis=0),y)
min_loss = base_loss
for col_name in x.axes[1]:
mask = x[col_name]
num_pos = np.sum(mask)
num_neg = N - num_pos
if num_neg*num_pos == 0:
continue
pos_y = np.mean(y[mask], axis = 0)
neg_y = np.mean(y[~mask], axis = 0)
l = (num_pos*loss(pos_y, y[mask]) + num_neg*loss(neg_y, y[~mask]))/N
if verbose:
print("Column {0} split has improved loss {1}".format(col_name, base_loss-l))
if l < min_loss:
min_loss = l
min_ax = col_name
return min_ax, min_loss, base_loss-min_loss
Explanation: Now let us define the find split function.
End of explanation
ohe = OHE(sparse = False)
train_y_ohe = ohe.fit_transform(train_y.values.reshape(-1,1))
train_y_ohe[0:5],train_y.values[0:5]
test_y_ohe = ohe.transform(test_y.values.reshape(-1,1))
Explanation: One hot encode our dataset
End of explanation
find_split(train_x_t, train_y_ohe, bdd_cross_entropy, verbose = False)
np.mean(train_y_ohe[train_x_t['LIMIT_BAL0.5']],axis=0)
np.mean(train_y_ohe[~train_x_t['LIMIT_BAL0.5']],axis = 0)
np.mean(train_y_ohe,axis=0)
#Slow but simple
class decision_tree(object):
def __init__(self):
self.f = None
def fit(self, x,y,depth=5,loss=MSE, minsize = 1, quintiles = 2, verbose = False):
#Construct default function
mu = np.mean(y, axis=0)
self.f = lambda a: mu
# Check our stopping criteria
if(x.shape[0]<=minsize or depth == 0):
return
# transform our data
tr = bin_transformer(x, quintiles)
tr_x, fns = tr.transform(x)
split, split_loss, improvement = find_split(tr_x,y,loss)
if verbose:
print("Improvement: {}".format(improvement))
#if no good split was found return
if split == None:
return
# Build test function
col_to_split = fns[split][0]
splitter = lambda a: (a[col_to_split] >= fns[split][1])
mask = tr_x[split]
left = decision_tree()
right = decision_tree()
left.fit(x[~mask],y[~mask],depth-1,loss, minsize, quintiles)
right.fit(x[mask],y[mask],depth-1,loss, minsize, quintiles)
def g(z):
if(splitter(z)):
return right.f(z)
else:
return left.f(z)
self.f = g
def predict(self, x):
        """Used for bulk prediction."""
num_samples = x.shape[0]
return np.array([self.f(x.iloc[ix,:]) for ix in range(num_samples)])
Explanation: Test this to see if it is reasonable:
End of explanation
dt = decision_tree()
dt.fit(train_x, train_y_ohe, loss = MSE, minsize = 1, depth = 6, quintiles = 50)
dt.predict(test_x.iloc[0:3,:]), test_y_ohe[0:3]
preds = dt.predict(train_x)
np.mean(np.argmax(preds, axis=1)==train_y)
Explanation: Test this out.
End of explanation
1-np.mean(test_y)
class gradient_boosting_trees(object):
def fit(self, x, y, depth = 2, quintiles = 10, num_trees = 10):
self.forest = [None]*num_trees
cur_y = y[:]
for ix in range(num_trees):
self.forest[ix] = decision_tree()
self.forest[ix].fit(x, cur_y, loss=MSE, depth = depth, quintiles = quintiles, minsize = 1)
preds = self.forest[ix].predict(x)
cur_y = cur_y - preds
def predict(self,x):
s = 0.
preds = [tree.predict(x) for tree in self.forest]
for t in preds:
s+=t
return s
forest = gradient_boosting_trees()
train_y_ohe = ohe.fit_transform(train_y.values.reshape(-1,1))
forest.fit(train_x, train_y_ohe, depth = 20, num_trees = 5, quintiles = 20)
forest.predict(test_x.iloc[0:3,:]), test_y_ohe[0:3]
for_preds = forest.predict(train_x)
for_preds[0:5,:]
train_y_ohe[0:3]
np.mean(np.argmax(for_preds, axis=1)==train_y)
for_preds = forest.predict(test_x)
np.mean(np.argmax(for_preds, axis=1)==test_y)
from sklearn import tree
sktree = tree.DecisionTreeClassifier(max_depth=20)
sktree.fit(train_x, train_y_ohe)
Explanation: The naive option:
End of explanation |
2,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation Runs 3 – 16 based on experiment fits
Table of Contents
Step1: Run 4
Step2: Run 5
Step3: Run 14
Step4: Run 15
Step5: Run 16
Step6: Run 6
Step7: Run 7
Step8: Run 8
Step9: Run 9
Step10: Run 11 | Python Code:
%%writefile simulation_run_3.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run3/'))
mrnafiles = ['../annotations/simulations/run3/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/run3_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/run3_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/run3_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run3/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(40):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_3.py',
str(index)
])
Explanation: Simulation Runs 3 – 16 based on experiment fits
Table of Contents
- Run 3: Predict YFP synthesis rate of initiation mutants based on fit of stall strengths to single mutant data (for Fig 4, Fig. 4 supplement 1A–G)
- Run 4: Predict YFP synthesis rate of CTC, CTT double mutants based on fit of stall strengths to single mutant data (for Fig. 5 figure supplement 1A, 1B)
- Run 5: Predict YFP synthesis rate of CTC distance mutants based on fit of stall strengths to single mutant data (for Fig. 6 figure supplement 1)
- Run 14: Predict YFP synthesis rate of serine initiation mutants based on fit of stall strengths to single mutant data (for Fig. 4 supplement 1H)
- Run 15: Predict YFP synthesis rate of serine double mutants based on fit of stall strengths to single mutant data (for Fig. 5 figure supplement 1C)
- Run 16: Predict YFP synthesis rate of CTA multiple mutants based on fit of stall strengths to single mutant data (for Fig. 5)
- Run 6: Vary initiation rate systematically for 3 different models (for Fig. 3A)
- Run 7: Vary number of stall sites systematically for 3 different models (for Fig. 3B)
- Run 8: Vary distance between stall sites systematically for 3 different models (for Fig. 3C)
- Run 9: Vary abortive termination rate systematically for 3 different models (for Fig. 7)
- Run 11: Predict YFP synthesis rate of CTA distance mutants based on fit of stall strengths to single mutant data (for Fig. 6)
Run 3: Predict YFP synthesis rate of initiation mutants based on fit of stall strengths to single mutant data (for Fig 4, Fig. 4 supplement 1A–G)
End of explanation
%%writefile simulation_run_4.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run4/'))
mrnafiles = ['../annotations/simulations/run4/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/run4_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/run4_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/run4_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run4/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(30):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_4.py',
str(index)
])
Explanation: Run 4: Predict YFP synthesis rate of CTC, CTT double mutants based on fit of stall strengths to single mutant data (for Fig. 5 figure supplement 1A, 1B)
End of explanation
%%writefile simulation_run_5.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run5/'))
mrnafiles = ['../annotations/simulations/run5/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/run5_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/run5_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/run5_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run5/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(20):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_5.py',
str(index)
])
Explanation: Run 5: Predict YFP synthesis rate of CTC distance mutants based on fit of stall strengths to single mutant data (for Fig. 6 figure supplement 1)
End of explanation
%%writefile simulation_run_14.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run14/'))
mrnafiles = ['../annotations/simulations/run14/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/run14_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/run14_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/run14_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/serine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run14/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(15):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_14.py',
str(index)
])
Explanation: Run 14: Predict YFP synthesis rate of serine initiation mutants based on fit of stall strengths to single mutant data (for Fig. 4 supplement 1H)
End of explanation
%%writefile simulation_run_15.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run15/'))
mrnafiles = ['../annotations/simulations/run15/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/run15_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/run15_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/run15_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/serine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run15/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(15):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_15.py',
str(index)
])
Explanation: Run 15: Predict YFP synthesis rate of serine double mutants based on fit of stall strengths to single mutant data (for Fig. 5 figure supplement 1C)
End of explanation
%%writefile simulation_run_16.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run16/'))
mrnafiles = ['../annotations/simulations/run16/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/run16_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/run16_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/run16_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run16/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(18):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_16.py',
str(index)
])
Explanation: Run 16: Predict YFP synthesis rate of CTA multiple mutants based on fit of stall strengths to single mutant data (for Fig. 5)
End of explanation
%%writefile simulation_run_6.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run6/'))
mrnafiles = ['../annotations/simulations/run6/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/runs678_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/runs678_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/runs678_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run6/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(8):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '10', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_6.py',
str(index)
])
Explanation: Run 6: Vary initiation rate systematically for 3 different models (for Fig. 3A)
End of explanation
%%writefile simulation_run_7.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run7/'))
mrnafiles = ['../annotations/simulations/run7/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/runs678_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/runs678_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/runs678_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run7/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(9):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_7.py',
str(index)
])
Explanation: Run 7: Vary number of stall sites systematically for 3 different models (for Fig. 3B)
End of explanation
%%writefile simulation_run_8.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run8/'))
mrnafiles = ['../annotations/simulations/run8/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/runs678_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/runs678_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/runs678_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run8/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(238):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '10', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_8.py',
str(index)
])
Explanation: Run 8: Vary distance between stall sites systematically for 3 different models (for Fig. 3C)
End of explanation
%%writefile simulation_run_9.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
import numpy as np
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = ['../annotations/simulations/run4/yfp_cta18_initiationrate_0.3.csv']
# use experimental fits for stall strengths from run 4
terminationandStallStrengths = [
('--5prime-preterm-rate','../processeddata/simulations/run4_stallstrengthfits_5primepreterm.tsv'),
('--background-preterm-rate','../processeddata/simulations/run4_stallstrengthfits_selpreterm.tsv'),
('--selective-preterm-rate','../processeddata/simulations/run4_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
for typeOfTermination, stallstrengthfile in terminationandStallStrengths:
for terminationRate in [0] + list(10.0**np.arange(-2,1.01,0.05)):
currentindex += 1
if currentindex != jobindex:
continue
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.4g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run9/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(200):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '20', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_9.py',
str(index)
])
Explanation: Run 9 : Vary abortive termination rate systematically for 3 different models (for Fig. 7)
End of explanation
%%writefile simulation_run_11.py
#!/usr/bin/env python
#SBATCH --mem=8000
import subprocess as sp
import os
import sys
jobindex = int(sys.argv[1])
currentindex = -1
mrnafiles = list(filter(lambda x: x.startswith('yfp'), os.listdir('../annotations/simulations/run11/')))
mrnafiles = ['../annotations/simulations/run11/' + File for File in mrnafiles]
terminationandStallStrengths = [
('--5prime-preterm-rate',0,'../processeddata/simulations/run11_stallstrengthfits_trafficjam.tsv'),
('--5prime-preterm-rate',1,'../processeddata/simulations/run11_stallstrengthfits_5primepreterm.tsv'),
('--selective-preterm-rate',1,'../processeddata/simulations/run11_stallstrengthfits_selpreterm.tsv'),
]
for mrnafile in mrnafiles:
currentindex += 1
if currentindex != jobindex:
continue
for typeOfTermination, terminationRate, stallstrengthfile in terminationandStallStrengths:
cmd = ' '.join([
'./reporter_simulation',
'--trna-concn', '../annotations/simulations/leucine.starvation.average.trna.concentrations.tsv',
typeOfTermination,
'%0.2g'%terminationRate,
'--threshold-accommodation-rate', '22',
'--output-prefix','../rawdata/simulations/run11/',
'--stall-strength-file', stallstrengthfile,
'--input-genes', mrnafile
])
sp.check_output(cmd, shell=True)
import subprocess as sp
# loop submits each simulation to a different node of the cluster
for index in range(20):
sp.check_output([
'sbatch', # for SLURM cluster; this line can be commented out if running locally
'-t', '30', # for SLURM cluster; this line can be commented out if running locally
'-n', '1', # for SLURM cluster; this line can be commented out if running locally
'simulation_run_11.py',
str(index)
])
Explanation: Run 11: Predict YFP synthesis rate of CTA distance mutants based on fit of stall strengths to single mutant data (for Fig. 6)
End of explanation |
2,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Python Language Reference
CPython -> the reference Python implementation, written in C<br>
A Python program is read by a parser; the input to the parser is a stream of tokens generated by the lexical analyzer.<br>
<ol>
<li>Logical Lines -> The end of a logical line is represented by NEWLINE</li>
<li>Physical Lines -> A sequence of characters terminated by an end-of-line sequence</li>
<li>Comments -> They start with #</li>
</ol>
A comment in the first or second line of the form coding[=
Step1: Everything in Python is represented by objects or by relations among objects. Every object has a value, a type and an identity. The identity can be thought of as the address of the object in memory, which never changes. The 'is' operator compares the identity of objects. The type of an object is also unchangeable.<br>
List of types available in Python<ol>
<li>None -> Objects of type None have the value None</li>
<li>NotImplemented -> Similar to None, but it is returned when a method does not implement the operation for the given operands</li>
<li>Ellipsis -> A singleton value accessed using ... or Ellipsis</li>
<li>Numbers -> int, bool, float (Real), complex</li>
<li>Sequences -> We can use len() to find the number of items in a sequence, and sequences support slicing. Immutable sequences are strings, tuples and bytes. ord() converts a one-character string to an int, chr() converts an int to a character, str.encode() converts str to bytes and bytes.decode() converts bytes to str. Mutable sequences are lists and byte arrays</li>
<li>Set types -> They cannot be indexed by subscripts, only iterated, and hold unique items with no repetitions. These are sets and frozen sets</li>
<li>Mapping -> Dictionaries</li>
</ol>
User defined functions
<ul>
<li>\__doc\__ -> The description of the object</li>
<li>\__name\__ -> The function's name</li>
<li>\__qualname\__ -> The qualified name: the path from the module's global scope to the function or class</li>
<li>\__module\__ -> Name of the module the function was defined in</li>
<li>\__code\__ -> The compiled function body</li>
<li>\__globals\__ -> Reference to the dictionary of the function's global variables</li>
<li>\__dict\__ -> Namespace supporting arbitrary function attributes</li>
<li>\__closure\__ -> None or a tuple of cells containing bindings for the function's free variables</li>
</ul>
<br>
Instance methods
<ul>
<li>\__self\__ -> The class instance object</li>
<li>\__doc\__</li>
<li>\__name\__</li>
<li>\__module\__</li>
<li>\__bases\__ -> Tuple containing the base classes</li>
</ul>
<br>
Generator functions -> Functions that use yield. Calling one returns an iterator object which can be used to execute the body of the function.<br>
A function defined with async def is called a coroutine function; it returns a coroutine object and may contain await, async with and async for.<br>
An async def function which uses yield is called an asynchronous generator function, whose return object can be used in async for.<br>
<p>__Class Instances__
A class instance has a namespace implemented as a dictionary, which is the first place attribute references are searched. When an attribute is not found there, the search continues in the instance's class. If it is still not found, \__getattr\__() is called. Attribute assignments and deletions always update the instance's dictionary.</p>
Various internal types used internally by the interpreter
<ol>
<li>Code objects -> Represent byte-compiled executable Python code, or bytecode. They contain no references to mutable objects.</li>
<li>Frame objects -> Represent execution frames and occur in traceback objects. They support one method, frame.clear(), which clears all references to local variables.</li>
<li>Traceback objects -> Represent the stack trace of an exception. A traceback object is created when an exception occurs</li>
<li>Slice objects -> Represent slices for \__getitem\__() methods, or are created by slice()</li>
<li>Static method objects -> A wrapper around any other object. When a static method object is retrieved from a class or a class instance, what is actually returned is the wrapped object.</li>
<li>Class method objects -> A wrapper around another object that alters the way that object is retrieved from classes and class instances.</li>
</ol>
Special method names
<ol>
<li>object.\__new\__(cls[...]) -> Called to create a new instance of class cls</li>
<li>object.\__init\__(self[...]) -> Called after the instance has been created by \__new\__(), before control is returned to the caller</li>
<li>object.\__del\__(self) -> Called when the instance is about to be destroyed</li>
<li>object.\__repr\__(self) -> Called to compute the official string representation of an object.</li>
</ol>
By default, instances of classes have a dictionary for attribute storage. This space consumption can become large when a large number of instances of a class are created. The default can be overridden by using \__slots\__ in a class definition. It reserves space for the declared variables and prevents the automatic creation of \__dict\__ for each instance.<br>
If you want to dynamically declare new variables, add \__dict\__ to the sequence of strings in the \__slots\__ declaration.
__Metaclasses__<br>
The class creation process can be customized by passing the metaclass keyword argument in the class definition line or by inheriting from an existing class that included such ar argument. | Python Code:
def \
quicksort():
pass
Explanation: The Python Language Reference
CPython -> the reference Python implementation, written in C<br>
A Python program is read by a parser; the input to the parser is a stream of tokens generated by the lexical analyzer.<br>
<ol>
<li>Logical Lines -> The end of a logical line is represented by NEWLINE</li>
<li>Physical Lines -> A sequence of characters terminated by an end-of-line sequence</li>
<li>Comments -> They start with #</li>
</ol>
A comment in the first or second line of the form coding[=:]\s*([-\w.]+) is processed as an encoding declaration.<br>
Two or more physical lines can be joined into one logical line by using a backslash (\).<br>
Lines inside parentheses can be split without backslashes<br>
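A minimal illustration of the line-joining rules just listed (the variable names are only for illustration):
```python
# -*- coding: utf-8 -*-          # an encoding declaration must occupy the first or second line
total = 1 + 2 + \
        3                        # explicit line joining with a backslash
values = (4 + 5 +
          6)                     # implicit line joining inside parentheses
```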
End of explanation
class Meta(type):
pass
class MyClass(metaclass = Meta):
pass
class MySubclass(MyClass):
pass
Explanation: Everything in Python is represented by objects or by relations among objects. Every object has a value, a type and an identity. The identity can be thought of as the address of the object in memory, which never changes. The 'is' operator compares the identity of objects. The type of an object is also unchangeable.<br>
List of types available in Python<ol>
<li>None -> Objects of type None have the value None</li>
<li>NotImplemented -> Similar to None, but it is returned when a method does not implement the operation for the given operands</li>
<li>Ellipsis -> A singleton value accessed using ... or Ellipsis</li>
<li>Numbers -> int, bool, float (Real), complex</li>
<li>Sequences -> We can use len() to find the number of items in a sequence, and sequences support slicing. Immutable sequences are strings, tuples and bytes. ord() converts a one-character string to an int, chr() converts an int to a character, str.encode() converts str to bytes and bytes.decode() converts bytes to str. Mutable sequences are lists and byte arrays</li>
<li>Set types -> They cannot be indexed by subscripts, only iterated, and hold unique items with no repetitions. These are sets and frozen sets</li>
<li>Mapping -> Dictionaries</li>
</ol>
User defined functions
<ul>
<li>\__doc\__ -> The description of the object</li>
<li>\__name\__ -> The function's name</li>
<li>\__qualname\__ -> The qualified name: the path from the module's global scope to the function or class</li>
<li>\__module\__ -> Name of the module the function was defined in</li>
<li>\__code\__ -> The compiled function body</li>
<li>\__globals\__ -> Reference to the dictionary of the function's global variables</li>
<li>\__dict\__ -> Namespace supporting arbitrary function attributes</li>
<li>\__closure\__ -> None or a tuple of cells containing bindings for the function's free variables</li>
</ul>
<br>
Instance methods
<ul>
<li>\__self\__ -> The class instance object</li>
<li>\__doc\__</li>
<li>\__name\__</li>
<li>\__module\__</li>
<li>\__bases\__ -> Tuple containing the base classes</li>
</ul>
<br>
Generator functions -> Functions that use yield. Calling one returns an iterator object which can be used to execute the body of the function.<br>
A function defined with async def is called a coroutine function; it returns a coroutine object and may contain await, async with and async for.<br>
An async def function which uses yield is called an asynchronous generator function, whose return object can be used in async for.<br>
<p>__Class Instances__
A class instance has a namespace implemented as a dictionary, which is the first place attribute references are searched. When an attribute is not found there, the search continues in the instance's class. If it is still not found, \__getattr\__() is called. Attribute assignments and deletions always update the instance's dictionary.</p>
Various internal types used internally by the interpreter
<ol>
<li>Code objects -> Represent byte-compiled executable Python code, or bytecode. They contain no references to mutable objects.</li>
<li>Frame objects -> Represent execution frames and occur in traceback objects. They support one method, frame.clear(), which clears all references to local variables.</li>
<li>Traceback objects -> Represent the stack trace of an exception. A traceback object is created when an exception occurs</li>
<li>Slice objects -> Represent slices for \__getitem\__() methods, or are created by slice()</li>
<li>Static method objects -> A wrapper around any other object. When a static method object is retrieved from a class or a class instance, what is actually returned is the wrapped object.</li>
<li>Class method objects -> A wrapper around another object that alters the way that object is retrieved from classes and class instances.</li>
</ol>
Special method names
<ol>
<li>object.\__new\__(cls[...]) -> Called to create a new instance of class cls</li>
<li>object.\__init\__(self[...]) -> Called after the instance has been created by \__new\__(), before control is returned to the caller</li>
<li>object.\__del\__(self) -> Called when the instance is about to be destroyed</li>
<li>object.\__repr\__(self) -> Called to compute the official string representation of an object.</li>
</ol>
By default, instances of classes have a dictionary for attribute storage. This space consumption can become large when a large number of instances of a class are created. The default can be overridden by using \__slots\__ in a class definition. It reserves space for the declared variables and prevents the automatic creation of \__dict\__ for each instance.<br>
If you want to dynamically declare new variables, add \__dict\__ to the sequence of strings in the \__slots\__ declaration.
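A minimal sketch of the \__slots\__ behaviour described above (the class and attribute names are made up for illustration):
```python
class Point:
    __slots__ = ("x", "y")       # only these attribute names are allowed; no per-instance __dict__

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# p.z = 3 would raise AttributeError, because "z" is not declared in __slots__
```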
__Metaclasses__<br>
The class creation process can be customized by passing the metaclass keyword argument in the class definition line, or by inheriting from an existing class that included such an argument.
End of explanation |
2,137 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Using Sklearn RFE to Select Features
| Python Code::
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
rf = RandomForestRegressor(random_state=101)
rfe = RFE(rf, n_features_to_select=8)
rfe = rfe.fit(X_train, y_train)
predictions = rfe.predict(X_test)
#Print feature rankings
feature_rankings = pd.DataFrame({'feature_names':np.array(X_train.columns),'feature_ranking':rfe.ranking_})
print(feature_rankings)
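# A small follow-up using the fitted selector above: list only the columns RFE kept
selected_features = X_train.columns[rfe.support_]
print(selected_features)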
|
2,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluation of models for supercritical extraction
Supercritical extraction is increasingly used to remove organic compounds from various liquids or solid matrices. This is because supercritical fluids have significant advantages over other solvents: their characteristics lie between those of gases and those of liquids. By changing the temperature and pressure they can capture different compounds, which makes them very effective.
The mechanism of supercritical extraction is as follows
Step1: Example 2 works
Step2: Function
Reverchon model
Mathematical Modeling of Supercritical Extraction of Sage Oil
Step6: Future work
Modify the parameters to observe how they affect the behaviour of the model.
Work through an example of parameter optimization using the Reverchon model.
References
[1] E. Reverchon, Mathematical modelling of supercritical extraction of sage oil, AIChE J. 42 (1996) 1765–1771.
https | Python Code:
import numpy as np
from scipy import integrate, interpolate, optimize
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import matplotlib.pyplot as pp   # later cells in this notebook refer to pyplot as "pp"
from matplotlib.pylab import *
Explanation: Evaluation of models for supercritical extraction
Supercritical extraction is increasingly used to remove organic compounds from various liquids or solid matrices. This is because supercritical fluids have significant advantages over other solvents: their characteristics lie between those of gases and those of liquids. By changing the temperature and pressure they can capture different compounds, which makes them very effective.
The mechanism of supercritical extraction is the following:
- Transport of the fluid to the particle, first to its surface and then into the particle by diffusion
- Dissolution of the solute in the supercritical fluid
- Transport of the solvent from the interior to the surface of the particle
- Transport of the solvent and solutes from the particle surface to the bulk of the solvent
A - The Reverchon model:
To use this model, let us define the variables involved; the model nomenclature is given below:
The model:
It is based on the integration of the differential mass balances over the whole extraction, with the following assumptions:
- Plug flow exists inside the bed, as shown in the accompanying scheme:
- The axial dispersion in the bed is negligible
- The flow rate, temperature and pressure are constant
This gives the following equations:
- $uV\frac{\partial c}{\partial h}+eV\frac{\partial c}{\partial t}+(1-e)V\frac{\partial q}{\partial t} = 0$
- $(1-e)V\frac{\partial q}{\partial t}= -AK(q-q^*)$
The initial and boundary conditions are: c = 0 and q = q0 at t = 0, and c(0,t) = 0 at h = 0
The phase equilibrium is: $c = k\,q^*$
Since the fluid and solid phases are uniform at each stage, the model can be written as a set of 2n ordinary differential equations. The equations are the following:
- $\frac{W}{\rho}(c_n - c_{n-1}) + e\,\frac{V}{n}\frac{dc_n}{dt}+(1-e)\frac{V}{n}\frac{dq_n}{dt} = 0$
- $\frac{dq_n}{dt} = - \frac{1}{t_i}(q_n-q_n^*)$
- The initial conditions are: $c_n = 0$, $q_n = q_0$ at t = 0
ODE example
End of explanation
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
def vdp1(t, y):
return np.array([y[1], (1 - y[0]**2)*y[1] - y[0]])
t0, t1 = 0, 20 # start and end
t = np.linspace(t0, t1, 100) # the points of evaluation of solution
y0 = [2, 0] # initial value
y = np.zeros((len(t), len(y0))) # array for solution
y[0, :] = y0
r = integrate.ode(vdp1).set_integrator("dopri5") # choice of method
r.set_initial_value(y0, t0) # initial values
for i in range(1, t.size):
y[i, :] = r.integrate(t[i]) # get one more value, add it to the array
if not r.successful():
raise RuntimeError("Could not integrate")
plt.plot(t, y)
plt.show()
Explanation: Example 2 works: integrating the Van der Pol oscillator with scipy's ode interface (dopri5 method).
End of explanation
P = 9 #MPa
T = 323 # K
Q = 8.83 #g/min
e = 0.4
rho = 285 #kg/m3
miu = 2.31e-5 # Pa*s
dp = 0.75e-3 # m
Dl = 0.24e-5 #m2/s
De = 8.48e-12 # m2/s
Di = 6e-13
u = 0.455e-3 #m/s
kf = 1.91e-5 #m/s
de = 0.06 # m
W = 0.160 # kg
kp = 0.2
r = 0.31 #m
n = 10
V = 12
#C = kp * qE
C = 0.1
qE = C / kp
Cn = 0.05
Cm = 0.02
t = np.linspace(0,10, 1)
ti = (r ** 2) / (15 * Di)
def reverchon(x,t):
    # Differential equations of the Reverchon model
#dCdt = - (n/(e * V)) * (W * (Cn - Cm) / rho + (1 - e) * V * dqdt)
#dqdt = - (1 / ti) * (q - qE)
q = x[0]
C = x[1]
qE = C / kp
dqdt = - (1 / ti) * (q - qE)
dCdt = - (n/(e * V)) * (W * (C - Cm) / rho + (1 - e) * V * dqdt)
return [dqdt, dCdt]
reverchon([1, 2], 0)
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, qR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C solid–fluid interface [=] $kg/m^3$")
print(CR)
r = 0.31 #m
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
r = 0.231 #m
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
fig,axes=plt.subplots(2,2)
axes[0,0].plot(t,CR)
axes[1,0].plot(t,qR)
Explanation: Function
Reverchon model
Mathematical Modeling of Supercritical Extraction of Sage Oil
End of explanation
# Experimental data
x_data = np.linspace(0,9,10)
y_data = np.array([0.000,0.416,0.489,0.595,0.506,0.493,0.458,0.394,0.335,0.309])
def f(y, t, k):
    system of ordinary differential equations
return (-k[0]*y[0], k[0]*y[0]-k[1]*y[1], k[1]*y[1])
def my_ls_func(x,teta):
f2 = lambda y, t: f(y, t, teta)
    # evaluate the differential equation solution at each point
r = integrate.odeint(f2, y0, x)
return r[:,1]
def f_resid(p):
    # define the least-squares residual for each value of y
return y_data - my_ls_func(x_data,p)
# solve the optimization problem
guess = [0.2, 0.3]  # initial values for the parameters
y0 = [1,0,0]  # initial values for the ODE system
(c, kvg) = optimize.leastsq(f_resid, guess) #get params
print("parameter values are ",c)
# interpolate the ODE solution values using splines
xeval = np.linspace(min(x_data), max(x_data),30)
gls = interpolate.UnivariateSpline(xeval, my_ls_func(xeval,c), k=3, s=0)
xeval = np.linspace(min(x_data), max(x_data), 200)
# Plot the results
pp.plot(x_data, y_data,'.r',xeval,gls(xeval),'-b')
pp.xlabel('t [=] min',{"fontsize":16})
pp.ylabel("C",{"fontsize":16})
pp.legend(('Data','Model'),loc=0)
pp.show()
f_resid(guess)
# Experimental data
x_data = np.linspace(0,9,10)
y_data = np.array([0.000,0.416,0.489,0.595,0.506,0.493,0.458,0.394,0.335,0.309])
print(y_data)
# def f(y, t, k):
# system of ordinary differential equations
# return (-k[0]*y[0], k[0]*y[0]-k[1]*y[1], k[1]*y[1])
def reverchon(x,t,Di):
    # Differential equations of the Reverchon model
#dCdt = - (n/(e * V)) * (W * (Cn - Cm) / rho + (1 - e) * V * dqdt)
#dqdt = - (1 / ti) * (q - qE)
q = x[0]
C = x[1]
qE = C / kp
ti = (r**2) / (15 * Di)
dqdt = - (1 / ti) * (q - qE)
dCdt = - (n/(e * V)) * (W * (C - Cm) / rho + (1 - e) * V * dqdt)
return [dqdt, dCdt]
def my_ls_func(x,teta):
f2 = lambda y, t: reverchon(y, t, teta)
    # evaluate the differential equation solution at each point
rr = integrate.odeint(f2, y0, x)
print(f2)
return rr[:,1]
def f_resid(p):
    # define the least-squares residual for each value of y
    return y_data - my_ls_func(x_data, p)
# solve the optimization problem
guess = np.array([0.2])  # initial value for the parameter
y0 = [0,0]  # initial values for the ODE system
(c, kvg) = optimize.leastsq(f_resid, guess) #get params
print("parameter values are ",c)
# interpolate the ODE solution values using splines
xeval = np.linspace(min(x_data), max(x_data),30)
gls = interpolate.UnivariateSpline(xeval, my_ls_func(xeval,c), k=3, s=0)
xeval = np.linspace(min(x_data), max(x_data), 200)
# Plot the results
pp.plot(x_data, y_data,'.r',xeval,gls(xeval),'-b')
pp.xlabel('t [=] min',{"fontsize":16})
pp.ylabel("C",{"fontsize":16})
pp.legend(('Data','Model'),loc=0)
pp.show()
def my_ls_func(x,teta):
f2 = lambda y, t: reverchon(y, t, teta)
    # evaluate the differential equation solution at each point
r = integrate.odeint(f2, y0, x)
print(f2)
return r[:,1]
my_ls_func(y0,guess)
f_resid(guess)
Explanation: Future work
Modify the parameters to observe how they affect the behaviour of the model.
Work through an example of parameter optimization using the Reverchon model.
References
[1] E. Reverchon, Mathematical modelling of supercritical extraction of sage oil, AIChE J. 42 (1996) 1765–1771.
https://onlinelibrary.wiley.com/doi/pdf/10.1002/aic.690420627
[2] Amit Rai, Kumargaurao D.Punase, Bikash Mohanty, Ravindra Bhargava, Evaluation of models for supercritical fluid extraction, International Journal of Heat and Mass Transfer Volume 72, May 2014, Pages 274-287. https://www.sciencedirect.com/science/article/pii/S0017931014000398
Parameter fitting with ODEs: the Reverchon model
Explanation of the procedure:
- Load the experimental data
- Define the ordinary differential equations of the system with its parameters
- Evaluate the differential equation at each point; a separate function is needed to integrate the equation
- Then define a least-squares function for each value of y: least squares is a numerical-analysis technique, framed within mathematical optimization, that tries to find the continuous function relating the independent and dependent variables
- To solve it, initial values are needed for the parameters and for the system of ODEs in order to obtain the parameters of the function. Afterwards the ODE values are interpolated using splines (a spline is a piecewise polynomial function); in Python, splines are the usual method for this kind of interpolation problem.
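A minimal, self-contained sketch of this fitting procedure on a made-up first-order decay model (the data and rate constant below are illustrative placeholders, not the Reverchon system):
```python
import numpy as np
from scipy import integrate, optimize

# made-up measurements of a decaying concentration
t_data = np.linspace(0, 9, 10)
c_data = np.array([1.00, 0.78, 0.61, 0.47, 0.37, 0.29, 0.22, 0.17, 0.14, 0.11])

def model(c, t, k):
    return -k * c                       # simple first-order decay ODE

def simulate(t, k):
    return integrate.odeint(model, c_data[0], t, args=(k,))[:, 0]

def residuals(params):
    return c_data - simulate(t_data, params[0])

(k_fit,), _ = optimize.leastsq(residuals, [0.1])
print("fitted rate constant:", k_fit)
```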
End of explanation |
2,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Iris Data
Step2: Create Random Forest Classifier
Step3: Train Random Forest Classifier
Step4: Predict Previously Unseen Observation | Python Code:
# Load libraries
from sklearn.ensemble import RandomForestClassifier
from sklearn import datasets
Explanation: Title: Random Forest Classifier
Slug: random_forest_classifier
Summary: Training a random forest classifier in scikit-learn.
Date: 2017-09-21 12:00
Category: Machine Learning
Tags: Trees And Forests
Authors: Chris Albon
Preliminaries
End of explanation
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
Explanation: Load Iris Data
End of explanation
# Create random forest classifier object that uses entropy
clf = RandomForestClassifier(criterion='entropy', random_state=0, n_jobs=-1)
Explanation: Create Random Forest Classifier
End of explanation
# Train model
model = clf.fit(X, y)
Explanation: Train Random Forest Classifier
End of explanation
# Make new observation
observation = [[ 5, 4, 3, 2]]
# Predict observation's class
model.predict(observation)
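# As a quick follow-up, the forest can also report class probabilities for the same observation
model.predict_proba(observation)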
Explanation: Predict Previously Unseen Observation
End of explanation |
2,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Multidimensional Coordinates
Author
Step1: As an example, consider this dataset from the xarray-data repository.
Step2: In this example, the logical coordinates are x and y, while the physical coordinates are xc and yc, which represent the latitude and longitude of the data.
Step3: Plotting
Let's examine these coordinate variables by plotting them.
Step4: Note that the variables xc (longitude) and yc (latitude) are two-dimensional scalar fields.
If we try to plot the data variable Tair, by default we get the logical coordinates.
Step5: In order to visualize the data on a conventional latitude-longitude grid, we can take advantage of xarray's ability to apply cartopy map projections.
Step6: Multidimensional Groupby
The above example allowed us to visualize the data on a regular latitude-longitude grid. But what if we want to do a calculation that involves grouping over one of these physical coordinates (rather than the logical coordinates), for example, calculating the mean temperature at each latitude. This can be achieved using xarray's groupby function, which accepts multidimensional variables. By default, groupby will use every unique value in the variable, which is probably not what we want. Instead, we can use the groupby_bins function to specify the output coordinates of the group. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
import cartopy.crs as ccrs
from matplotlib import pyplot as plt
Explanation: Working with Multidimensional Coordinates
Author: Ryan Abernathey
Many datasets have physical coordinates which differ from their logical coordinates. Xarray provides several ways to plot and analyze such datasets.
End of explanation
ds = xr.tutorial.open_dataset('rasm').load()
ds
Explanation: As an example, consider this dataset from the xarray-data repository.
End of explanation
print(ds.xc.attrs)
print(ds.yc.attrs)
Explanation: In this example, the logical coordinates are x and y, while the physical coordinates are xc and yc, which represent the latitude and longitude of the data.
End of explanation
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(14,4))
ds.xc.plot(ax=ax1)
ds.yc.plot(ax=ax2)
Explanation: Plotting
Let's examine these coordinate variables by plotting them.
End of explanation
ds.Tair[0].plot()
Explanation: Note that the variables xc (longitude) and yc (latitude) are two-dimensional scalar fields.
If we try to plot the data variable Tair, by default we get the logical coordinates.
End of explanation
plt.figure(figsize=(14,6))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
ds.Tair[0].plot.pcolormesh(ax=ax, transform=ccrs.PlateCarree(), x='xc', y='yc', add_colorbar=False)
ax.coastlines()
ax.set_ylim([0,90]);
Explanation: In order to visualize the data on a conventional latitude-longitude grid, we can take advantage of xarray's ability to apply cartopy map projections.
End of explanation
# define two-degree wide latitude bins
lat_bins = np.arange(0,91,2)
# define a label for each bin corresponding to the central latitude
lat_center = np.arange(1,90,2)
# group according to those bins and take the mean
Tair_lat_mean = ds.Tair.groupby_bins('yc', lat_bins, labels=lat_center).mean(dim=xr.ALL_DIMS)
# plot the result
Tair_lat_mean.plot()
Explanation: Multidimensional Groupby
The above example allowed us to visualize the data on a regular latitude-longitude grid. But what if we want to do a calculation that involves grouping over one of these physical coordinates (rather than the logical coordinates), for example, calculating the mean temperature at each latitude. This can be achieved using xarray's groupby function, which accepts multidimensional variables. By default, groupby will use every unique value in the variable, which is probably not what we want. Instead, we can use the groupby_bins function to specify the output coordinates of the group.
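As a small follow-up, the coordinate produced by groupby_bins is named after the binned variable with a _bins suffix, and it can be renamed with .rename() if you prefer a shorter name:
```python
print(Tair_lat_mean.dims)   # the binned coordinate gets a `_bins` suffix, e.g. ('yc_bins',) when binning yc
```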
End of explanation |
2,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Network Traffic Forecasting (using time series data)
In telco, accurately forecasting KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks ( 2G/3G/4G/5G/wired) can help predict network failures, allocate resource, or save energy.
In this notebook, we demonstrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demostrate how to do multivariate multistep forecasting using Project Chronos.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Helper functions
This section defines some helper functions to be used in the following procedures. You can refer to it later when they're used.
Step2: Step 0
Step3: Visualize the target KPIs
Step4: Step 1
Step5: Initialize train, valid and test tsdataset from raw pandas dataframe.
Step6: Preprocess the datasets. Here we perform
Step7: Convert TSDataset to numpy.
Step8: Step 2
Step9: You can use this method to print the parameter list.
Step10: After training is finished. You can use the forecaster to do prediction and evaluation.
Step11: Since we have used standard scaler to scale the input data (including the target values), we need to inverse the scaling on the predicted values too.
Step12: Calculate mean square error and the symetric mean absolute percentage error.
Step13: You may save & restore the forecaster.
Step14: If you only want to save the pytorch model
Step15: Visualization
Plot actual and prediction values for AvgRate KPI
Step16: Plot actual and prediction values for total bytes KPI | Python Code:
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
%matplotlib inline
def plot_predict_actual_values(date, y_pred, y_test, ylabel):
plot the predicted values and actual values (for the test data)
fig, axs = plt.subplots(figsize=(12,5))
axs.plot(date, y_pred, color='red', label='predicted values')
axs.plot(date, y_test, color='blue', label='actual values')
axs.set_title('the predicted values and actual values (for the test data)')
plt.xlabel('test datetime')
plt.ylabel(ylabel)
plt.legend(loc='upper left')
plt.show()
Explanation: Network Traffic Forecasting (using time series data)
In telco, accurately forecasting KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks (2G/3G/4G/5G/wired) can help predict network failures, allocate resources, or save energy.
In this notebook, we demonstrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demonstrate how to do multivariate multistep forecasting using Project Chronos.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Helper functions
This section defines some helper functions to be used in the following procedures. You can refer to it later when they're used.
End of explanation
from bigdl.chronos.data.utils.public_dataset import PublicDataset
df = PublicDataset(name='network_traffic',
path='~/.chronos/dataset',
redownload=False).get_public_data().preprocess_network_traffic().df
df.head()
Explanation: Step 0: Prepare dataset
Chronos provides built-in dataset APIs to easily download and preprocess public datasets. You can find the API guide here.
With the APIs below, we first download the network traffic data and then preprocess the downloaded dataset. The pre-processing mainly contains 2 parts:
Convert "StartTime" values from string to Pandas Timestamp
Unify the measurement scale for "AvgRate" values - some entries use Mbps, some use Gbps
End of explanation
ax = df.plot(y='AvgRate', figsize=(16,6), title="AvgRate of network traffic data")
ax = df.plot(y='total', figsize=(16,6), title='total bytes of network traffic data')
Explanation: Visualize the target KPIs: "AvgRate" and "total"
End of explanation
from bigdl.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
Explanation: Step 1: Data transformation and feature engineering using Chronos TSDataset
TSDataset is our abstraction of a time series dataset for data transformation and feature engineering. Here we use it to preprocess the data.
End of explanation
tsdata_train, _, tsdata_test = TSDataset.from_pandas(df, dt_col="StartTime", target_col=["AvgRate", "total"], with_split=True, test_ratio=0.1)
Explanation: Initialize train, valid and test tsdataset from raw pandas dataframe.
End of explanation
look_back = 84
horizon = 12
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_test]:
tsdata.gen_dt_feature(features=["HOUR", "WEEKDAY"], one_hot_features=["HOUR", "WEEKDAY"])\
.impute(mode="last")\
.scale(standard_scaler, fit=(tsdata is tsdata_train))\
.roll(lookback=look_back, horizon=horizon)
Explanation: Preprocess the datasets. Here we perform:
gen_dt_feature: generate feature from datetime (e.g. month, day...)
impute: fill the missing values
scale: scale each feature to standard distribution.
roll: sample the data with sliding window.
For the forecasting task, we will look back at 1 week's historical data (84 records with a sample frequency of 2h) and predict the values of the next 1 day (12 records).
We perform the same transformation processes on train and test set.
End of explanation
x_train, y_train = tsdata_train.to_numpy()
x_test, y_test = tsdata_test.to_numpy()
#x.shape = (num of sample, lookback, num of input feature)
#y.shape = (num of sample, horizon, num of output feature)
x_train.shape, y_train.shape, x_test.shape, y_test.shape
Explanation: Convert TSDataset to numpy.
End of explanation
from bigdl.chronos.forecaster.tcn_forecaster import TCNForecaster
forecaster = TCNForecaster(past_seq_len = look_back,
future_seq_len = horizon,
input_feature_num = x_train.shape[-1],
output_feature_num = 2, # "AvgRate" and "total"
num_channels = [30] * 7,
repo_initialization = False,
kernel_size = 3,
dropout = 0.1,
lr = 0.001,
seed = 0)
forecaster.num_processes = 1
Explanation: Step 2: Time series forecasting using Chronos Forecaster
We demonstrate how to use chronos TCNForecaster for multi-variate and multi-step forecasting. For more details, you can refer to TCNForecaster document here.
First, we initialize a forecaster.
* num_channels: The filter numbers of the convolutional layers. It can be a list.
* kernel_size: Convolutional layer filter height.
End of explanation
forecaster.data_config, forecaster.model_config
%%time
forecaster.fit((x_train, y_train), epochs=20, batch_size=64)
Explanation: You can use this method to print the parameter list.
End of explanation
# make prediction
y_pred = forecaster.predict(x_test)
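# Quick sanity check: the prediction shape should follow (num of sample, horizon, num of output feature)
y_pred.shape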
Explanation: After training is finished, you can use the forecaster to do prediction and evaluation.
End of explanation
y_pred_unscale = tsdata_test.unscale_numpy(y_pred)
y_test_unscale = tsdata_test.unscale_numpy(y_test)
Explanation: Since we have used standard scaler to scale the input data (including the target values), we need to inverse the scaling on the predicted values too.
End of explanation
# evaluate with mse, smape
from bigdl.orca.automl.metrics import Evaluator
avgrate_mse = Evaluator.evaluate("mse", y_test_unscale[:, :, 0], y_pred_unscale[:, :, 0], multioutput='uniform_average')
avgrate_smape = Evaluator.evaluate("smape", y_test_unscale[:, :, 0], y_pred_unscale[:, :, 0], multioutput='uniform_average')
total_mse = Evaluator.evaluate("mse", y_test_unscale[:, :, 1], y_pred_unscale[:, :, 1], multioutput='uniform_average')
total_smape = Evaluator.evaluate("smape", y_test_unscale[:, :, 1], y_pred_unscale[:, :, 1], multioutput='uniform_average')
print(f"Evaluation result for AvgRate: mean squared error is {'%.2f' % avgrate_mse}, sMAPE is {'%.2f' % avgrate_smape}")
print(f"Evaluation result for total: mean squared error is {'%.2f' % total_mse}, sMAPE is {'%.2f' % total_smape}")
Explanation: Calculate the mean squared error and the symmetric mean absolute percentage error.
End of explanation
forecaster.save("network_traffic.fxt")
forecaster.load("network_traffic.fxt")
Explanation: You may save & restore the forecaster.
End of explanation
model = forecaster.get_model()
import torch
torch.save(model, "tcn.pt")
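# Reloading the plain PyTorch model saved above is standard torch usage
restored_model = torch.load("tcn.pt")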
Explanation: If you only want to save the pytorch model
End of explanation
test_date=df[-y_pred_unscale.shape[0]:].index
# You can choose which forecast step to plot by setting `step` yourself.
step = 0 # the first step
target_name = "AvgRate"
target_index = 0
plot_predict_actual_values(date=test_date, y_pred=y_pred_unscale[:, step, target_index], y_test=y_test_unscale[:, step, target_index], ylabel=target_name)
Explanation: Visualization
Plot actual and prediction values for AvgRate KPI
End of explanation
target_name = "total"
target_index = 1
plot_predict_actual_values(date=test_date, y_pred=y_pred_unscale[:, step, target_index], y_test=y_test_unscale[:, step, target_index], ylabel=target_name)
Explanation: Plot actual and prediction values for total bytes KPI
End of explanation |
2,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Harvesting WMS into CKAN
This notebook illustrates harvesting of a WMS endpoint into a CKAN instance.
Context
The harvested WMS endpoint belongs to Landgate's Spatial Land Information Program (SLIP). The layers within are authored by partner agencies or Landgate. There are one or several different web service endpoints per WMS layer.
Organisations
From a spreadsheet of agency references, names, and further information, CKAN organisations are initially created and subsequently used as owners of the respective harvested WMS layers.
Topics
The WMS layers are organised by topics, which will be created both as CKAN groups and keywords. Harvested datasets will be allocated to releveant CKAN groups.
Layer names
The WMS layer names contain the layer ID, consisting of agancy slug and layer reference, and the publishing date, and will be split up during harvesting.
Additional resources
Additional web service end points, as well as a list of published PDFs with further information, are added as extra resources to the CKAN datasets from harvested WMS layers.
CKAN credentials
Sensitive information and related configuration, such as CKAN URLs and credentials, are stored in a separate file.
To use this workbook on your own CKAN instance, write the following contents into a file secret.py in the same directory as this workbook
Step1: OGC W*S endpoints
Step2: Additional Lookups
Step3: Create Organisations and Groups
The next step will create or update CKAN organisations from organisations.csv, and CKAN groups from WMS topics.
Step4: Prepare data
The following step will prepare a dictionary of dataset metadata, ready to be inserted into CKAN.
It parses the WMS endpoint and looks up dictionaries organisations, groups, and pdf_dict.
This step runs very quickly, as it only handles dictionaries of WMS layers, organisations and groups (both
Step5: Delete old datasets
Note
Step6: Update datasets in CKAN
First pass
Step7: Second pass | Python Code:
import ckanapi
from harvest_helpers import *
from secret import CKAN, SOURCES
## enable one of:
#ckan = ckanapi.RemoteCKAN(CKAN["ct"]["url"], apikey=CKAN["ct"]["key"])
#ckan = ckanapi.RemoteCKAN(CKAN["ca"]["url"], apikey=CKAN["ca"]["key"])
ckan = ckanapi.RemoteCKAN(CKAN["cb"]["url"], apikey=CKAN["cb"]["key"])
print("Using CKAN {0}".format(ckan.address))
Explanation: Harvesting WMS into CKAN
This notebook illustrates harvesting of a WMS endpoint into a CKAN instance.
Context
The harvested WMS endpoint belongs to Landgate's Spatial Land Information Program (SLIP). The layers within are authored by partner agencies or Landgate. There are one or several different web service endpoints per WMS layer.
Organisations
From a spreadsheet of agency references, names, and further information, CKAN organisations are initially created and subsequently used as owners of the respective harvested WMS layers.
Topics
The WMS layers are organised by topics, which will be created both as CKAN groups and keywords. Harvested datasets will be allocated to relevant CKAN groups.
Layer names
The WMS layer names contain the layer ID, consisting of agency slug and layer reference, and the publishing date, and will be split up during harvesting.
Additional resources
Additional web service end points, as well as a list of published PDFs with further information, are added as extra resources to the CKAN datasets from harvested WMS layers.
CKAN credentials
Sensitive information and related configuration, such as CKAN URLs and credentials, are stored in a separate file.
To use this workbook on your own CKAN instance, write the following contents into a file secret.py in the same directory as this workbook:
```
CKAN = {
"ca":{
"url": "http://catalogue.alpha.data.wa.gov.au/",
"key": "your-api-key"
},
"cb":{
"url": "http://catalogue.beta.data.wa.gov.au/",
"key": "your-api-key"
}
}
SOURCES = {
"NAME": {
"proxy": "proxy_url",
"url": "https://www2.landgate.wa.gov.au/ows/wmspublic"
},
...
}
ARCGIS = {
"SLIPFUTURE" : {
"url": "http://services.slip.wa.gov.au/arcgis/rest/services",
"folders": ["QC", ...]
},
...
}
```
Insert your catalogue names, urls, and importantly, your write-permitted CKAN API keys.
Next we'll import the whole dictionary `CKAN`.
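Purely as an illustration of the kind of ckanapi calls involved in such a harvest (all names, titles and ids here are placeholders, not real SLIP layers):
```python
# Sketch: create a dataset owned by an organisation and attach a WMS endpoint as a resource
pkg = ckan.action.package_create(
    name="example-wms-layer",            # placeholder dataset name
    title="Example WMS layer",
    owner_org="example-agency",          # placeholder organisation name
    notes="Harvested from a WMS GetCapabilities document.")
ckan.action.resource_create(
    package_id=pkg["id"],
    name="WMS endpoint",
    url="https://www2.landgate.wa.gov.au/ows/wmspublic",
    format="WMS")
```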
End of explanation
wmsP = WebMapService(SOURCES["wmspublic"]["proxy"])
wmsP_url = SOURCES["wmspublic"]["url"]
wmsCM = WebMapService(SOURCES["wmsCsMosaic"]["proxy"])
wmsCM_url = SOURCES["wmsCsMosaic"]["url"]
wmsCC = WebMapService(SOURCES["wmsCsCadastre"]["proxy"])
wmsCC_url = SOURCES["wmsCsCadastre"]["url"]
wfsP = WebFeatureService(SOURCES["wfspublic_4326"]["proxy"])
wfsP_url = SOURCES["wfspublic_4326"]["url"]
wfsCA = WebFeatureService(SOURCES["wfsCsAdmin_4283"]["proxy"])
wfsCA_url = SOURCES["wfsCsAdmin_4283"]["url"]
wfsCC = WebFeatureService(SOURCES["wfsCsCadastre_4283"]["proxy"])
wfsCC_url = SOURCES["wfsCsCadastre_4283"]["url"]
#wfsCT = WebFeatureService(SOURCES["wfsCsTopo_4283"]["proxy"])
#wfsCT_url = SOURCES["wfsCsTopo_4283"]["url"]
Explanation: OGC W*S endpoints
End of explanation
pdfs = get_pdf_dict("data-dictionaries.csv")
org_dict = get_org_dict("organisations.csv")
group_dict = get_group_dict(wmsP)
Explanation: Additional Lookups
End of explanation
orgs = upsert_orgs(org_dict, ckan, debug=False)
groups = upsert_groups(group_dict, ckan, debug=False)
Explanation: Create Organisations and Groups
The next step will create or update CKAN organisations from organisations.csv, and CKAN groups from WMS topics.
End of explanation
l_wmsP = get_layer_dict(wmsP, wmsP_url, ckan, orgs, groups, pdfs, res_format="WMS", debug=False)
l_wmsCC = get_layer_dict(wmsCC, wmsCC_url, ckan, orgs, groups, pdfs, res_format="WMS", debug=False)
l_wmsCM = get_layer_dict(wmsCM, wmsCM_url, ckan, orgs, groups, pdfs, res_format="WMS", debug=False)
l_wfsP = get_layer_dict(wfsP, wfsP_url, ckan, orgs, groups, pdfs, res_format="WFS", debug=False)
l_wfsCA = get_layer_dict(wfsCA, wfsCA_url, ckan, orgs, groups, pdfs, res_format="WFS", debug=False)
l_wfsCC = get_layer_dict(wfsCC, wfsCC_url, ckan, orgs, groups, pdfs, res_format="WFS", debug=False)
Explanation: Prepare data
The following step will prepare a dictionary of dataset metadata, ready to be inserted into CKAN.
It parses the WMS endpoint and looks up dictionaries organisations, groups, and pdf_dict.
This step runs very quickly, as it only handles dictionaries of WMS layers, organisations and groups (both: name and id) and PDFs (name, id, url). There are no API calls to either CKAN or the WMS involved.
End of explanation
# Delete all datasets with old SLIP layer id name slug
kill_list = [n for n in ckan.action.package_list() if re.match(r"(.)*-[0-9][0-9][0-9]$", n)]
#killed = [ckan.action.package_delete(id=n) for n in kill_list]
print("Killed {0} obsolete datasets".format(len(kill_list)))
Explanation: Delete old datasets
Note: With great power comes great responsibility. Execute the next chunk with care and on your own risk.
End of explanation
p_wmsP = upsert_datasets(l_wmsP, ckan, overwrite_metadata=True, drop_existing_resources=True)
print("{0} datasets created or updated from {1} Public WMS layers".format(len(p_wmsP), len(wmsP.contents)))
Explanation: Update datasets in CKAN
First pass: add public WMS layer, overwrite metadata if dataset exists and drop any existing resources.
End of explanation
p_wfs = upsert_datasets(l_wfsP, ckan, overwrite_metadata=False, drop_existing_resources=False)
print("{0} datasets created or updated from {1} public WFS layers".format(len(p_wfs), len(wfsP.contents)))
p_wmsCC = upsert_datasets(l_wmsCC, ckan, overwrite_metadata=False, drop_existing_resources=False, debug=False)
print("{0} datasets created or updated from {1} Cadastre WMS layers".format(len(p_wmsCC), len(wmsCC.contents)))
p_wfsCC = upsert_datasets(l_wfsCC, ckan, overwrite_metadata=False, drop_existing_resources=False)
print("{0} datasets created or updated from {1} Cadastre WFS layers".format(len(p_wfsCC), len(wfsCC.contents)))
p_wfsCA = upsert_datasets(l_wfsCA, ckan, overwrite_metadata=False, drop_existing_resources=False)
print("{0} datasets created or updated from {1} Cadastre Admin WFS layers".format(len(p_wfsCA), len(wfsCA.contents)))
Explanation: Second pass: add public WFS, but retain metadata and resources of existing datasets. Repeat this mode for remaining sources.
End of explanation |
2,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
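As a rough orientation (not part of the original notebook), here is a tiny runnable sketch of the tensor shapes the network should produce at each stage; the example numbers match the hyperparameters chosen further below, and the _ex names are illustration variables only.
# Illustrative shape walk-through of the architecture described above
batch_size_ex, seq_len_ex, embed_size_ex, lstm_size_ex = 500, 200, 300, 256
print('inputs :', (batch_size_ex, seq_len_ex))                   # integer word IDs
print('embed  :', (batch_size_ex, seq_len_ex, embed_size_ex))    # after the embedding lookup
print('lstm   :', (batch_size_ex, seq_len_ex, lstm_size_ex))     # LSTM outputs at every time step
print('output :', (batch_size_ex, 1))                            # sigmoid on the last time step only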
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(vocab, 1)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for review in reviews:
reviews_ints.append([vocab_to_int[word] for word in review.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
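As an optional sanity check (added here, not required by the exercise), you can build the reverse mapping and decode part of one review to confirm the encoding round-trips; int_to_vocab is a name introduced for illustration.
# Optional sanity check: vocabulary size and a decoded snippet of the first review
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
print(len(vocab_to_int))
print(' '.join(int_to_vocab[i] for i in reviews_ints[0][:10]))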
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split('\n')
# Skip the empty string left by the trailing newline so the label count matches the reviews
labels = np.array([1 if label == 'positive' else 0 for label in labels if label])
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
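A quick optional check (added here as a sketch) that the conversion produced only 0s and 1s and that the two classes are roughly balanced:
# Optional: count how many of each label we ended up with
unique_labels, label_counts = np.unique(labels, return_counts=True)
print(dict(zip(unique_labels, label_counts)))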
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
print(review_lens[10])
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we'll truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)  # one row per (non-empty) review
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
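If the left-padding line above feels terse, here is the same idea on a tiny self-contained example; toy_reviews and the other toy_* names are hypothetical data made up purely for illustration.
# Toy illustration of left-padding short reviews and truncating long ones
toy_reviews = [[117, 18, 128], [1, 2, 3, 4, 5, 6, 7]]
toy_seq_len = 5
toy_features = np.zeros((len(toy_reviews), toy_seq_len), dtype=int)
for i, row in enumerate(toy_reviews):
    toy_features[i, -len(row):] = np.array(row)[:toy_seq_len]
print(toy_features)   # first row left-padded with zeros, second row truncated to 5 entries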
features[:10,:100]
Explanation: If you built features correctly, it should look like the cell output below.
End of explanation
split_frac = 0.8
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our input words here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
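Conceptually, the embedding lookup is just row indexing into the embedding matrix. Here is a small NumPy analogue (illustration only, separate from the TensorFlow graph above; the toy_* names are made up here):
# NumPy analogue of tf.nn.embedding_lookup: each word ID selects one row of the matrix
toy_embedding = np.random.uniform(-1, 1, size=(10, 4))   # 10 "words", 4-dimensional vectors
toy_ids = np.array([3, 0, 7])                            # a tiny "review" of word IDs
print(toy_embedding[toy_ids].shape)                      # (3, 4): one vector per word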
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
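As a quick usage example (added here as a sketch), you can pull a single batch out of the generator to confirm the shapes before training; this assumes the train_x, train_y, and batch_size defined above.
# Peek at one training batch to confirm its shape
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)   # should be (500, 200) and (500,) with the settings above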
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
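Since the note above asks you to make sure the checkpoints directory exists before training, one simple way to do that from the notebook (a small helper added here, assuming Python 3) is:
# Create the checkpoints directory if it does not exist yet
import os
os.makedirs('checkpoints', exist_ok=True)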
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
2,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1
Step1: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
Step2: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
Step3: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order)
Step4: Problem set #2
Step5: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this
Step6: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output
Step7: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output
Step9: Problem set #3
Step10: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above
Step11: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output
Step12: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page | Python Code:
!pip3 install bs4
from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
Explanation: Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1: Basic scraping
I've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.
End of explanation
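As a quick optional check (not part of the assignment), you can peek at the parsed page before answering the questions; the exact output depends on the HTML served at the URL above.
# Optional: confirm the page was fetched and parsed
print(html_str[:200])                  # first few hundred bytes of the raw HTML
print(len(document.find_all('tr')))    # number of table rows Beautiful Soup found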
h3_tags = document.find_all('h3')
print("There is", len(h3_tags), "“h3” tags in widgets2016.html.")
Explanation: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
End of explanation
tel = document.find('a', {'class': 'tel'})
print("The telephone number is", tel.string)
Explanation: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
End of explanation
widget_names = document.find_all('td', {'class': 'wname'})
for name in widget_names:
print(name.string)
Explanation: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
End of explanation
widgets = []
# your code here
widget_infos = document.find_all('tr', {'class': 'winfo'})
for info in widget_infos:
partno = info.find('td', {'class': 'partno'})
price = info.find('td', {'class': 'price'})
quantity = info.find('td', {'class': 'quantity'})
wname = info.find('td', {'class': 'wname'})
widgets.append({'partno': partno.string, 'price': price.string, 'quantity': quantity.string, 'wname': wname.string})
# end your code
widgets
Explanation: Problem set #2: Widget dictionaries
For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
[{'partno': 'C1-9476',
'price': '$2.70',
'quantity': u'512',
'wname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': '$9.36',
'quantity': '967',
'wname': u'Widget For Furtiveness'},
...several items omitted...
{'partno': '5B-941/F',
'price': '$13.26',
'quantity': '919',
'wname': 'Widget For Cinema'}]
And this expression:
widgets[5]['partno']
... should evaluate to:
LH-74/O
End of explanation
widgets = []
# your code here
widget_infos = document.find_all('tr', {'class': 'winfo'})
for info in widget_infos:
partno = info.find('td', {'class': 'partno'})
price = info.find('td', {'class': 'price'})
quantity = info.find('td', {'class': 'quantity'})
wname = info.find('td', {'class': 'wname'})
widgets.append({'partno': partno.string, 'price': float(price.string[1:]), 'quantity': int(quantity.string), 'wname': wname.string})
# end your code
widgets
Explanation: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:
[{'partno': 'C1-9476',
'price': 2.7,
'quantity': 512,
'widgetname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': 9.36,
'quantity': 967,
'widgetname': 'Widget For Furtiveness'},
... some items omitted ...
{'partno': '5B-941/F',
'price': 13.26,
'quantity': 919,
'widgetname': 'Widget For Cinema'}]
(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)
End of explanation
total_nb_widgets = 0
for widget in widgets:
total_nb_widgets += widget['quantity']
print(total_nb_widgets)
Explanation: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output: 7928
End of explanation
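The same total can also be written as a one-line generator expression; this is just an equivalent alternative to the loop above, shown for comparison.
# Equivalent one-liner for the total widget count
print(sum(widget['quantity'] for widget in widgets))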
for widget in widgets:
if widget['price'] > 9.30:
print(widget['wname'])
Explanation: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output:
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
End of explanation
example_html =
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
Explanation: Problem set #3: Sibling rivalries
In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:
Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):
End of explanation
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
Explanation: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
End of explanation
hallowed_header = document.find('h3', text='Hallowed widgets')
sibling_table = hallowed_header.find_next_sibling()
for part in sibling_table.find_all('td', {'class': 'partno'}):
print(part.string)
Explanation: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output:
MZ-556/B
QV-730
T1-9731
5B-941/F
End of explanation
category_counts = {}
# your code here
categories = document.find_all('h3')
for category in categories:
table = category.find_next_sibling('table')
widgets = table.select('td.wname')
category_counts[category.string] = len(widgets)
# end your code
category_counts
Explanation: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:
{'Forensic Widgets': 3,
'Hallowed widgets': 4,
'Mood widgets': 2,
'Wondrous widgets': 3}
End of explanation |
2,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandana demo
Sam Maurer, July 2020
This notebook demonstrates the main features of the Pandana library, a Python package for network analysis that uses contraction hierarchies to calculate super-fast travel accessibility metrics and shortest paths.
See full documentation here
Step1: Suppress scientific notation in the output.
Step2: <a id='section1'></a>
1. Loading data
Load street networks directly from Open Street Map
This requires installing a Pandana extension called OSMnet.
- conda install osmnet or pip install osmnet
You can use http
Step3: pandana.loaders.osm.pdna_network_from_bbox()
Step4: What does the network look like?
Edges and nodes are visible as DataFrames.
Step5: Saving and reloading networks
You can't directly save a Pandana network object, but you can easily recreate it from the nodes and edges.
- pandana.Network()
Step6: <a id='section2'></a>
2. Shortest paths
This functionality was added in Pandana v0.5.
Load some restaurant locations
Here we'll load the locations of restaurants listed on Open Street Map (using the same OSMnet extension as above), and then calculate some shortest paths between them.
- pandana.loaders.osm.node_query()
Step7: Choose two at random
Step8: Calculate the shortest route between them
First, identify the nearest node to each restaurant.
- network.get_node_ids()
Step9: Then get the routing between the nodes.
- network.shortest_path()
- network.shortest_path_length()
- network.shortest_path_lengths()
Step10: This network's distance units are meters.
Calculate many shortest paths at once
Pandana can generate several million shortest paths in less than a minute.
Step11: Now we have the distance from each restaurant to each other restaurant.
Step12: <a id='section3'></a>
3. Proximity
Find the closest restaurants to each node
To do a network-wide calculation like this, we first need to formally add the restaurants into the network.
- network.set_pois()
Step13: (The maxdist and maxitems parameters are the maximum distance and item counts you anticipate using in proximity searches, so that Pandana can optimize the caching.)
Now we can run the query.
- network.nearest_pois()
Step14: These are the distances (in meters) and IDs of the three closest restaurants to each network node.
<a id='section4'></a>
4. Accessibility
How many restaurants are within 500 meters of each node?
Pandana calls this kind of calculation an aggregation. It requires passing a list of network nodes and associated values.
In this case, the "value" is just presence of a restaurant, but it could also be characteristics like square footage of a building or income of a household.
network.get_node_ids()
network.set()
Step15: Now we can run the query.
network.aggregate()
Step16: Note that you can also calculate means, sums, percentiles, and other metrics, as well as applying linear or exponential "decay" to more distant values.
<a id='section5'></a>
5. Visualization
Pandana's built-in plot function uses an older Matplotlib extension called Basemap that's now difficult to install. So here we'll just use Matplotlib directly.
Mapping restaurant accessibility | Python Code:
import numpy as np
import pandas as pd
import pandana
print(pandana.__version__)
Explanation: Pandana demo
Sam Maurer, July 2020
This notebook demonstrates the main features of the Pandana library, a Python package for network analysis that uses contraction hierarchies to calculate super-fast travel accessibility metrics and shortest paths.
See full documentation here: http://udst.github.io/pandana/
Sections
1. Loading data
2. Shortest paths
3. Proximity
4. Accessibility
5. Visualization
End of explanation
pd.options.display.float_format = '{:.2f}'.format
Explanation: Suppress scientific notation in the output.
End of explanation
from pandana.loaders import osm
import warnings
warnings.filterwarnings('ignore')
Explanation: <a id='section1'></a>
1. Loading data
Load street networks directly from Open Street Map
This requires installing a Pandana extension called OSMnet.
- conda install osmnet or pip install osmnet
You can use http://boundingbox.klokantech.com/ to get the coordinates of bounding boxes.
End of explanation
network = osm.pdna_network_from_bbox(37.698, -122.517, 37.819, -122.354) # San Francisco, CA
Explanation: pandana.loaders.osm.pdna_network_from_bbox()
End of explanation
network.nodes_df.head()
network.edges_df.head()
Explanation: What does the network look like?
Edges and nodes are visible as DataFrames.
End of explanation
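A quick way to get a feel for the network's size (the exact counts will vary with the OSM data at download time):
# How many intersections (nodes) and street segments (edges) did we get?
print(len(network.nodes_df), len(network.edges_df))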
network.nodes_df.to_csv('nodes.csv')
network.edges_df.to_csv('edges.csv')
nodes = pd.read_csv('nodes.csv', index_col=0)
edges = pd.read_csv('edges.csv', index_col=[0,1])
network = pandana.Network(nodes['x'], nodes['y'],
edges['from'], edges['to'], edges[['distance']])
Explanation: Saving and reloading networks
You can't directly save a Pandana network object, but you can easily recreate it from the nodes and edges.
- pandana.Network()
End of explanation
restaurants = osm.node_query(
37.698, -122.517, 37.819, -122.354, tags='"amenity"="restaurant"')
Explanation: <a id='section2'></a>
2. Shortest paths
This functionality was added in Pandana v0.5.
Load some restaurant locations
Here we'll load the locations of restaurants listed on Open Street Map (using the same OSMnet extension as above), and then calculate some shortest paths between them.
- pandana.loaders.osm.node_query()
End of explanation
res = restaurants.sample(2)
res
Explanation: Choose two at random:
End of explanation
nodes = network.get_node_ids(res.lon, res.lat).values
nodes
Explanation: Calculate the shortest route between them
First, identify the nearest node to each restaurant.
- network.get_node_ids()
End of explanation
network.shortest_path(nodes[0], nodes[1])
network.shortest_path_length(nodes[0], nodes[1])
Explanation: Then get the routing between the nodes.
- network.shortest_path()
- network.shortest_path_length()
- network.shortest_path_lengths()
End of explanation
restaurant_nodes = network.get_node_ids(restaurants.lon, restaurants.lat).values
origs = [o for o in restaurant_nodes for d in restaurant_nodes]
dests = [d for o in restaurant_nodes for d in restaurant_nodes]
%%time
distances = network.shortest_path_lengths(origs, dests)
Explanation: This network's distance units are meters.
Calculate many shortest paths at once
Pandana can generate several million shortest paths in less than a minute.
End of explanation
pd.Series(distances).describe()
Explanation: Now we have the distance from each restaurant to each other restaurant.
End of explanation
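Because origs and dests were built as a full cross product, the flat list of distances can optionally be reshaped into a square origin-destination matrix; this is a small post-processing sketch added here, with dist_matrix as an illustrative name.
# Optional: view the results as an origin-destination matrix (row = origin, column = destination)
n = len(restaurant_nodes)
dist_matrix = np.array(distances).reshape(n, n)
print(dist_matrix.shape)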
network.set_pois(category = 'restaurants',
maxdist = 1000,
maxitems = 3,
x_col = restaurants.lon,
y_col = restaurants.lat)
Explanation: <a id='section3'></a>
3. Proximity
Find the closest restaurants to each node
To do a network-wide calculation like this, we first need to formally add the restaurants into the network.
- network.set_pois()
End of explanation
results = network.nearest_pois(distance = 1000,
category = 'restaurants',
num_pois = 3,
include_poi_ids = True)
results.head()
Explanation: (The maxdist and maxitems parameters are the maximum distance and item counts you anticipate using in proximity searches, so that Pandana can optimize the caching.)
Now we can run the query.
- network.nearest_pois()
End of explanation
restaurant_nodes = network.get_node_ids(restaurants.lon, restaurants.lat)
network.set(restaurant_nodes,
name = 'restaurants')
Explanation: These are the distances (in meters) and IDs of the three closest restaurants to each network node.
<a id='section4'></a>
4. Accessibility
How many restaurants are within 500 meters of each node?
Pandana calls this kind of calculation an aggregation. It requires passing a list of network nodes and associated values.
In this case, the "value" is just presence of a restaurant, but it could also be characteristics like square footage of a building or income of a household.
network.get_node_ids()
network.set()
End of explanation
accessibility = network.aggregate(distance = 500,
type = 'count',
name = 'restaurants')
accessibility.describe()
Explanation: Now we can run the query.
network.aggregate()
End of explanation
import matplotlib
from matplotlib import pyplot as plt
print(matplotlib.__version__)
fig, ax = plt.subplots(figsize=(10,8))
plt.title('San Francisco: Restaurants within 500m')
plt.scatter(network.nodes_df.x, network.nodes_df.y,
c=accessibility, s=1, cmap='YlOrRd',
norm=matplotlib.colors.LogNorm())
cb = plt.colorbar()
plt.show()
Explanation: Note that you can also calculate means, sums, percentiles, and other metrics, as well as applying linear or exponential "decay" to more distant values.
<a id='section5'></a>
5. Visualization
Pandana's built-in plot function uses an older Matplotlib extension called Basemap that's now difficult to install. So here we'll just use Matplotlib directly.
Mapping restaurant accessibility
End of explanation |
2,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 7 – Ensemble Learning and Random Forests
This notebook contains all the sample code and solutions to the exercises in chapter 7.
<table align="left">
<td>
<a target="_blank" href="https
Step1: Voting classifiers
Step2: Warning
Step3: Bagging ensembles
Step4: Random Forests
Step5: Out-of-Bag evaluation
Step6: Feature importance
Step7: AdaBoost
Step8: Gradient Boosting
Step9: Gradient Boosting with Early stopping
Step10: Using XGBoost
Step11: Exercise solutions
1. to 7.
See Appendix A.
8. Voting Classifier
Exercise
Step12: Exercise
Step13: The linear SVM is far outperformed by the other classifiers. However, let's keep it for now since it may improve the voting classifier's performance.
Exercise
Step14: Let's remove the SVM to see if performance improves. It is possible to remove an estimator by setting it to None using set_params() like this
Step15: This updated the list of estimators
Step16: However, it did not update the list of trained estimators
Step17: So we can either fit the VotingClassifier again, or just remove the SVM from the list of trained estimators
Step18: Now let's evaluate the VotingClassifier again
Step19: A bit better! The SVM was hurting performance. Now let's try using a soft voting classifier. We do not actually need to retrain the classifier, we can just set voting to "soft"
Step20: That's a significant improvement, and it's much better than each of the individual classifiers.
Once you have found one, try it on the test set. How much better does it perform compared to the individual classifiers?
Step21: The voting classifier reduced the error rate from about 4.0% for our best model (the MLPClassifier) to just 3.1%. That's about 22.5% less errors, not bad!
9. Stacking Ensemble
Exercise
Step22: You could fine-tune this blender or try other types of blenders (e.g., an MLPClassifier), then select the best one using cross-validation, as always.
Exercise | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
Explanation: Chapter 7 – Ensemble Learning and Random Forests
This notebook contains all the sample code and solutions to the exercises in chapter 7.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/07_ensemble_learning_and_random_forests.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
Explanation: Voting classifiers
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(solver="liblinear", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=10, random_state=42)
svm_clf = SVC(gamma="auto", random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
log_clf = LogisticRegression(solver="liblinear", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=10, random_state=42)
svm_clf = SVC(gamma="auto", probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
Explanation: Warning: In Scikit-Learn 0.20, some hyperparameters (solver, n_estimators, gamma, etc.) start issuing warnings about the fact that their default value will change in Scikit-Learn 0.22. To avoid these warnings and ensure that this notebooks keeps producing the same outputs as in the book, I set the hyperparameters to their old default value. In your own code, you can simply rely on the latest default values instead.
End of explanation
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.subplot(122)
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
Explanation: Bagging ensembles
End of explanation
bag_clf = BaggingClassifier(
DecisionTreeClassifier(splitter="random", max_leaf_nodes=16, random_state=42),
n_estimators=500, max_samples=1.0, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.02, contour=False)
plt.show()
Explanation: Random Forests
End of explanation
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, n_jobs=-1, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
Explanation: Out-of-Bag evaluation
End of explanation
try:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
mnist.target = mnist.target.astype(np.int64)
except ImportError:
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
rnd_clf = RandomForestClassifier(n_estimators=10, random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
Explanation: Feature importance
End of explanation
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
plt.figure(figsize=(11, 4))
for subplot, learning_rate in ((121, 1), (122, 0.5)):
sample_weights = np.ones(m)
plt.subplot(subplot)
for i in range(5):
svm_clf = SVC(kernel="rbf", C=0.05, gamma="auto", random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
sample_weights[y_pred != y_train] *= (1 + learning_rate)
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 121:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
save_fig("boosting_plot")
plt.show()
list(m for m in dir(ada_clf) if not m.startswith("_") and m.endswith("_"))
Explanation: AdaBoost
End of explanation
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
save_fig("gbrt_learning_rate_plot")
plt.show()
Explanation: Gradient Boosting
End of explanation
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1
gbrt_best = GradientBoostingRegressor(max_depth=2,n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
print("Minimum validation MSE:", min_val_error)
Explanation: Gradient Boosting with Early stopping
End of explanation
try:
import xgboost
except ImportError as ex:
print("Error: the xgboost library is not installed.")
xgboost = None
if xgboost is not None: # not shown in the book
xgb_reg = xgboost.XGBRegressor(random_state=42)
xgb_reg.fit(X_train, y_train)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
print("Validation MSE:", val_error)
if xgboost is not None: # not shown in the book
xgb_reg.fit(X_train, y_train,
eval_set=[(X_val, y_val)], early_stopping_rounds=2)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
print("Validation MSE:", val_error)
%timeit xgboost.XGBRegressor().fit(X_train, y_train) if xgboost is not None else None
%timeit GradientBoostingRegressor().fit(X_train, y_train)
Explanation: Using XGBoost
End of explanation
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(
mnist.data, mnist.target, test_size=10000, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=10000, random_state=42)
Explanation: Exercise solutions
1. to 7.
See Appendix A.
8. Voting Classifier
Exercise: Load the MNIST data and split it into a training set, a validation set, and a test set (e.g., use 50,000 instances for training, 10,000 for validation, and 10,000 for testing).
The MNIST dataset was loaded earlier.
End of explanation
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
random_forest_clf = RandomForestClassifier(n_estimators=10, random_state=42)
extra_trees_clf = ExtraTreesClassifier(n_estimators=10, random_state=42)
svm_clf = LinearSVC(random_state=42)
mlp_clf = MLPClassifier(random_state=42)
estimators = [random_forest_clf, extra_trees_clf, svm_clf, mlp_clf]
for estimator in estimators:
print("Training the", estimator)
estimator.fit(X_train, y_train)
[estimator.score(X_val, y_val) for estimator in estimators]
Explanation: Exercise: Then train various classifiers, such as a Random Forest classifier, an Extra-Trees classifier, and an SVM.
End of explanation
from sklearn.ensemble import VotingClassifier
named_estimators = [
("random_forest_clf", random_forest_clf),
("extra_trees_clf", extra_trees_clf),
("svm_clf", svm_clf),
("mlp_clf", mlp_clf),
]
voting_clf = VotingClassifier(named_estimators)
voting_clf.fit(X_train, y_train)
voting_clf.score(X_val, y_val)
[estimator.score(X_val, y_val) for estimator in voting_clf.estimators_]
Explanation: The linear SVM is far outperformed by the other classifiers. However, let's keep it for now since it may improve the voting classifier's performance.
Exercise: Next, try to combine them into an ensemble that outperforms them all on the validation set, using a soft or hard voting classifier.
End of explanation
voting_clf.set_params(svm_clf=None)
Explanation: Let's remove the SVM to see if performance improves. It is possible to remove an estimator by setting it to None using set_params() like this:
End of explanation
voting_clf.estimators
Explanation: This updated the list of estimators:
End of explanation
voting_clf.estimators_
Explanation: However, it did not update the list of trained estimators:
End of explanation
del voting_clf.estimators_[2]
Explanation: So we can either fit the VotingClassifier again, or just remove the SVM from the list of trained estimators:
End of explanation
voting_clf.score(X_val, y_val)
Explanation: Now let's evaluate the VotingClassifier again:
End of explanation
voting_clf.voting = "soft"
voting_clf.score(X_val, y_val)
Explanation: A bit better! The SVM was hurting performance. Now let's try using a soft voting classifier. We do not actually need to retrain the classifier, we can just set voting to "soft":
End of explanation
voting_clf.score(X_test, y_test)
[estimator.score(X_test, y_test) for estimator in voting_clf.estimators_]
Explanation: That's a significant improvement, and it's much better than each of the individual classifiers.
Once you have found one, try it on the test set. How much better does it perform compared to the individual classifiers?
End of explanation
X_val_predictions = np.empty((len(X_val), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_val_predictions[:, index] = estimator.predict(X_val)
X_val_predictions
rnd_forest_blender = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
rnd_forest_blender.fit(X_val_predictions, y_val)
rnd_forest_blender.oob_score_
Explanation: The voting classifier reduced the error rate from about 4.0% for our best model (the MLPClassifier) to just 3.1%. That's about 22.5% fewer errors, not bad!
9. Stacking Ensemble
Exercise: Run the individual classifiers from the previous exercise to make predictions on the validation set, and create a new training set with the resulting predictions: each training instance is a vector containing the set of predictions from all your classifiers for an image, and the target is the image's class. Train a classifier on this new training set.
End of explanation
X_test_predictions = np.empty((len(X_test), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_test_predictions[:, index] = estimator.predict(X_test)
y_pred = rnd_forest_blender.predict(X_test_predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
Explanation: You could fine-tune this blender or try other types of blenders (e.g., an MLPClassifier), then select the best one using cross-validation, as always.
Exercise: Congratulations, you have just trained a blender, and together with the classifiers they form a stacking ensemble! Now let's evaluate the ensemble on the test set. For each image in the test set, make predictions with all your classifiers, then feed the predictions to the blender to get the ensemble's predictions. How does it compare to the voting classifier you trained earlier?
End of explanation |
2,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Post-processing Examples
This notebook provides some examples for using the post-processing features in RESSPyLab.
Automatic table generation and calculation of the consistency metric $\xi_2$ are shown for both the original and updated Voce-Chaboche (UVC) models.
Note that there is an example for plotting output in each of the calibration examples.
Step1: Original Voce-Chaboche model
First we will use RESSPyLab to generate a formatted table of parameters including the relative error metric, $\bar{\varphi}$.
The inputs to this function are
Step2: Tables can be easily generated following a standard format for several data sets by appending additional entries to the lists of values in material_def and to x_logs_all and data_all.
Now we will generate the consistency metric, $\xi_2$.
The input arguments are
Step3: The value of $\xi_2 = 65$ %, indicating that the two sets of parameters are inconsistent for this data set.
Updated Voce-Chaboche model
The inputs to generate the tables are the same as for the original model, however the input parameters have to come from optimization using the updated model. | Python Code:
# First load RESSPyLab and necessary packages
import numpy as np
import RESSPyLab as rpl
Explanation: Post-processing Examples
This notebook provides some examples for using the post-processing features in RESSPyLab.
Automatic table generation and calculation of the consistency metric $\xi_2$ are shown for both the original and updated Voce-Chaboche (UVC) models.
Note that there is an example for plotting output in each of the calibration examples.
End of explanation
# Identify the material
material_def = {'material_id': ['Example 1'], 'load_protocols': ['1,5']}
# Set the path to the x log file
x_log_file_1 = './output/x_log.txt'
x_logs_all = [x_log_file_1]
# Load the data
data_files_1 = ['example_1.csv']
data_1 = rpl.load_data_set(data_files_1)
data_all = [data_1]
# Make the tables
param_table, metric_table = rpl.summary_tables_maker_vc(material_def, x_logs_all, data_all)
Explanation: Original Voce-Chaboche model
First we will use RESSPyLab to generate a formatted table of parameters including the relative error metric, $\bar{\varphi}$.
The inputs to this function are:
1. Information about the name of the data set and the load protocols used in the optimization.
2. The file containing the history of parameters (generated from the optimization).
3. The data used in the optimization.
Two tables are returned (as pandas DataFrames) and are printed to screen in LaTeX format.
If you want the tables in some other format it is best to operate on the DataFrames directly (e.g., use to_csv()).
End of explanation
# Load the base parameters, we want the last entry in the file
x_base = np.loadtxt(x_log_file_1, delimiter=' ')
x_base = x_base[-1]
# Load (or set) the sample parameters
x_sample = np.array([179750., 318.47, 100.72, 8.00, 11608.17, 145.22, 1026.33, 4.68])
# Calculate the metric
consistency_metric = rpl.vc_consistency_metric(x_base, x_sample, data_1)
print(consistency_metric)
Explanation: Tables can be easily generated following a standard format for several data sets by appending additional entries to the lists of values in material_def and to x_logs_all and data_all.
Now we will generate the consistency metric, $\xi_2$.
The input arguments are:
1. The parameters of the base case.
2. The parameters of the case that you would like to compare with.
3. The set of data to compute this metric over.
The metric is returned (the raw value, NOT as a percent) directly from this function.
End of explanation
# Identify the material
material_def = {'material_id': ['Example 1'], 'load_protocols': ['1']}
# Set the path to the x log file
x_log_file_2 = './output/x_log_upd.txt'
x_logs_all = [x_log_file_2]
# Load the data
data_files_2 = ['example_1.csv']
data_2 = rpl.load_data_set(data_files_2)
data_all = [data_2]
# Make the tables
param_table, metric_table = rpl.summary_tables_maker_uvc(material_def, x_logs_all, data_all)
Explanation: The value of $\xi_2 = 65$ %, indicating that the two sets of parameters are inconsistent for this data set.
Updated Voce-Chaboche model
The inputs to generate the tables are the same as for the original model, however the input parameters have to come from optimization using the updated model.
End of explanation |
2,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Define and Preview Sets
Step1: Define Metric
Also, show values
Step2: Clip and compare
We are going to create a comparison object which contains sets that are proper subsets of the original (we will be dividing the number of samples in half). However, since the Voronoi cells that are implicitly defined and consitute the $\sigma$-algebra are going to be fundamentally different, we observe that the two densities reflect the differences in geometry.
Our chosen densities are uniform and centered in the middle of the domain. The integration sample set is copied during the clipping procedure by default, but can be changed by passing copy=False to clip if you prefer the two comparisons are linked.
Step3: Observe how these are distinctly different objects in memory
Step4: Density Emulation
We will now estimate the densities on the two comparison objects (remember, one is a clipped version of the other, but they share the same integration_sample_set).
Step5: Clipped
Step6: Distances
Step7: Interactive Demonstration of compP.density
This will require ipywidgets. It is a minimalistic example of using the density method without the comparison class.
Step8: Below, we show an example of using the comparison object to get a better picture of the sets defined above, without necessarily needing to compare two measures. | Python Code:
num_samples_left = 50
num_samples_right = 50
delta = 0.5 # width of measure's support per dimension
L = unit_center_set(2, num_samples_left, delta)
R = unit_center_set(2, num_samples_right, delta)
plt.scatter(L._values[:,0], L._values[:,1], c=L._probabilities)
plt.xlim([0,1])
plt.ylim([0,1])
plt.show()
plt.scatter(R._values[:,0], R._values[:,1], c=R._probabilities)
plt.xlim([0,1])
plt.ylim([0,1])
plt.show()
Explanation: Define and Preview Sets
End of explanation
num_emulation_samples = 2000
mm = compP.compare(L, R, num_emulation_samples) # initialize metric
# mm.get_left().get_values()
# mm.get_right().get_values()
Explanation: Define Metric
Also, show values
End of explanation
# cut both sample sets in half
mc = mm.clip(num_samples_left//2,num_samples_right//2)
# mc.get_left().get_values()
# mc.get_right().get_values()
Explanation: Clip and compare
We are going to create a comparison object which contains sets that are proper subsets of the original (we will be dividing the number of samples in half). However, since the Voronoi cells that are implicitly defined and constitute the $\sigma$-algebra are going to be fundamentally different, we observe that the two densities reflect the differences in geometry.
Our chosen densities are uniform and centered in the middle of the domain. The integration sample set is copied during the clipping procedure by default, but can be changed by passing copy=False to clip if you prefer the two comparisons are linked.
End of explanation
mm, mc
Explanation: Observe how these are distinctly different objects in memory:
End of explanation
ld1,rd1 = mm.estimate_density()
I = mc.get_emulated().get_values()
plt.scatter(I[:,0], I[:,1], c=rd1,s =10, alpha=0.5)
plt.scatter(R._values[:,0], R._values[:,1], marker='o', s=50, c='k')
plt.xlim([0,1])
plt.ylim([0,1])
plt.title("Right Density")
plt.show()
plt.scatter(I[:,0], I[:,1], c=ld1, s=10, alpha=0.5)
plt.scatter(L._values[:,0], L._values[:,1], marker='o', s=50, c='k')
plt.xlim([0,1])
plt.ylim([0,1])
plt.title("Left Density")
plt.show()
Explanation: Density Emulation
We will now estimate the densities on the two comparison objects (remember, one is a clipped version of the other, but they share the same integration_sample_set).
End of explanation
ld2,rd2 = mc.estimate_density()
plt.scatter(I[:,0], I[:,1], c=rd2,s =10, alpha=0.5)
plt.scatter(mc.get_right()._values[:,0],
mc.get_right()._values[:,1],
marker='o', s=50, c='k')
plt.xlim([0,1])
plt.ylim([0,1])
plt.title("Right Density")
plt.show()
plt.scatter(I[:,0], I[:,1], c=ld2, s=10, alpha=0.5)
plt.scatter(mc.get_left()._values[:,0],
mc.get_left()._values[:,1],
marker='o', s=50, c='k')
plt.xlim([0,1])
plt.ylim([0,1])
plt.title("Left Density")
plt.show()
Explanation: Clipped
End of explanation
from scipy.stats import entropy as kl_div
mm.set_left(unit_center_set(2, 1000, delta/2))
mm.set_right(unit_center_set(2, 1000, delta))
print([mm.value(kl_div),
mm.value('tv'),
mm.value('totvar'),
mm.value('mink', w=0.5, p=1),
mm.value('norm'),
mm.value('sqhell'),
mm.value('hell'),
mm.value('hellinger')])
Explanation: Distances
End of explanation
import ipywidgets as wd
def show_clip(samples=100, delta=0.5):
np.random.seed(int(121))
S = unit_center_set(2, samples, delta)
compP.density(S)
plt.figure()
plt.scatter(S._values[:,0], S._values[:,1],
c=S._density.ravel())
plt.show()
wd.interact(show_clip, samples=(20,500), delta=(0.05,1,0.05))
Explanation: Interactive Demonstration of compP.density
This will require ipywidgets. It is a minimalistic example of using the density method without the comparison class.
End of explanation
import scipy.stats as sstats
def show_clipm(samples=100, delta=0.5):
np.random.seed(int(121))
S = unit_center_set(2, samples, delta)
# alternative probabilities
xprobs = sstats.distributions.norm(0.5, delta).pdf(S._values[:,0])
yprobs = sstats.distributions.norm(0.5, delta).pdf(S._values[:,1])
probs = xprobs*yprobs
S.set_probabilities(probs*S._volumes)
I = mm.get_emulated()
m = compP.comparison(I,S,None)
m.estimate_density_left()
plt.figure()
plt.scatter(I._values[:,0], I._values[:,1],
c=S._emulated_density.ravel())
plt.scatter([0.5], [0.5], marker='x')
plt.show()
wd.interact(show_clipm, samples=(20,500), delta=(0.1,1,0.05))
Explanation: Below, we show an example of using the comparison object to get a better picture of the sets defined above, without necessarily needing to compare two measures.
End of explanation |
2,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Sylbreak in Jupyter Notebook
This Jupyter Notebook demonstrates, as an example, how to wrap and use the Sylbreak Python program I published on GitHub at https
Step2: I just typed whatever came to mind and ran syllable segmentation on it.
Step3: Typing order
For any Myanmar-language NLP (Natural Language Processing) task that starts with syllable segmentation, the sentences really need to be cleaned first, including typing-order errors and the other mistakes that commonly occur. Otherwise sylbreak cannot correctly split the text into the (roughly defined) Myanmar syllable units I use. There are a great many kinds of errors in Myanmar sentences, and some of them cannot be told apart just by looking. Here I explain one or two typing-order errors as examples and look at the wrong sylbreak output they can produce.
The "ခန့်" used below was typed in the wrong order "ခ န ့ ်" (kha-gway, na-ngeh, auk-myit, a-that). That is why, in the sylbreak output, the kha-gway comes out separated from the "na-ngeh a-that auk-myit" part.
Step4: The actually correct typing order for "ခန့်" is "ခ န ် ့" (kha-gway, na-ngeh, a-that, auk-myit).
Visually you cannot tell the difference, but if it is typed with the correct order, sylbreak prints "ခန့်" as a single syllable.
Step5: Let's look at another typing-order error.
Step6: If the wrong sequence "ညကြီး အောက်မြစ် အသတ်" (nya-gyi, auk-myit, a-that) is retyped as "ညကြီး အသတ် အောက်မြစ်" (nya-gyi, a-that, auk-myit)
and run through sylbreak again, then as shown below the "ထ" and "ည့်", and the "သ" and "ည့်", are no longer split apart and the segmentation comes out correctly.
Step7: Some errors can be spotted by eye if you pay attention.
For example, mistyping "ဥ" (the letter U) and "ဉ" (nya-lay).
However, whenever I work with large collections of Myanmar sentences, this kind of error is always present.
If the font renders them correctly, the real nya-lay has a longer tail.
One reason most typists do not notice is that some text editors cannot display the letter "ဥ" and the nya-lay "ဉ" distinctly.
Step8: Below is the segmentation run again on the sentence after the incorrect nya-lay from Wikipedia has been retyped with the letter "ဥ". For this nya-lay vs letter-U mistake there is no noticeable difference in the syllable segmentation result. | Python Code:
# Regular Expression Python Library ကို သုံးလို့ရအောင် import လုပ်တာ
import re
# စာလုံးတွေကို အုပ်စုဖွဲ့တာ (သို့) variable declaration လုပ်တာ
# တကယ်လို့ syllable break လုပ်တဲ့ အခါမှာ မြန်မာစာလုံးချည်းပဲ သပ်သပ် လုပ်ချင်တာဆိုရင် enChar က မလိုပါဘူး
myConsonant = "က-အ"
enChar = "a-zA-Z0-9"
otherChar = "ဣဤဥဦဧဩဪဿ၌၍၏၀-၉၊။!-/:-@[-`{-~\s"
ssSymbol = '္'
ngaThat = 'င်'
aThat = '်'
# Regular expression pattern for Myanmar syllable breaking
# *** a consonant not after a subscript symbol AND
# a consonant is not followed by a-That character or a subscript symbol
# မြန်မာစာကို syllable segmentation လုပ်ဖို့အတွက်က ဒီ RE pattern တစ်ခုတည်းနဲ့ အဆင်ပြေတယ်။
BreakPattern = re.compile(r"((?<!" + ssSymbol + r")["+ myConsonant + r"](?![" + aThat + ssSymbol + r"])" + r"|[" + enChar + otherChar + r"])", re.UNICODE)
# sylbreak function ဆောက်တဲ့ အပိုင်း
def sylbreak(line):
line = re.sub(r"\s+","", line)
line = BreakPattern.sub(r" " + r"\1", line)
return line
# sylbreak function ကိုခေါ်သုံးကြည့်ရအောင်
sylbreak("မြန်မာစာသည် တို့စာ။ တို့စာကို သုတေသန လုပ်ပါ။")
Explanation: Using Sylbreak in Jupyter Notebook
This Jupyter Notebook demonstrates, as an example, how to wrap the Sylbreak Python program I published on GitHub, https://github.com/ye-kyaw-thu/sylbreak/blob/master/python/sylbreak.py, as a function and use it in a Jupyter Notebook with a Python 3 kernel.
When "sylbreak.py" was used on its own, the spaces in the input line were removed beforehand (e.g. cat input.txt | sed 's/ //g' | python sylbreak.py ...), but in this sylbreak function the space removal is done inside the function itself.
For that, the statement "line = re.sub(ur"\s+","", line)" was added.
End of explanation
sylbreak(အာခီမီးဒီးစ်ကို ဘီစီ ၂၈၇ ခန့်က ရှေးဟောင်း မဂ္ဂနာဂရေစီယာပြည်လက်အောက်ခံ စစ္စလီပြည် ဆိုင်ရာကျူးစ် မြို့ တွင် မွေးဖွားခဲ့သည်။ ဘိုင်ဇန်တိုင်းဂရိခေတ် က သမိုင်းပညာရှင် ဂျွန်ဇီဇီ ၏ မှတ်တမ်းအရ အာခီမီးဒီးစ်သည် အသက် ၇၅ နှစ်အထိ နေထိုင်သွားရကြောင်း သိရသည်။ အာခီမီးဒီးစ်သည် သူ၏ တီထွင်မှု တစ်ခုဖြစ်သော သဲနာရီ နှင့် ပတ်သက်၍ ရေးသားထားသော Sand Reckoners အမည်ရှိ စာတမ်းများတွင် သူ၏ ဖခင်အမည်ကို နက္ခတ္တဗေဒပညာရှင် ဖီးဒီးယပ်စ် ဟု ဖော်ပြထားသည်။ သမိုင်းပညာရှင် ပလူးတပ် ရေးသားသော ခေတ်ပြိုင်ပုဂ္ဂိုလ်ထူးကြီးများ စာအုပ်တွင် အာခီမီးဒီးစ်သည် ဆိုင်ရာကျူးစ်ဘုရင် ဒုတိယမြောက်ဟီရိုးနှင့် ဆွေမျိုး တော်စပ်ကြောင်း ဖော်ပြထားသည်။ သူငယ်ရွယ်စဉ်က အီဂျစ်ပြည် အလက်ဇန္ဒြီးယားမြို့ တွင် ပညာဆည်းပူး ခဲ့သည်ဟု ယူဆရသည်။ ဘီစီ ၂၁၂ တွင် အာခီမီးဒီးစ် သေဆုံးခဲ့သည်။ ရောမစစ်ဗိုလ်ချုပ် မားကပ်စ် ကလောဒီးယပ်စ် မာဆဲလပ်စ် က နှစ်နှစ်ကြာဝိုင်းရံ ပိတ်ဆို့ပြီးနောက် ဆိုင်ရာကျူးစ် မြို့ကို သိမ်းပိုက်လိုက်သည်။ ထိုအချိန်တွင် အာခီမီးဒီးသည် ဂျော်မက်ထရီ ပုစ္ဆာတစ်ပုဒ်ကို စဉ်းစား အဖြေရှာနေခိုက် ဖြစ်သည်။ ရောမစစ်သားက သူ့အား ဖမ်းဆီးလိုက်ပြီး ဗိုလ်ချုပ် မာဆဲလပ်စ် နှင့် တွေ့ဆုံရန် ပြောဆိုရာ သူက သူ၏ပုစ္ဆာစဉ်းစားနေဆဲဖြစ်၍ မတွေ့လိုကြောင်း ငြင်းဆိုသည်တွင် ရောမစစ်သားက ဒေါသထွက်ကာ ဓားဖြင့် ထိုးသတ်လိုက်သည်ဟု ပလူးတပ် က ရေးသားခဲ့သည်။ ဗိုလ်ချုပ် မာဆဲလပ်စ်သည် အာခီမီးဒီးစ် သေဆုံးသွားသည့် အတွက် များစွာ နှမြောတသဖြစ်ရသည်။ အာခီမီးဒီးစ်အား ပညာရှင် တစ်ယောက်အဖြစ် သိရှိထားသောကြောင့် မသတ်ရန် ကြိုတင် အမိန့်ပေးထားခဲ့သည်။ “ငါ့စက်ဝိုင်းတွေပေါ် တက်မနင်းပါနဲ့”ဟူသော စကားကို အာခီမီးဒီးစ် နောက်ဆုံး ပြောဆိုခဲ့သည်ဟု အချို့က ယူဆကြသော်လည်း သမိုင်းပညာရှင် ပလူးတပ် ရေးသော စာအုပ်တွင်မူ မပါရှိပေ။ အာခီမီးဒီးစ်၏ ဂူဗိမ္မာန်တွင် ထုလုံးရှည်မှန်တစ်ခုအတွင်း စက်လုံးတစ်ခုကို ထည့်သွင်းထားသည့် ရုပ်တုတစ်ခုကို စိုက်ထူထားသည်။ အာခီမီးဒီးစ် သေဆုံးပြီး နှစ်ပေါင်း ၁၃၇နှစ်အကြာ ဘီစီ ၇၅တွင် ရောမခေတ် နိုင်ငံရေးသုခမိန် ဆီဇာရိုက အာခီမီးဒီးစ် အကြောင်းကြားသိရ၍ သူ၏ အုတ်ဂူအား ရှာဖွေခဲ့သည်။ ခြုံနွယ်ပိတ်ပေါင်းများ ဖုံးအုပ်နေသော အာခီမီးဒီးစ်၏ အုတ်ဂူကို ဆိုင်ရာကျူးစ်မြို့အနီးတွင် ရှာဖွေ တွေ့ရှိခဲ့ပြီး သန့်ရှင်းရေးပြုလုပ်ကာ အုတ်ဂူပေါ်မှ စာသားများကို ဖတ်ရှုသွားသည်။ ဆိုင်ရာကျူးစ်စစ်ပွဲ အပြီး နှစ်ပေါင်း ၇၀ အကြာတွင် ပိုလီးဘီးယပ်စ် ရေးသားသော ဆိုင်ရာကျူးစ်စစ်ပွဲ အကြောင်း စာအုပ်တွင် အာခီမီးဒီးစ်နှင့် ပတ်သက်သော အကြောင်းများ ပါရှိ၍ သမိုင်းပညာရှင် ပလူးတပ် က ထပ်မံ ရေးသားနိုင်ခဲ့ခြင်း ဖြစ်ပါသည်။ ဆိုင်ရာကျူးစ်မြို့ ကာကွယ်ရေးအတွက် စစ်ပွဲဝင် စက်ကိရိယာ လက်နက်ဆန်းများကိုလည်း အာခီမီးဒီးစ်က တီထွင်ပေးခဲ့ကြောင်း အဆိုပါ စာအုပ်တွင် ဖော်ပြပါရှိပါသည်။
)
Explanation: I just typed whatever came to mind and ran syllable segmentation on it. :)
As a further example, let's segment with sylbreak the sentences from the short biography of Archimedes
written in the Myanmar Wikipedia.
End of explanation
sylbreak("ဘီစီ ၂၈၇ ခန့်")
Explanation: Typing order
For any Myanmar-language NLP (Natural Language Processing) task that starts with syllable segmentation, the sentences really need to be cleaned first, including typing-order errors and the other mistakes that commonly occur. Otherwise sylbreak cannot correctly split the text into the (roughly defined) Myanmar syllable units I use. There are a great many kinds of errors in Myanmar sentences, and some of them cannot be told apart just by looking. Here I explain one or two typing-order errors as examples and look at the wrong sylbreak output they can produce.
The "ခန့်" used below was typed in the wrong order "ခ န ့ ်" (kha-gway, na-ngeh, auk-myit, a-that). That is why, in the sylbreak output, the kha-gway comes out separated from the "na-ngeh a-that auk-myit" part.
End of explanation
sylbreak("ဘီစီ ၂၈၇ ခန့်")
Explanation: The actually correct typing order for "ခန့်" is "ခ န ် ့" (kha-gway, na-ngeh, a-that, auk-myit).
Visually you cannot tell the difference, but if it is typed with the correct order, sylbreak prints "ခန့်" as a single syllable.
End of explanation
sylbreak("ထည့်သွင်းထားသည့်ရုပ်တု")
Explanation: Let's look at another typing-order error.
End of explanation
sylbreak("ထည့်သွင်းထားသည့်ရုပ်တု")
Explanation: "ညကြီး အောက်မြစ် အသတ်" ဆိုတဲ့ မှားနေတဲ့ အစီအစဉ်ကို "ညကြီး အသတ် အောက်မြစ်" ဆိုပြီး
ပြောင်းရိုက်ပြီးတော့ sylbreak လုပ်ကြည့်ရင်တော့ အောက်ပါအတိုင်း "ထ" နဲ့ "ည့်", "သ" နဲ့ "ည့်" တွေက ကွဲမနေတော့ပဲ မှန်မှန်ကန်ကန်ဖြတ်ပေးပါလိမ့်မယ်။
End of explanation
sylbreak("ကာရီသည်ဒီနှစ်၏ပါရမီရှင်တစ်ဉီးနှင့်ထိုက်တန်သောအမျိုးသမီးအဆိုရှင်ဖြစ်သည်။")
Explanation: Some errors can be spotted by eye if you pay attention.
For example, mistyping "ဥ" (the letter U) and "ဉ" (nya-lay).
However, whenever I work with large collections of Myanmar sentences, this kind of error is always present.
If the font renders them correctly, the real nya-lay has a longer tail.
One reason most typists do not notice is that some text editors cannot display the letter "ဥ" and the nya-lay "ဉ" distinctly.
End of explanation
sylbreak("ကာရီသည်ဒီနှစ်၏ပါရမီရှင်တစ်ဦးနှင့်ထိုက်တန်သောအမျိုးသမီးအဆိုရှင်ဖြစ်သည်။")
Explanation: Below is the segmentation run again on the sentence after the incorrect nya-lay from Wikipedia has been retyped with the letter "ဥ". For this nya-lay vs letter-U mistake there is no noticeable difference in the syllable segmentation result.
End of explanation |
2,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Misc Advanced Topics
copy — Duplicate Objects
Shallow Copies
The shallow copy created by copy() is a new container populated with references to the contents of the original object. When making a shallow copy of a list object, a new list is constructed and the elements of the original object are appended to it.
Step1: Deep Copies
The deep copy created by deepcopy() is a new container populated with copies of the contents of the original object. To make a deep copy of a list, a new list is constructed, the elements of the original list are copied, and then those copies are appended to the new list.
Replacing the call to copy() with deepcopy() makes the difference in the output apparent. | Python Code:
import copy
class MyTry:
def __init__(self):
self.lst = [1,2,3,4,5]
a = MyTry()
dup = copy.copy(a)
a.lst.append(6)
print(a.lst, dup.lst)
print(id(a), id(dup))
import copy
class MyTry:
def __init__(self):
self.lst = [1,2,3,4,5]
a = MyTry()
dup = copy.copy(a)
a.lst.append(6)
print(a.lst, dup.lst)
print(id(a), id(dup))
print(id(a.lst), id(dup.lst))
Explanation: Misc Advanced Topics
copy — Duplicate Objects
Shallow Copies
The shallow copy created by copy() is a new container populated with references to the contents of the original object. When making a shallow copy of a list object, a new list is constructed and the elements of the original object are appended to it.
End of explanation
import copy
class MyTry:
def __init__(self):
self.lst = [1,2,3,4,5]
a = MyTry()
dup = copy.deepcopy(a)
a.lst.append(6)
print(a.lst, dup.lst)
print(id(a), id(dup))
print(id(a.lst), id(dup.lst))
lst = [1,2,3,4,5]
dup_lst = copy.deepcopy(lst)
print(id(lst), id(dup_lst))
import copy
class MyTry:
def __init__(self):
self.lst = [1,2,3,4,[5]]
a = MyTry()
dup = copy.deepcopy(a)
a.lst.append(6)
print(a.lst, dup.lst)
print(id(a), id(dup))
print(id(a.lst), id(dup.lst))
Explanation: Deep Copies
The deep copy created by deepcopy() is a new container populated with copies of the contents of the original object. To make a deep copy of a list, a new list is constructed, the elements of the original list are copied, and then those copies are appended to the new list.
Replacing the call to copy() with deepcopy() makes the difference in the output apparent.
End of explanation |
2,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 4c
Step1: Verify CSV files exist
In the seventh lab of this series 4a_sample_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
Step2: Create Keras model
Lab Task #1
Step5: Lab Task #2
Step7: Lab Task #3
Step9: Lab Task #4
Step11: Lab Task #5
Step13: Lab Task #6
Step15: Lab Task #7
Step16: We can visualize the wide and deep network using the Keras plot_model utility.
Step17: Run and evaluate model
Lab Task #8
Step18: Visualize loss curve
Step19: Save the model | Python Code:
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print(tf.__version__)
Explanation: LAB 4c: Create Keras Wide and Deep model.
Learning Objectives
Set CSV Columns, label column, and column defaults
Make dataset of features and label from CSV files
Create input layers for raw features
Create feature columns for inputs
Create wide layer, deep dense hidden layers, and output layer
Create custom evaluation metric
Build wide and deep model tying all of the pieces together
Train and evaluate
Introduction
In this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Load necessary libraries
End of explanation
%%bash
ls *.csv
%%bash
head -5 *.csv
Explanation: Verify CSV files exist
In the seventh lab of this series 4a_sample_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
End of explanation
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = [""]
# TODO: Add string name for label column
LABEL_COLUMN = ""
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = []
Explanation: Create Keras model
Lab Task #1: Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* CSV_COLUMNS are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files
* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
End of explanation
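If you want to check your work, one possible completion is sketched below. The column order is an assumption about how the CSVs were written in lab 4a, so verify it against the head output above before relying on it.
# Possible completion (sketch) -- confirm the column order against your own CSV files.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# One default per column: floats for numeric fields, strings for categorical fields.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]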
def features_and_labels(row_data):
Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset()
# TODO: Map dataset to features and label
dataset = dataset.map() # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
Explanation: Lab Task #2: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourself from trying to recreate the wheel and can use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
End of explanation
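A possible completion of the two TODOs is sketched below (one valid answer, not the only one). It uses tf.data.experimental.make_csv_dataset with the CSV_COLUMNS and DEFAULTS defined earlier.
# Possible completion (sketch) of load_dataset.
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    # Build a dataset of feature dictionaries straight from the CSV files.
    dataset = tf.data.experimental.make_csv_dataset(
        file_pattern=pattern,
        batch_size=batch_size,
        column_names=CSV_COLUMNS,
        column_defaults=DEFAULTS)
    # Split each feature dictionary into (features, label).
    dataset = dataset.map(map_func=features_and_labels)
    # Shuffle and repeat only when training.
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(buffer_size=1000).repeat()
    return dataset.prefetch(buffer_size=1)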
def create_input_layers():
Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
# TODO: Create dictionary of tf.keras.layers.Input for each dense feature
deep_inputs = {}
# TODO: Create dictionary of tf.keras.layers.Input for each sparse feature
wide_inputs = {}
inputs = {**wide_inputs, **deep_inputs}
return inputs
Explanation: Lab Task #3: Create input layers for raw features.
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
End of explanation
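One hedged sketch of create_input_layers is shown below; the feature names and dtypes assume the CSV_COLUMNS defined above (two float features for the deep side, two string features for the wide side).
# Possible completion (sketch): one scalar Input per raw feature.
def create_input_layers():
    deep_inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="float32")
        for colname in ["mother_age", "gestation_weeks"]}
    wide_inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="string")
        for colname in ["is_male", "plurality"]}
    return {**wide_inputs, **deep_inputs}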
def create_feature_columns(nembeds):
Creates wide and deep dictionaries of feature columns from inputs.
Args:
nembeds: int, number of dimensions to embed categorical column down to.
Returns:
Wide and deep dictionaries of feature columns.
# TODO: Create deep feature columns for numeric features
deep_fc = {}
# TODO: Create wide feature columns for categorical features
wide_fc = {}
# TODO: Bucketize the float fields. This makes them wide
# TODO: Cross all the wide cols, have to do the crossing before we one-hot
# TODO: Embed cross and add to deep feature columns
return wide_fc, deep_fc
Explanation: Lab Task #4: Create feature columns for inputs.
Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
End of explanation
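A possible sketch of create_feature_columns follows. The vocabulary lists and bucket boundaries are illustrative assumptions about how the BigQuery export encoded these fields, not values mandated by the lab. The wide-side columns are wrapped in indicator_column because the model feeds them through tf.keras.layers.DenseFeatures, which requires dense columns.
# Possible completion (sketch). Vocabularies and boundaries are assumptions.
def create_feature_columns(nembeds):
    deep_fc = {
        colname: tf.feature_column.numeric_column(key=colname)
        for colname in ["mother_age", "gestation_weeks"]}
    wide_fc = {}
    wide_fc["is_male"] = tf.feature_column.indicator_column(
        categorical_column=tf.feature_column.categorical_column_with_vocabulary_list(
            key="is_male", vocabulary_list=["True", "False", "Unknown"]))
    wide_fc["plurality"] = tf.feature_column.indicator_column(
        categorical_column=tf.feature_column.categorical_column_with_vocabulary_list(
            key="plurality",
            vocabulary_list=["Single(1)", "Twins(2)", "Triplets(3)",
                             "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]))

    # Bucketize the float fields so they can also be used on the wide side.
    age_buckets = tf.feature_column.bucketized_column(
        source_column=deep_fc["mother_age"],
        boundaries=np.arange(15, 45, 1).tolist())
    wide_fc["age_buckets"] = tf.feature_column.indicator_column(
        categorical_column=age_buckets)
    gestation_buckets = tf.feature_column.bucketized_column(
        source_column=deep_fc["gestation_weeks"],
        boundaries=np.arange(17, 47, 1).tolist())
    wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
        categorical_column=gestation_buckets)

    # Cross the bucketized columns, then embed the cross for the deep side.
    crossed = tf.feature_column.crossed_column(
        keys=[age_buckets, gestation_buckets], hash_bucket_size=1000)
    deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
        categorical_column=crossed, dimension=nembeds)

    return wide_fc, deep_fc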
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
Creates model architecture and returns outputs.
Args:
wide_inputs: Dense tensor used as inputs to wide side of model.
deep_inputs: Dense tensor used as inputs to deep side of model.
dnn_hidden_units: List of integers where length is number of hidden
layers and ith element is the number of neurons at ith layer.
Returns:
Dense tensor output from the model.
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
# TODO: Create DNN model for the deep side
deep_out =
# TODO: Create linear model for the wide side
wide_out =
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# TODO: Create final output layer
return output
Explanation: Lab Task #5: Create wide and deep model and output layer.
So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right.
End of explanation
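One way the three TODOs could be filled in is sketched here: a stack of ReLU layers on the deep side, a small dense layer standing in for the linear model on the wide side, and a single linear-activation unit for the regression output.
# Possible completion (sketch) of get_model_outputs.
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
    # Deep side: stack of fully connected ReLU layers.
    deep = deep_inputs
    for layerno, numnodes in enumerate(dnn_hidden_units):
        deep = tf.keras.layers.Dense(
            units=int(numnodes), activation="relu",
            name="dnn_{}".format(layerno + 1))(deep)
    deep_out = deep

    # Wide side: a single dense layer acting as the (almost) linear model.
    wide_out = tf.keras.layers.Dense(
        units=10, activation="relu", name="linear")(wide_inputs)

    # Concatenate the two sides and regress to a single baby-weight value.
    both = tf.keras.layers.concatenate(
        inputs=[deep_out, wide_out], name="both")
    output = tf.keras.layers.Dense(
        units=1, activation="linear", name="weight")(both)
    return output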
def rmse(y_true, y_pred):
Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
# TODO: Calculate RMSE from true and predicted labels
pass
Explanation: Lab Task #6: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
End of explanation
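A minimal sketch of the metric is below. Remember to also pass it to model.compile (for example metrics=[rmse, "mse"]) so that history.history["rmse"] exists for the loss-curve plot later in the notebook.
# Possible completion (sketch): RMSE as a Keras-compatible metric.
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))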
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
Builds wide and deep model using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
# Create input layers
inputs = create_input_layers()
# Create feature columns
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
# TODO: Add wide and deep feature colummns
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=#TODO, name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=#TODO, name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
return model
print("Here is our wide and deep architecture so far:\n")
model = build_wide_deep_model()
print(model.summary())
Explanation: Lab Task #7: Build wide and deep model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
End of explanation
tf.keras.utils.plot_model(
model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR")
Explanation: We can visualize the wide and deep network using the Keras plot_model utility.
End of explanation
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset()
# TODO: Load evaluation dataset
evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit()
Explanation: Run and evaluate model
Lab Task #8: Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
End of explanation
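A possible completion of the training TODOs is sketched below; the file patterns assume the train/eval CSVs produced in lab 4a are named train*.csv and eval*.csv, so adjust them to your own filenames.
# Possible completion (sketch) of the dataset loading and model.fit call.
trainds = load_dataset(
    pattern="train*.csv",
    batch_size=TRAIN_BATCH_SIZE,
    mode=tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
    pattern="eval*.csv",
    batch_size=1000,
    mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)

history = model.fit(
    trainds,
    validation_data=evalds,
    epochs=NUM_EVALS,
    steps_per_epoch=steps_per_epoch,
    callbacks=[tensorboard_callback])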
# Plot
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
Explanation: Visualize loss curve
End of explanation
OUTPUT_DIR = "babyweight_trained_wd"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
Explanation: Save the model
End of explanation |
2,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Schilling distribution in CorMap
András Wacha
27th Oct. 2016.
Initialization
Step6: Definition of the algorithms
Algorithms according Schilling's paper
I have implemented these in Cython for the sake of speed
Step8: The cormap_pval algorithm from Daniel Franke, EMBL Hamburg
Step9: Validation of the Cython-algorithms
Check 1
Amatrix() returns the same values as Schilling's recursive formula for A_n(x)
Step10: Check 2
$A_n(x) = \sum_{j=0}^{x}a_n(j)$
Step11: Check 3
$p_n(x) = 2^{-n}a_n(x)$
Step12: Check 4
Well-known special cases
1 toss
2 possible outcomes
Step13: Estimate the execution time
Step14: Some visualization
Step15: Large numbers
Calculate a large p matrix (takes some time, see above) for use of later computations
Step16: Visualize the matrix
Step17: Compare with the algorithm in CorMap
Note that in CorMap, the p-value is the probability that the longest continuous sequence of either heads or tails in $n$ coin-tosses is not shorter than $x$, while our pmatrix() gives the probability that it is exactly $x$ long. The desired quantity is obtained from the pmatrix() approach as $\sum_{j \ge(x-1)} p_n(j)$. | Python Code:
%pylab inline
%load_ext cython
import time
import ipy_table
import numpy as np
import matplotlib.pyplot as plt
Explanation: The Schilling distribution in CorMap
András Wacha
27th Oct. 2016.
Initialization
End of explanation
%%cython
cimport numpy as np
import numpy as np
np.import_array()
cdef Py_ssize_t A_(Py_ssize_t n, Py_ssize_t x):
Calculate A_n(x) as per Schilling's original paper
cdef Py_ssize_t j
cdef Py_ssize_t val=0
if n<=x:
return 2**n
else:
for j in range(0, x+1):
val += A_(n-1-j, x)
return val
def A(n, x):
Python interface for A_(n, x)
return A_(n,x)
def Amatrix(Py_ssize_t N):
Calculate an NxN matrix of the Schilling distribution
The elements A[n, x] are the number of possible outcomes of an n-sequence of
independent coin-tosses where the maximum length of consecutive heads is
_not_larger_than x.
cdef np.ndarray[np.uint64_t, ndim=2] result
cdef Py_ssize_t n,x
result = np.empty((N, N), np.uint64)
for x in range(N):
for n in range(0, x+1):
result[n,x]=(2**n)
for n in range(x+1,N):
result[n, x]=result[n-1-x:n,x].sum()
return result
def amatrix(Py_ssize_t N):
Calculate an NxN matrix of the Schilling distribution
The elements a[n, x] are the number of possible outcomes of an n-sequence of
independent coin-tosses where the maximum length of consecutive heads is
_exactly_ x. Thus a[n, x] = A[n, x] - A[n, x-1]
cdef np.ndarray[np.uint64_t, ndim=2] result
cdef Py_ssize_t n,x
cdef Py_ssize_t val
result = np.zeros((N, N), np.uint64)
result[:,0] = 1 # a_n(x=0) = 1
for n in range(N):
result[n,n] = 1 #a_n(x=n) = 1
# a_n(x>n) = 0
for x in range(1, n):
result[n,x] = result[n-1-x:n,x].sum() + result[n-1-x,:x].sum()
return result
def pmatrix(Py_ssize_t N):
Calculate an NxN matrix of the Schilling distribution.
The elements p[n, x] of the resulting matrix are the probabilities that
the length of the longest head-run in a sequence of n independent tosses
of a fair coin is exactly x.
It holds that p[n, x] = a[n, x] / 2 ** n = (A[n, x] - A[n, x-1]) / 2 ** n
Note that the probability that the length of the longest run
(no matter if head or tail) in a sequence of n independent
tosses of a fair coin is _exactly_ x is p[n-1, x-1].
cdef np.ndarray[np.double_t, ndim=2] result
cdef Py_ssize_t n,x,j
cdef double val
result = np.zeros((N, N), np.double)
for n in range(N):
result[n, 0] = 2.0**(-n)
result[n, n] = 2.0**(-n) #p_n(x=n) = 1/2**n
# p_n(x>n) = 0
for x in range(1, n):
val=0
for j in range(n-1-x,n):
val+=2.0**(j-n)*result[j,x]
for j in range(0, x):
val += 2.0**(-x-1)*result[n-1-x,j]
result[n,x] = val
return result
Explanation: Definition of the algorithms
Algorithms according Schilling's paper
I have implemented these in Cython for the sake of speed
End of explanation
def cormap_pval(n, x):
Cormap P-value algorithm, giving the probability that from
n coin-tosses the longest continuous sequence of either heads
or tails is _not_shorter_ than x.
Python version of the original Fortran90 code of Daniel Franke
dbl_max_exponent=np.finfo(np.double).maxexp-1
P=np.zeros(dbl_max_exponent)
if x <=1:
pval = 1
elif x>n:
pval = 0
elif x>dbl_max_exponent:
pval = 0
elif x==n:
pval = 2.0**(1-x)
else:
half_pow_x = 2**(-x)
P[1:x] = 0
i_x = 0
P[i_x]=2*half_pow_x
for i in range(x+1, n+1):
im1_x = i_x # == (i-1) % x
i_x = i % x
P[i_x] = P[im1_x] + half_pow_x * (1-P[i_x])
pval = P[i_x]
return pval
Explanation: The cormap_pval algorithm from Daniel Franke, EMBL Hamburg
End of explanation
N=20
# Calculate the matrix for A using the slow method
A_slow = np.empty((N,N), np.uint64)
for n in range(N):
for x in range(N):
A_slow[n,x]=A(n,x)
A_fast = Amatrix(N)
print('The two matrices are the same:',(np.abs(A_slow - A_fast)).sum()==0)
Explanation: Validation of the Cython-algorithms
Check 1
Amatrix() returns the same values as Schilling's recursive formula for A_n(x)
End of explanation
N=50
a=amatrix(N)
A_fast=Amatrix(N)
A_constructed=np.empty((N,N), np.uint64)
for x in range(N):
A_constructed[:, x]=a[:,:x+1].sum(axis=1)
print('The two matrices are the same:',(np.abs(A_fast - A_constructed).sum()==0))
Explanation: Check 2
$A_n(x) = \sum_{j=0}^{x}a_n(j)$
End of explanation
N=50
p=pmatrix(N)
a=amatrix(N)
p_from_a=np.empty((N,N), np.double)
for n in range(N):
p_from_a[n, :] = a[n, :]/2**n
print('The two matrices are the same:',(np.abs(p - p_from_a).sum()==0))
Explanation: Check 3
$p_n(x) = 2^{-n}a_n(x)$
End of explanation
p=pmatrix(50)
for n in range(1,5):
print('{} toss(es):'.format(n))
for x in range(1,n+1):
print(' p_{}({}) = {}'.format(n,x,p[n-1,x-1]))
Explanation: Check 4
Well-known special cases
1 toss
2 possible outcomes: H, T
<table>
<tr><th>Max length</th><th>Outcomes</th><th>p</th></tr>
<tr><td>1</td><td>2</td><td>1</td></tr>
</table>
2 tosses
4 possible outcomes: HH, HT, TH, TT
<table>
<tr><th>Max length</th><th>Outcomes</th><th>p</th></tr>
<tr><td>1</td><td>2</td><td>0.5</td></tr>
<tr><td>2</td><td>2</td><td>0.5</td></tr>
</table>
3 tosses
8 possible outcomes: HHT, HTT, THT, TTT, HHH, HTH, THH, TTH
<table>
<tr><th>Max length</th><th>Outcomes</th><th>p</th></tr>
<tr><td>1</td><td>2</td><td>0.25</td></tr>
<tr><td>2</td><td>4</td><td>0.5</td></tr>
<tr><td>2</td><td>2</td><td>0.25</td></tr>
</table>
4 tosses
16 possible outcomes:
<table>
<tr><th>Outcome</th><th>Longest sequence length</th></tr>
<tr><td>HHTH</td><td>2</td></tr>
<tr><td>HTTH</td><td>2</td></tr>
<tr><td>THTH</td><td>1</td></tr>
<tr><td>TTTH</td><td>3</td></tr>
<tr><td>HHHH</td><td>4</td></tr>
<tr><td>HTHH</td><td>2</td></tr>
<tr><td>THHH</td><td>3</td></tr>
<tr><td>TTHH</td><td>2</td></tr>
<tr><td>HHTT</td><td>2</td></tr>
<tr><td>HTTT</td><td>3</td></tr>
<tr><td>THTT</td><td>2</td></tr>
<tr><td>TTTT</td><td>4</td></tr>
<tr><td>HHHT</td><td>3</td></tr>
<tr><td>HTHT</td><td>1</td></tr>
<tr><td>THHT</td><td>2</td></tr>
<tr><td>TTHT</td><td>2</td></tr>
</table>
<table>
<tr><th>Max length</th><th>Outcomes</th><th>p</th></tr>
<tr><td>1</td><td>2</td><td>0.125</td></tr>
<tr><td>2</td><td>8</td><td>0.5</td></tr>
<tr><td>3</td><td>4</td><td>0.25</td></tr>
<tr><td>4</td><td>2</td><td>0.125</td></tr>
</table>
End of explanation
Nmax=200
times = np.empty(Nmax)
for i in range(Nmax):
t0=time.monotonic()
p=pmatrix(i)
times[i]=time.monotonic()-t0
plt.loglog(np.arange(Nmax),times,'o-',label='Execution time')
plt.xlabel('N')
plt.ylabel('Execution time of pmatrix(N) (sec)')
x=np.arange(Nmax)
a,b=np.polyfit(np.log(x[x>100]), np.log(times[x>100]),1)
plt.loglog(x,np.exp(np.log(x)*a+b),'r-', label='$\\propto N^{%.3f}$' % a)
plt.legend(loc='best')
Explanation: Estimate the execution time
End of explanation
N=50
p=pmatrix(N+1)
bar(left=np.arange(p.shape[1])-0.5,height=p[N,:])
plt.axis(xmin=0,xmax=14)
plt.xlabel('Length of the longest head-run in {} tosses'.format(N))
plt.ylabel('Probability')
print('Most probable maximum head-run length:',p[N,:].argmax())
Explanation: Some visualization
End of explanation
N=2050
print('Calculating {0}x{0} p-matrix...'.format(N), flush=True)
t0=time.monotonic()
p=pmatrix(N)
t=time.monotonic()-t0
print('Done in {} seconds'.format(t))
np.savez('cointoss_p.npz',p=p)
Explanation: Large numbers
Calculate a large p matrix (takes some time, see above) for use of later computations
End of explanation
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.imshow(p,norm=matplotlib.colors.Normalize(),interpolation='nearest')
plt.xlabel('x')
plt.ylabel('n')
plt.title('Linear colour scale')
plt.colorbar()
plt.subplot(1,2,2)
plt.imshow(p,norm=matplotlib.colors.LogNorm(), interpolation='nearest')
plt.xlabel('x')
plt.ylabel('n')
plt.title('Logarithmic colour scale')
plt.colorbar()
Explanation: Visualize the matrix:
End of explanation
table =[['n', 'x', 'p (D. Franke)', 'p (A. Wacha)']]
for n, x in [(449,137),(449,10),(2039,338),(2039,18),(200,11),(10,2), (1,0), (1,1), (2,0), (2,1), (2,2), (3,0), (3,1), (3,2), (3,3), (4,0), (4,1), (4,2), (4,3), (4,4)]:
table.append([n,x,cormap_pval(n,x), p[n-1,max(0,x-1):].sum()])
tab=ipy_table.IpyTable(table)
tab.apply_theme('basic')
display(tab)
Explanation: Compare with the algorithm in CorMap
Note that in CorMap, the p-value is the probability that the longest continuous sequence of either heads or tails in $n$ coin-tosses is not shorter than $x$, while our pmatrix() gives the probability that it is exactly $x$ long. The desired quantity is obtained from the pmatrix() approach as $\sum_{j \ge(x-1)} p_n(j)$.
End of explanation |
2,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterators
One of the most wonderful things about computers is that we can repeat the same computation for many values automatically. We have already seen at least one iterator, which is not a list... it is another kind of object.
Step1: But range is really not a list. It is an iterator, and learning how it works is useful in several contexts.
Iterating over lists
Step2: In this case, the first thing the loop does is check whether the object on the other side of in is an iterator. This can be checked with the iter function, which is similar to type.
Step3: range()
Step4: And this is how Python treats it as if it were a list
Step5: If a list were to hold a trillion values ($10^{12}$), we would need terabytes of memory to store them.
Some useful iterators
enumerate
Sometimes we want not only to iterate over the values in a list but also to print their index.
Step6: But there is a cleaner syntax for this
Step7: zip
The zip function iterates over two iterables and produces tuples
Step8: If the lists have different lengths, the length of the zip is given by the shorter list.
map and filter
A bit more involved
Step9: The filter iterator takes a function and applies it to all the values of an iterator, returning only the values that "pass" the filter.
Step10: specialized iterators | Python Code:
for i in range(10):
print(i, end=' ')
Explanation: Iterators
One of the most wonderful things about computers is that we can repeat the same computation for many values automatically. We have already seen at least one iterator, which is not a list... it is another kind of object.
End of explanation
for value in [2, 4, 6, 8, 10]:
# do some operation
print(value + 1, end=' ')
Explanation: But range is really not a list. It is an iterator, and learning how it works is useful in several contexts.
Iterating over lists
End of explanation
iter([2, 4, 6, 8, 10])
I = iter([2, 4, 6, 8, 10])
print(next(I))
print(next(I))
Explanation: In this case, the first thing the loop does is check whether the object on the other side of in is an iterator. This can be checked with the iter function, which is similar to type.
End of explanation
range(10)
iter(range(10))
Explanation: range(): a list is not always a list
range(), like a list, exposes an iterator:
End of explanation
N = 10 ** 12
for i in range(N):
if i >= 10: break
print(i, end=', ')
Explanation: And this is how Python treats it as if it were a list:
End of explanation
L = [2, 4, 6, 8, 10]
for i in range(len(L)):
print(i, L[i])
Explanation: If a list were to hold a trillion values ($10^{12}$), we would need terabytes of memory to store them.
Some useful iterators
enumerate
Sometimes we want not only to iterate over the values in a list but also to print their index.
End of explanation
for i, val in enumerate(L):
print(i, val)
Explanation: But there is a cleaner syntax for this:
End of explanation
L = [2, 4, 6, 8, 10]
R = [3, 6, 9, 12, 15]
for lval, rval in zip(L, R):
print(lval, rval)
Explanation: zip
The zip function iterates over two iterables and produces tuples:
End of explanation
# find the first 10 square numbers
square = lambda x: x ** 2
for val in map(square, range(10)):
print(val, end=' ')
Explanation: If the lists have different lengths, the length of the zip is given by the shorter list.
map and filter
A bit more involved: the map iterator takes a function and applies it to all the values of an iterator:
End of explanation
# find values up to 10 for which x % 2 is zero
is_even = lambda x: x % 2 == 0
for val in filter(is_even, range(10)):
print(val, end=' ')
Explanation: The filter iterator takes a function and applies it to all the values of an iterator, returning only the values that "pass" the filter.
End of explanation
from itertools import permutations
p = permutations(range(3))
print(*p)
from itertools import combinations
c = combinations(range(4), 2)
print(*c)
from itertools import product
p = product('ab', range(3))
print(*p)
Explanation: specialized iterators: itertools
We already saw count from itertools. This module contains a lot of useful functions. For example, here we look at itertools.permutations, itertools.combinations and itertools.product.
End of explanation |
2,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Improving Neural Network Performance
To improve a neural network's prediction performance and convergence behaviour, the following additional points need to be considered.
Improving the error (objective) function
Step1: Cross-Entropy Cost Function
One way to solve this slow-convergence problem is to use a cross-entropy cost function instead of the sum-of-squares form.
$$
\begin{eqnarray}
C = -\frac{1}{n} \sum_x \left[y \ln z + (1-y) \ln (1-z) \right],
\end{eqnarray}
$$
Its derivatives are as follows.
$$
\begin{eqnarray}
\frac{\partial C}{\partial w_j} & = & -\frac{1}{n} \sum_x \left(
\frac{y }{z} -\frac{(1-y)}{1-z} \right)
\frac{\partial z}{\partial w_j} \\
& = & -\frac{1}{n} \sum_x \left(
\frac{y}{\sigma(a)}
-\frac{(1-y)}{1-\sigma(a)} \right)\sigma'(a) x_j \\
& = &
\frac{1}{n}
\sum_x \frac{\sigma'(a) x_j}{\sigma(a) (1-\sigma(a))}
(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x x_j(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x (z-y) x_j \\
\frac{\partial C}{\partial b} &=& \frac{1}{n} \sum_x (z-y)
\end{eqnarray}
$$
As this expression shows, because the gradient is proportional to the prediction error $z-y$,
convergence is fast when the error is large,
and the speed decreases when the error is small, which prevents divergence.
Cross-entropy implementation example
https
Step6: The Overfitting Problem
Neural network models have far more parameters than other models.
* (28x28)x(30)x(10) => 24,000
* (28x28)x(100)x(10) => 80,000
With this many parameters, the chance of overfitting increases; that is, the cost function keeps decreasing even though the accuracy stops improving or gets worse.
Example
Step10: Hyper-Tangent Activation and Rectified Linear Unit (ReLu) Activation
Besides the sigmoid, hyper-tangent and ReLu activation functions can also be used.
The hyper-tangent activation function can take negative values and generally converges faster than the sigmoid activation function.
$$
\begin{eqnarray}
\tanh(w \cdot x+b),
\end{eqnarray}
$$
$$
\begin{eqnarray}
\tanh(a) \equiv \frac{e^a-e^{-a}}{e^a+e^{-a}}.
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sigma(a) = \frac{1+\tanh(a/2)}{2},
\end{eqnarray}
$$
Step11: The Rectified Linear Unit (ReLu) activation allows activation values of unbounded size, and has the advantage that its gradient does not go to zero and vanish even when the weighted sum $a$ is large.
$$
\begin{eqnarray}
\max(0, w \cdot x+b).
\end{eqnarray}
$$ | Python Code:
sigmoid = lambda x: 1/(1+np.exp(-x))
sigmoid_prime = lambda x: sigmoid(x)*(1-sigmoid(x))
xx = np.linspace(-10, 10, 1000)
plt.plot(xx, sigmoid(xx));
plt.plot(xx, sigmoid_prime(xx));
Explanation: Improving Neural Network Performance
To improve a neural network's prediction performance and convergence behaviour, the following additional points need to be considered.
Improving the error (objective) function: cross-entropy cost function
Regularization
Weight initialization
Softmax output
Choice of activation function: hyper-tangent and ReLu
The gradient and convergence-speed problem
The commonly used sum-of-squares cost function has the drawback that its gradient is near zero over most of the range, so convergence becomes slow.
http://neuralnetworksanddeeplearning.com/chap3.html
$$
\begin{eqnarray}
z = \sigma (wx+b)
\end{eqnarray}
$$
$$
\begin{eqnarray}
C = \frac{(y-z)^2}{2},
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & (z-y)\sigma'(a) x \\
\frac{\partial C}{\partial b} & = & (z-y)\sigma'(a)
\end{eqnarray}
$$
if $x=1$, $y=0$,
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & z \sigma'(a) \\
\frac{\partial C}{\partial b} & = & z \sigma'(a)
\end{eqnarray}
$$
$\sigma'$ is close to zero over most of its range.
End of explanation
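To see this saturation numerically rather than only from the plot, a quick check with the sigmoid_prime defined above is enough (a small illustrative snippet, not part of the original notebook):
# How small does the gradient factor sigma'(a) get once the unit saturates?
for a in [0.0, 2.0, 5.0, 10.0]:
    print("a = {:5.1f}  sigma'(a) = {:.6f}".format(a, float(sigmoid_prime(a))))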
%cd /home/dockeruser/neural-networks-and-deep-learning/src
%ls
import mnist_loader
import network2
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
net = network2.Network([784, 30, 10], cost=network2.QuadraticCost)
net.large_weight_initializer()
%time result1 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True)
net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net.large_weight_initializer()
%time result2 = net.SGD(training_data, 10, 10, 0.5, evaluation_data=test_data, monitor_evaluation_accuracy=True)
plt.plot(result1[1], 'bo-', label="quadratic cost")
plt.plot(result2[1], 'rs-', label="cross-entropy cost")
plt.legend(loc=0)
plt.show()
Explanation: Cross-Entropy Cost Function
One way to solve this slow-convergence problem is to use a cross-entropy cost function instead of the sum-of-squares form.
$$
\begin{eqnarray}
C = -\frac{1}{n} \sum_x \left[y \ln z + (1-y) \ln (1-z) \right],
\end{eqnarray}
$$
Its derivatives are as follows.
$$
\begin{eqnarray}
\frac{\partial C}{\partial w_j} & = & -\frac{1}{n} \sum_x \left(
\frac{y }{z} -\frac{(1-y)}{1-z} \right)
\frac{\partial z}{\partial w_j} \\
& = & -\frac{1}{n} \sum_x \left(
\frac{y}{\sigma(a)}
-\frac{(1-y)}{1-\sigma(a)} \right)\sigma'(a) x_j \\
& = &
\frac{1}{n}
\sum_x \frac{\sigma'(a) x_j}{\sigma(a) (1-\sigma(a))}
(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x x_j(\sigma(a)-y) \\
& = & \frac{1}{n} \sum_x (z-y) x_j \\
\frac{\partial C}{\partial b} &=& \frac{1}{n} \sum_x (z-y)
\end{eqnarray}
$$
As this expression shows, because the gradient is proportional to the prediction error $z-y$,
convergence is fast when the error is large,
and the speed decreases when the error is small, which prevents divergence.
Cross-entropy implementation example
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py
```python
#### Define the quadratic and cross-entropy cost functions

class QuadraticCost(object):

    @staticmethod
    def fn(a, y):
        """Return the cost associated with an output ``a`` and desired output
        ``y``.
        """
        return 0.5*np.linalg.norm(a-y)**2

    @staticmethod
    def delta(z, a, y):
        """Return the error delta from the output layer."""
        return (a-y) * sigmoid_prime(z)


class CrossEntropyCost(object):

    @staticmethod
    def fn(a, y):
        """Return the cost associated with an output ``a`` and desired output
        ``y``.  Note that np.nan_to_num is used to ensure numerical
        stability.  In particular, if both ``a`` and ``y`` have a 1.0
        in the same slot, then the expression (1-y)*np.log(1-a)
        returns nan.  The np.nan_to_num ensures that that is converted
        to the correct value (0.0).
        """
        return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a)))

    @staticmethod
    def delta(z, a, y):
        """Return the error delta from the output layer.  Note that the
        parameter ``z`` is not used by the method.  It is included in
        the method's parameters in order to make the interface
        consistent with the delta method for other cost classes.
        """
        return (a-y)
```
End of explanation
from ipywidgets import interactive
from IPython.display import Audio, display
def softmax_plot(z1=0, z2=0, z3=0, z4=0):
exps = np.array([np.exp(z1), np.exp(z2), np.exp(z3), np.exp(z4)])
exp_sum = exps.sum()
plt.bar(range(len(exps)), exps/exp_sum)
plt.xlim(-0.3, 4.1)
plt.ylim(0, 1)
plt.xticks([])
v = interactive(softmax_plot, z1=(-3, 5, 0.01), z2=(-3, 5, 0.01), z3=(-3, 5, 0.01), z4=(-3, 5, 0.01))
display(v)
Explanation: The Overfitting Problem
Neural network models have far more parameters than most other models.
* (28x28)x(30)x(10) => 24,000
* (28x28)x(100)x(10) => 80,000
With this many parameters the chance of overfitting increases; that is, the cost function keeps decreasing even though the accuracy stops improving or gets worse.
Example:
```python
net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
net.large_weight_initializer()
net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data,
monitor_evaluation_accuracy=True, monitor_training_cost=True)
```
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting1.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting3.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting4.png" style="width:90%;">
<img src="http://neuralnetworksanddeeplearning.com/images/overfitting2.png" style="width:90%;">
L2 Regularization
To prevent this kind of overfitting, a regularization term is added to the cost function as follows.
$$
\begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln
(1-z^L_j)\right] + \frac{\lambda}{2n} \sum_i w_i^2
\end{eqnarray}
$$
or
$$
\begin{eqnarray} C = C_0 + \frac{\lambda}{2n}
\sum_i w_i^2,
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} & = & \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} w \
\frac{\partial C}{\partial b} & = & \frac{\partial C_0}{\partial b}
\end{eqnarray}
$$
$$
\begin{eqnarray}
w & \rightarrow & w-\eta \frac{\partial C_0}{\partial w}-\frac{\eta \lambda}{n} w \
& = & \left(1-\frac{\eta \lambda}{n}\right) w -\eta \frac{\partial C_0}{\partial w}
\end{eqnarray}
$$
L2 regularization implementation example
```python
def total_cost(self, data, lmbda, convert=False):
    """Return the total cost for the data set ``data``.  The flag
    ``convert`` should be set to False if the data set is the
    training data (the usual case), and to True if the data set is
    the validation or test data.  See comments on the similar (but
    reversed) convention for the ``accuracy`` method, above.
    """
    cost = 0.0
    for x, y in data:
        a = self.feedforward(x)
        if convert: y = vectorized_result(y)
        cost += self.cost.fn(a, y)/len(data)
    cost += 0.5*(lmbda/len(data))*sum(np.linalg.norm(w)**2 for w in self.weights)
    return cost

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """Update the network's weights and biases by applying gradient
    descent using backpropagation to a single mini batch.  The
    ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
    learning rate, ``lmbda`` is the regularization parameter, and
    ``n`` is the total size of the training data set.
    """
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)]
```
```python
net.SGD(training_data[:1000], 400, 10, 0.5, evaluation_data=test_data, lmbda = 0.1,
        monitor_evaluation_cost=True, monitor_evaluation_accuracy=True,
        monitor_training_cost=True, monitor_training_accuracy=True)
```
<img src="http://neuralnetworksanddeeplearning.com/images/regularized1.png" style="width:90%;" >
<img src="http://neuralnetworksanddeeplearning.com/images/regularized2.png" style="width:90%;" >
L1 Regularization
Instead of L2 regularization, the following L1 regularization can also be used.
$$
\begin{eqnarray} C = -\frac{1}{n} \sum_{j} \left[ y_j \ln z^L_j+(1-y_j) \ln
(1-z^L_j)\right] + \frac{\lambda}{n} \sum_i \| w_i \|
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial C}{\partial w} = \frac{\partial C_0}{\partial w} + \frac{\lambda}{n} \, {\rm sgn}(w)
\end{eqnarray}
$$
$$
\begin{eqnarray}
w \rightarrow w' = w-\frac{\eta \lambda}{n} \mbox{sgn}(w) - \eta \frac{\partial C_0}{\partial w}
\end{eqnarray}
$$
Dropout Regularization
Dropout regularization randomly drops $100p$% of the hidden-layer neurons (usually half) at each epoch so that they are excluded from that optimization step. This keeps the weights from co-adapting (moving together) and provides a model-averaging effect.
<img src="http://neuralnetworksanddeeplearning.com/images/tikz31.png">
Once training is finished, the weights are scaled by multiplying them by $p$ at test time.
<img src="https://datascienceschool.net/upfiles/8e5177d1e7dd46a69d5b316ee8748e00.png">
Weight Initialization
As the number of inputs to a neuron $n_{in}$ grows, the standard deviation of the weighted sum $a$ grows as well.
$$ \text{std}(a) \propto \sqrt{n_{in}} $$
<img src="http://neuralnetworksanddeeplearning.com/images/tikz32.png">
For example, with 1000 inputs of which half are 1, the standard deviation is about 22.4.
$$ \sqrt{501} \approx 22.4 $$
<img src="https://docs.google.com/drawings/d/1PZwr7wS_3gg7bXtp16XaZCbvxj4tMrfcbCf6GJhaX_0/pub?w=608&h=153">
Because such a large standard deviation slows convergence, the standard deviation of the initial weights needs to be reduced according to the number of inputs, by the factor
$$\dfrac{1}{\sqrt{n_{in}} }$$
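A small standalone NumPy check of this effect (a sketch, not part of the original code): with 1000 inputs of which about half are 1, naive $N(0,1)$ weights give a weighted sum with standard deviation near $\sqrt{501} \approx 22.4$, while scaling the weights by $1/\sqrt{n_{in}}$ brings it back to order 1.
```python
import numpy as np

rng = np.random.default_rng(0)
n_in = 1000
x = np.zeros(n_in)
x[:500] = 1.0  # 500 of the 1000 inputs are 1; the bias adds one more unit-variance term

naive = [rng.normal(size=n_in) @ x + rng.normal() for _ in range(2000)]
scaled = [(rng.normal(size=n_in)/np.sqrt(n_in)) @ x + rng.normal() for _ in range(2000)]

print(np.std(naive))   # close to sqrt(501) ~ 22.4
print(np.std(scaled))  # on the order of 1
```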
Weight initialization implementation example
```python
def default_weight_initializer(self):
    """Initialize each weight using a Gaussian distribution with mean 0
    and standard deviation 1 over the square root of the number of
    weights connecting to the same neuron.  Initialize the biases
    using a Gaussian distribution with mean 0 and standard
    deviation 1.

    Note that the first layer is assumed to be an input layer, and
    by convention we won't set any biases for those neurons, since
    biases are only ever used in computing the outputs from later
    layers.
    """
    self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
    self.weights = [np.random.randn(y, x)/np.sqrt(x) for x, y in zip(self.sizes[:-1], self.sizes[1:])]
```
<img src="http://neuralnetworksanddeeplearning.com/images/weight_initialization_30.png" style="width:90%;">
Softmax Output
The softmax function is a function with multiple inputs and multiple outputs. It rescales the outputs so that they sum to 1 without changing which output is largest, which gives the outputs a probabilistic interpretation. It is usually applied at the final output layer of a neural network.
$$
\begin{eqnarray}
y^L_j = \frac{e^{a^L_j}}{\sum_k e^{a^L_k}},
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sum_j y^L_j & = & \frac{\sum_j e^{a^L_j}}{\sum_k e^{a^L_k}} = 1
\end{eqnarray}
$$
<img src="https://www.tensorflow.org/versions/master/images/softmax-regression-scalargraph.png" style="width:60%;">
End of explanation
z = np.linspace(-5, 5, 100)
a = np.tanh(z)
plt.plot(z, a)
plt.show()
Explanation: Hyper-Tangent Activation and Rectified Linear Unit (ReLU) Activation
Besides the sigmoid, the hyperbolic tangent and ReLU functions can also be used as activation functions.
The hyperbolic tangent activation function can take negative values and generally converges faster than the sigmoid activation function.
$$
\begin{eqnarray}
\tanh(w \cdot x+b),
\end{eqnarray}
$$
$$
\begin{eqnarray}
\tanh(a) \equiv \frac{e^a-e^{-a}}{e^a+e^{-a}}.
\end{eqnarray}
$$
$$
\begin{eqnarray}
\sigma(a) = \frac{1+\tanh(a/2)}{2},
\end{eqnarray}
$$
End of explanation
z = np.linspace(-5, 5, 100)
a = np.maximum(z, 0)
plt.plot(z, a)
plt.show()
Explanation: The Rectified Linear Unit (ReLU) activation function allows activations of unbounded magnitude and has the advantage that its gradient does not go to zero and vanish even when the weighted sum $a$ is large.
$$
\begin{eqnarray}
\max(0, w \cdot x+b).
\end{eqnarray}
$$
End of explanation |
2,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Boston Light Swim temperature analysis with Python
In the past we demonstrated how to perform a CSW catalog search with OWSLib,
and how to obtain near real-time data with pyoos.
In this notebook we will use both to find all observations and model data around the Boston Harbor to access the sea water temperature.
This workflow is part of an example to advise swimmers of the annual Boston lighthouse swim of the Boston Harbor water temperature conditions prior to the race. For more information regarding the workflow presented here see Signell, Richard P.; Fernandes, Filipe; Wilcox, Kyle. 2016. "Dynamic Reusable Workflows for Ocean Science." J. Mar. Sci. Eng. 4, no. 4
Step1: This notebook is quite big and complex,
so to help us keep things organized we'll define a cell with the most important options and switches.
Below we can define the date,
bounding box, phenomena SOS and CF names and units,
and the catalogs we will search.
Step2: We'll print some of the search configuration options along the way to keep track of them.
Step3: We already created an OWSLib.fes filter before.
The main difference here is that we do not want the atmosphere model data,
so we are filtering out all the GRIB-2 data format.
Step4: In the cell below we ask the catalog for all the returns that match the filter and have an OPeNDAP endpoint.
Step5: We found some models, and observations from NERACOOS there.
However, we do know that there are some buoys from NDBC and CO-OPS available too.
Also, those NERACOOS observations seem to be from a CTD mounted at 65 meters below the sea surface, rendering them useless for our purpose.
So let's use the catalog only for the models by filtering the observations with is_station below.
And we'll rely on CO-OPS and NDBC services for the observations.
Step6: Now we can use pyoos collectors for NdbcSos,
Step7: and CoopsSos.
Step8: We will join all the observations into an uniform series, interpolated to 1-hour interval, for the model-data comparison.
This step is necessary because the observations can be 7 or 10 minutes resolution,
while the models can be 30 to 60 minutes.
Step9: In this next cell we will save the data for quicker access later.
Step10: Taking a quick look at the observations
Step11: Now it is time to loop the models we found above,
Step12: Next, we will match them with the nearest observed time-series. The max_dist=0.08 is in degrees, that is roughly 8 kilometers.
Step13: Now it is possible to compute some simple comparison metrics. First we'll calculate the model mean bias
Step14: And the root mean squared error of the deviations from the mean
Step15: The next 2 cells make the scores "pretty" for plotting.
Step16: The cells from [20] to [25] create a folium map with bokeh for the time-series at the observed points.
Note that we did mark the nearest model cell location used in the comparison.
Step17: Here we use a dictionary with some models we expect to find so we can create a better legend for the plots. If any new models are found, we will use its filename in the legend as a default until we can go back and add a short name to our library. | Python Code:
import warnings
# Suppresing warnings for a "pretty output."
warnings.simplefilter("ignore")
Explanation: The Boston Light Swim temperature analysis with Python
In the past we demonstrated how to perform a CSW catalog search with OWSLib,
and how to obtain near real-time data with pyoos.
In this notebook we will use both to find all observations and model data around the Boston Harbor to access the sea water temperature.
This workflow is part of an example to advise swimmers of the annual Boston lighthouse swim of the Boston Harbor water temperature conditions prior to the race. For more information regarding the workflow presented here see Signell, Richard P.; Fernandes, Filipe; Wilcox, Kyle. 2016. "Dynamic Reusable Workflows for Ocean Science." J. Mar. Sci. Eng. 4, no. 4: 68.
End of explanation
%%writefile config.yaml
# Specify a YYYY-MM-DD hh:mm:ss date or integer day offset.
# If both start and stop are offsets they will be computed relative to datetime.today() at midnight.
# Use the dates commented below to reproduce the last Boston Light Swim event forecast.
date:
start: -5 # 2016-8-16 00:00:00
stop: +4 # 2016-8-29 00:00:00
run_name: 'latest'
# Boston harbor.
region:
bbox: [-71.3, 42.03, -70.57, 42.63]
# Try the bounding box below to see how the notebook will behave for a different region.
#bbox: [-74.5, 40, -72., 41.5]
crs: 'urn:ogc:def:crs:OGC:1.3:CRS84'
sos_name: 'sea_water_temperature'
cf_names:
- sea_water_temperature
- sea_surface_temperature
- sea_water_potential_temperature
- equivalent_potential_temperature
- sea_water_conservative_temperature
- pseudo_equivalent_potential_temperature
units: 'celsius'
catalogs:
- https://data.ioos.us/csw
Explanation: This notebook is quite big and complex,
so to help us keep things organized we'll define a cell with the most important options and switches.
Below we can define the date,
bounding box, phenomena SOS and CF names and units,
and the catalogs we will search.
End of explanation
import os
import shutil
from datetime import datetime
from ioos_tools.ioos import parse_config
config = parse_config("config.yaml")
# Saves downloaded data into a temporary directory.
save_dir = os.path.abspath(config["run_name"])
if os.path.exists(save_dir):
shutil.rmtree(save_dir)
os.makedirs(save_dir)
fmt = "{:*^64}".format
print(fmt("Saving data inside directory {}".format(save_dir)))
print(fmt(" Run information "))
print("Run date: {:%Y-%m-%d %H:%M:%S}".format(datetime.utcnow()))
print("Start: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["start"]))
print("Stop: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["stop"]))
print(
"Bounding box: {0:3.2f}, {1:3.2f},"
"{2:3.2f}, {3:3.2f}".format(*config["region"]["bbox"])
)
Explanation: We'll print some of the search configuration options along the way to keep track of them.
End of explanation
def make_filter(config):
from owslib import fes
from ioos_tools.ioos import fes_date_filter
kw = dict(
wildCard="*", escapeChar="\\", singleChar="?", propertyname="apiso:AnyText"
)
or_filt = fes.Or(
[fes.PropertyIsLike(literal=("*%s*" % val), **kw) for val in config["cf_names"]]
)
not_filt = fes.Not([fes.PropertyIsLike(literal="GRIB-2", **kw)])
begin, end = fes_date_filter(config["date"]["start"], config["date"]["stop"])
bbox_crs = fes.BBox(config["region"]["bbox"], crs=config["region"]["crs"])
filter_list = [fes.And([bbox_crs, begin, end, or_filt, not_filt])]
return filter_list
filter_list = make_filter(config)
Explanation: We already created an OWSLib.fes filter before.
The main difference here is that we do not want the atmosphere model data,
so we are filtering out all the GRIB-2 data format.
End of explanation
from ioos_tools.ioos import get_csw_records, service_urls
from owslib.csw import CatalogueServiceWeb
dap_urls = []
print(fmt(" Catalog information "))
for endpoint in config["catalogs"]:
print("URL: {}".format(endpoint))
try:
csw = CatalogueServiceWeb(endpoint, timeout=120)
except Exception as e:
print("{}".format(e))
continue
csw = get_csw_records(csw, filter_list, esn="full")
OPeNDAP = service_urls(csw.records, identifier="OPeNDAP:OPeNDAP")
odp = service_urls(
csw.records, identifier="urn:x-esri:specification:ServiceType:odp:url"
)
dap = OPeNDAP + odp
dap_urls.extend(dap)
print("Number of datasets available: {}".format(len(csw.records.keys())))
for rec, item in csw.records.items():
print("{}".format(item.title))
if dap:
print(fmt(" DAP "))
for url in dap:
print("{}.html".format(url))
print("\n")
# Get only unique endpoints.
dap_urls = list(set(dap_urls))
Explanation: In the cell below we ask the catalog for all the returns that match the filter and have an OPeNDAP endpoint.
End of explanation
from ioos_tools.ioos import is_station
from timeout_decorator import TimeoutError
# Filter out some station endpoints.
non_stations = []
for url in dap_urls:
url = f"{url}#fillmismatch"
try:
if not is_station(url):
non_stations.append(url)
except (IOError, OSError, RuntimeError, TimeoutError) as e:
print("Could not access URL {}.html\n{!r}".format(url, e))
dap_urls = non_stations
print(fmt(" Filtered DAP "))
for url in dap_urls:
print("{}.html".format(url))
Explanation: We found some models, and observations from NERACOOS there.
However, we do know that there are some buoys from NDBC and CO-OPS available too.
Also, those NERACOOS observations seem to be from a CTD mounted at 65 meters below the sea surface, rendering them useless for our purpose.
So let's use the catalog only for the models by filtering the observations with is_station below.
And we'll rely on CO-OPS and NDBC services for the observations.
End of explanation
from pyoos.collectors.ndbc.ndbc_sos import NdbcSos
collector_ndbc = NdbcSos()
collector_ndbc.set_bbox(config["region"]["bbox"])
collector_ndbc.end_time = config["date"]["stop"]
collector_ndbc.start_time = config["date"]["start"]
collector_ndbc.variables = [config["sos_name"]]
ofrs = collector_ndbc.server.offerings
title = collector_ndbc.server.identification.title
print(fmt(" NDBC Collector offerings "))
print("{}: {} offerings".format(title, len(ofrs)))
import pandas as pd
from ioos_tools.ioos import collector2table
ndbc = collector2table(
collector=collector_ndbc, config=config, col="sea_water_temperature (C)"
)
if ndbc:
data = dict(
station_name=[s._metadata.get("station_name") for s in ndbc],
station_code=[s._metadata.get("station_code") for s in ndbc],
sensor=[s._metadata.get("sensor") for s in ndbc],
lon=[s._metadata.get("lon") for s in ndbc],
lat=[s._metadata.get("lat") for s in ndbc],
depth=[s._metadata.get("depth") for s in ndbc],
)
table = pd.DataFrame(data).set_index("station_code")
table
Explanation: Now we can use pyoos collectors for NdbcSos,
End of explanation
from pyoos.collectors.coops.coops_sos import CoopsSos
collector_coops = CoopsSos()
collector_coops.set_bbox(config["region"]["bbox"])
collector_coops.end_time = config["date"]["stop"]
collector_coops.start_time = config["date"]["start"]
collector_coops.variables = [config["sos_name"]]
ofrs = collector_coops.server.offerings
title = collector_coops.server.identification.title
print(fmt(" Collector offerings "))
print("{}: {} offerings".format(title, len(ofrs)))
coops = collector2table(
collector=collector_coops, config=config, col="sea_water_temperature (C)"
)
if coops:
data = dict(
station_name=[s._metadata.get("station_name") for s in coops],
station_code=[s._metadata.get("station_code") for s in coops],
sensor=[s._metadata.get("sensor") for s in coops],
lon=[s._metadata.get("lon") for s in coops],
lat=[s._metadata.get("lat") for s in coops],
depth=[s._metadata.get("depth") for s in coops],
)
table = pd.DataFrame(data).set_index("station_code")
table
Explanation: and CoopsSos.
End of explanation
data = ndbc + coops
index = pd.date_range(
start=config["date"]["start"].replace(tzinfo=None),
end=config["date"]["stop"].replace(tzinfo=None),
freq="1H",
)
# Preserve metadata with `reindex`.
observations = []
for series in data:
_metadata = series._metadata
series.index = series.index.tz_localize(None)
obs = series.reindex(index=index, limit=1, method="nearest")
obs._metadata = _metadata
observations.append(obs)
Explanation: We will join all the observations into a uniform series, interpolated to a 1-hour interval, for the model-data comparison.
This step is necessary because the observations can be 7 or 10 minutes resolution,
while the models can be 30 to 60 minutes.
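For illustration, the reindexing step above behaves like this toy example (a sketch with made-up values): a 10-minute series is mapped onto an hourly index, taking the nearest observation within one step.
```python
import numpy as np
import pandas as pd

ten_min = pd.Series(
    np.linspace(13.0, 14.2, 13),
    index=pd.date_range("2016-08-16 00:00", periods=13, freq="10min"),
)
hourly = pd.date_range("2016-08-16 00:00", "2016-08-16 02:00", freq="1H")

print(ten_min.reindex(index=hourly, limit=1, method="nearest"))
```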
End of explanation
import iris
from ioos_tools.tardis import series2cube
attr = dict(
featureType="timeSeries",
Conventions="CF-1.6",
standard_name_vocabulary="CF-1.6",
cdm_data_type="Station",
comment="Data from http://opendap.co-ops.nos.noaa.gov",
)
cubes = iris.cube.CubeList([series2cube(obs, attr=attr) for obs in observations])
outfile = os.path.join(save_dir, "OBS_DATA.nc")
iris.save(cubes, outfile)
Explanation: In this next cell we will save the data for quicker access later.
End of explanation
%matplotlib inline
ax = pd.concat(data).plot(figsize=(11, 2.25))
Explanation: Taking a quick look at the observations:
End of explanation
from ioos_tools.ioos import get_model_name
from ioos_tools.tardis import get_surface, is_model, proc_cube, quick_load_cubes
from iris.exceptions import ConstraintMismatchError, CoordinateNotFoundError, MergeError
print(fmt(" Models "))
cubes = dict()
for k, url in enumerate(dap_urls):
print("\n[Reading url {}/{}]: {}".format(k + 1, len(dap_urls), url))
try:
cube = quick_load_cubes(url, config["cf_names"], callback=None, strict=True)
if is_model(cube):
cube = proc_cube(
cube,
bbox=config["region"]["bbox"],
time=(config["date"]["start"], config["date"]["stop"]),
units=config["units"],
)
else:
print("[Not model data]: {}".format(url))
continue
cube = get_surface(cube)
mod_name = get_model_name(url)
cubes.update({mod_name: cube})
except (
RuntimeError,
ValueError,
ConstraintMismatchError,
CoordinateNotFoundError,
IndexError,
) as e:
print("Cannot get cube for: {}\n{}".format(url, e))
Explanation: Now it is time to loop the models we found above,
End of explanation
import iris
from ioos_tools.tardis import (
add_station,
ensure_timeseries,
get_nearest_water,
make_tree,
remove_ssh,
)
from iris.pandas import as_series
for mod_name, cube in cubes.items():
fname = "{}.nc".format(mod_name)
fname = os.path.join(save_dir, fname)
print(fmt(" Downloading to file {} ".format(fname)))
try:
tree, lon, lat = make_tree(cube)
except CoordinateNotFoundError:
print("Cannot make KDTree for: {}".format(mod_name))
continue
# Get model series at observed locations.
raw_series = dict()
for obs in observations:
obs = obs._metadata
station = obs["station_code"]
try:
kw = dict(k=10, max_dist=0.08, min_var=0.01)
args = cube, tree, obs["lon"], obs["lat"]
try:
series, dist, idx = get_nearest_water(*args, **kw)
except RuntimeError as e:
print("Cannot download {!r}.\n{}".format(cube, e))
series = None
except ValueError:
status = "No Data"
print("[{}] {}".format(status, obs["station_name"]))
continue
if not series:
status = "Land "
else:
raw_series.update({station: series})
series = as_series(series)
status = "Water "
print("[{}] {}".format(status, obs["station_name"]))
if raw_series: # Save cube.
for station, cube in raw_series.items():
cube = add_station(cube, station)
cube = remove_ssh(cube)
try:
cube = iris.cube.CubeList(raw_series.values()).merge_cube()
except MergeError as e:
print(e)
ensure_timeseries(cube)
try:
iris.save(cube, fname)
except AttributeError:
# FIXME: we should patch the bad attribute instead of removing everything.
cube.attributes = {}
iris.save(cube, fname)
del cube
print("Finished processing [{}]".format(mod_name))
Explanation: Next, we will match them with the nearest observed time-series. The max_dist=0.08 is in degrees, that is roughly 8 kilometers.
End of explanation
from ioos_tools.ioos import stations_keys
def rename_cols(df, config):
cols = stations_keys(config, key="station_name")
return df.rename(columns=cols)
from ioos_tools.ioos import load_ncs
from ioos_tools.skill_score import apply_skill, mean_bias
dfs = load_ncs(config)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
skill_score = dict(mean_bias=df.to_dict())
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
Explanation: Now it is possible to compute some simple comparison metrics. First we'll calculate the model mean bias:
$$ \text{MB} = \mathbf{\overline{m}} - \mathbf{\overline{o}}$$
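In plain NumPy the mean bias for a single station is simply (a sketch with toy numbers):
```python
import numpy as np

model = np.array([14.2, 14.8, 15.1, 15.6])  # modeled temperatures (made-up values)
obs = np.array([13.9, 14.5, 15.0, 15.2])    # observed temperatures (made-up values)

mean_bias = model.mean() - obs.mean()
print(mean_bias)  # positive -> model is warmer than the observations on average
```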
End of explanation
from ioos_tools.skill_score import rmse
dfs = load_ncs(config)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
skill_score["rmse"] = df.to_dict()
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
Explanation: And the root mean squared error of the deviations from the mean:
$$ \text{CRMS} = \sqrt{\left(\mathbf{m'} - \mathbf{o'}\right)^2}$$
where: $\mathbf{m'} = \mathbf{m} - \mathbf{\overline{m}}$ and $\mathbf{o'} = \mathbf{o} - \mathbf{\overline{o}}$
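The same toy numbers for the unbiased root mean squared error (a sketch; the means are removed before differencing):
```python
import numpy as np

model = np.array([14.2, 14.8, 15.1, 15.6])
obs = np.array([13.9, 14.5, 15.0, 15.2])

m_prime = model - model.mean()
o_prime = obs - obs.mean()
crmse = np.sqrt(np.mean((m_prime - o_prime)**2))
print(crmse)
```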
End of explanation
import pandas as pd
# Stringfy keys.
for key in skill_score.keys():
skill_score[key] = {str(k): v for k, v in skill_score[key].items()}
mean_bias = pd.DataFrame.from_dict(skill_score["mean_bias"])
mean_bias = mean_bias.applymap("{:.2f}".format).replace("nan", "--")
skill_score = pd.DataFrame.from_dict(skill_score["rmse"])
skill_score = skill_score.applymap("{:.2f}".format).replace("nan", "--")
import folium
from ioos_tools.ioos import get_coordinates
def make_map(bbox, **kw):
line = kw.pop("line", True)
layers = kw.pop("layers", True)
zoom_start = kw.pop("zoom_start", 5)
lon = (bbox[0] + bbox[2]) / 2
lat = (bbox[1] + bbox[3]) / 2
m = folium.Map(
width="100%", height="100%", location=[lat, lon], zoom_start=zoom_start
)
if layers:
url = "http://oos.soest.hawaii.edu/thredds/wms/hioos/satellite/dhw_5km"
w = folium.WmsTileLayer(
url,
name="Sea Surface Temperature",
fmt="image/png",
layers="CRW_SST",
attr="PacIOOS TDS",
overlay=True,
transparent=True,
)
w.add_to(m)
if line:
p = folium.PolyLine(
get_coordinates(bbox), color="#FF0000", weight=2, opacity=0.9,
)
p.add_to(m)
return m
bbox = config["region"]["bbox"]
m = make_map(bbox, zoom_start=11, line=True, layers=True)
Explanation: The next 2 cells make the scores "pretty" for plotting.
End of explanation
all_obs = stations_keys(config)
from glob import glob
from operator import itemgetter
import iris
from folium.plugins import MarkerCluster
iris.FUTURE.netcdf_promote = True
big_list = []
for fname in glob(os.path.join(save_dir, "*.nc")):
if "OBS_DATA" in fname:
continue
cube = iris.load_cube(fname)
model = os.path.split(fname)[1].split("-")[-1].split(".")[0]
lons = cube.coord(axis="X").points
lats = cube.coord(axis="Y").points
stations = cube.coord("station_code").points
models = [model] * lons.size
lista = zip(models, lons.tolist(), lats.tolist(), stations.tolist())
big_list.extend(lista)
big_list.sort(key=itemgetter(3))
df = pd.DataFrame(big_list, columns=["name", "lon", "lat", "station"])
df.set_index("station", drop=True, inplace=True)
groups = df.groupby(df.index)
locations, popups = [], []
for station, info in groups:
sta_name = all_obs[station]
for lat, lon, name in zip(info.lat, info.lon, info.name):
locations.append([lat, lon])
popups.append(
"[{}]: {}".format(name.rstrip("fillmismatch").rstrip("#"), sta_name)
)
MarkerCluster(locations=locations, popups=popups, name="Cluster").add_to(m)
Explanation: The cells from [20] to [25] create a folium map with bokeh for the time-series at the observed points.
Note that we did mark the nearest model cell location used in the comparison.
End of explanation
titles = {
"coawst_4_use_best": "COAWST_4",
"global": "HYCOM",
"NECOFS_GOM3_FORECAST": "NECOFS_GOM3",
"NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST": "NECOFS_MassBay",
"OBS_DATA": "Observations",
}
from itertools import cycle
from bokeh.embed import file_html
from bokeh.models import HoverTool
from bokeh.palettes import Category20
from bokeh.plotting import figure
from bokeh.resources import CDN
from folium import IFrame
# Plot defaults.
colors = Category20[20]
colorcycler = cycle(colors)
tools = "pan,box_zoom,reset"
width, height = 750, 250
def make_plot(df, station):
p = figure(
toolbar_location="above",
x_axis_type="datetime",
width=width,
height=height,
tools=tools,
title=str(station),
)
for column, series in df.iteritems():
series.dropna(inplace=True)
if not series.empty:
if "OBS_DATA" not in column:
bias = mean_bias[str(station)][column]
skill = skill_score[str(station)][column]
line_color = next(colorcycler)
kw = dict(alpha=0.65, line_color=line_color)
else:
skill = bias = "NA"
kw = dict(alpha=1, color="crimson")
legend = f"{titles.get(column, column)}"
legend = legend.rstrip("fillmismatch").rstrip("#")
line = p.line(
x=series.index,
y=series.values,
legend=legend,
line_width=5,
line_cap="round",
line_join="round",
**kw,
)
p.add_tools(
HoverTool(
tooltips=[
("Name", "{}".format(titles.get(column, column))),
("Bias", bias),
("Skill", skill),
],
renderers=[line],
)
)
return p
def make_marker(p, station):
lons = stations_keys(config, key="lon")
lats = stations_keys(config, key="lat")
lon, lat = lons[station], lats[station]
html = file_html(p, CDN, station)
iframe = IFrame(html, width=width + 40, height=height + 80)
popup = folium.Popup(iframe, max_width=2650)
icon = folium.Icon(color="green", icon="stats")
marker = folium.Marker(location=[lat, lon], popup=popup, icon=icon)
return marker
dfs = load_ncs(config)
for station in dfs:
sta_name = all_obs[station]
df = dfs[station]
if df.empty:
continue
p = make_plot(df, station)
marker = make_marker(p, station)
marker.add_to(m)
folium.LayerControl().add_to(m)
m
Explanation: Here we use a dictionary with some models we expect to find so we can create a better legend for the plots. If any new models are found, we will use its filename in the legend as a default until we can go back and add a short name to our library.
End of explanation |
2,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 7
Step1: Part 1
Step2: Now we will do the timing analysis as well as print out the critical path
Step3: We are also able to print out the critical paths as well as get them
back as an array.
Step4: Part 2
Step5: Part 3
Step6: Part 4
Step7: Now to see the difference | Python Code:
import pyrtl
Explanation: Example 7: Reduction and Speed Analysis
After building a circuit, one might want to reduce the
hardware into simpler nets as well as analyze various metrics of the
hardware. This functionality is provided in the Passes part of PyRTL
and will be demonstrated here.
End of explanation
# Creating a sample hardware block
pyrtl.reset_working_block()
const_wire = pyrtl.Const(6, bitwidth=4)
in_wire2 = pyrtl.Input(bitwidth=4, name="input2")
out_wire = pyrtl.Output(bitwidth=5, name="output")
out_wire <<= const_wire + in_wire2
Explanation: Part 1: Timing Analysis
Timing and area usage are key considerations of any hardware block that one
makes.
PyRTL provides functions to perform these operations.
End of explanation
# Generating timing analysis information
print("Pre Synthesis:")
timing = pyrtl.TimingAnalysis()
timing.print_max_length()
Explanation: Now we will do the timing analysis as well as print out the critical path
End of explanation
critical_path_info = timing.critical_path()
Explanation: We are also able to print out the critical paths as well as get them
back as an array.
End of explanation
logic_area, mem_area = pyrtl.area_estimation(tech_in_nm=65)
est_area = logic_area + mem_area
print("Estimated Area of block", est_area, "sq mm")
print()
Explanation: Part 2: Area Analysis
PyRTL also provides estimates for the area that would be used up if the
circuit was printed as an ASIC
End of explanation
pyrtl.synthesize()
print("Pre Optimization:")
timing = pyrtl.TimingAnalysis()
timing.print_max_length()
for net in pyrtl.working_block().logic:
print(str(net))
print()
Explanation: Part 3: Synthesis
Synthesis is the operation of reducing the circuit into simpler components
The base synthesis function breaks down the more complex logic operations
into logic gates (keeps registers and memories intact) as well as reduces
all combinatorial logic into ops that only use wires of bitwidth one.
This synthesis allows for PyRTL to make optimizations to the net structure
as well as prepares it for further transformations on the PyRTL Toolchain
End of explanation
pyrtl.optimize()
Explanation: Part 4: Optimization
PyRTL has functions built in to eliminate unnecessary logic from the
circuit.
These functions are all done with a simple call:
End of explanation
print("Post Optimization:")
timing = pyrtl.TimingAnalysis()
timing.print_max_length()
for net in pyrtl.working_block().logic:
print(str(net))
Explanation: Now to see the difference
End of explanation |
2,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo of Max-value Entropy Search Acquisition
This notebook provides a demo of the max-value entropy search (MES) acquisition function of Wang et al [2017].
https
Step1: Set up our toy problem (1D optimisation of the forrester function) and collect 3 initial points.
Step2: Fit our GP model to the observed data.
Step3: Let's plot the resulting acquisition functions for the chosen model on the collected data. Note that MES takes a fraction of the time of ES to compute (plotted on a log scale). This difference becomes even more apparent as you increase the dimensions of the sample space. | Python Code:
### General imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
import GPy
import time
### Emukit imports
from emukit.test_functions import forrester_function
from emukit.core.loop.user_function import UserFunctionWrapper
from emukit.core import ContinuousParameter, ParameterSpace
from emukit.bayesian_optimization.acquisitions import EntropySearch, ExpectedImprovement, MaxValueEntropySearch
from emukit.model_wrappers.gpy_model_wrappers import GPyModelWrapper
### --- Figure config
LEGEND_SIZE = 15
Explanation: Demo of Max-value Entropy Search Acquisition
This notebook provides a demo of the max-value entropy search (MES) acquisition function of Wang et al [2017].
https://arxiv.org/pdf/1703.01968.pdf
MES provides the high optimization performance of other entropy-based acquisitions but, unlike standard entropy search, requires only a fraction of the computational cost. The computational savings are due to MES seeking to reduce our uncertainty in the value of the function at the optima (a 1-dimensional quantity) rather than uncertainty in the location of the optima (a d-dimensional quantity). Therefore, MES has a computational cost that scales linearly with the parameter space dimension d.
Our implementation of MES is controlled by two parameters: "num_samples" and "grid_size". "num_samples" controls how many Monte Carlo samples we use to calculate entropy reductions. As we only approximate a 1-d integral, "num_samples" does not need to be large or be increased for problems with large d (unlike standard entropy search). We recommend values between 5 and 15. "grid_size" controls the coarseness of the grid used to approximate the distribution of our max value and so must increase with d. We recommend 10,000*d. Note that as the grid must only be calculated once per BO step, the choice of "grid_size" does not have a large impact on computation time.
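As a quick sketch of how these two knobs come into play (assuming they are accepted as keyword arguments of the same name, and using the model wrapper and parameter space constructed later in this notebook):
```python
# Sketch only: emukit_model and space are defined further down in this notebook.
d = 1  # input dimension of the toy problem below
mes_custom = MaxValueEntropySearch(emukit_model, space, num_samples=10, grid_size=10000 * d)
```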
End of explanation
target_function, space = forrester_function()
x_plot = np.linspace(space.parameters[0].min, space.parameters[0].max, 200)[:, None]
y_plot = target_function(x_plot)
X_init = np.array([[0.2],[0.6], [0.9]])
Y_init = target_function(X_init)
plt.figure(figsize=(12, 8))
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.scatter(X_init,Y_init)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: Set up our toy problem (1D optimisation of the forrester function) and collect 3 initial points.
End of explanation
gpy_model = GPy.models.GPRegression(X_init, Y_init, GPy.kern.RBF(1, lengthscale=0.08, variance=20), noise_var=1e-10)
emukit_model = GPyModelWrapper(gpy_model)
Explanation: Fit our GP model to the observed data.
End of explanation
ei_acquisition = ExpectedImprovement(emukit_model)
es_acquisition = EntropySearch(emukit_model,space)
mes_acquisition = MaxValueEntropySearch(emukit_model,space)
t_0=time.time()
ei_plot = ei_acquisition.evaluate(x_plot)
t_ei=time.time()-t_0
es_plot = es_acquisition.evaluate(x_plot)
t_es=time.time()-t_ei
mes_plot = mes_acquisition.evaluate(x_plot)
t_mes=time.time()-t_es
plt.figure(figsize=(12, 8))
plt.plot(x_plot, (es_plot - np.min(es_plot)) / (np.max(es_plot) - np.min(es_plot)), "green", label="Entropy Search")
plt.plot(x_plot, (ei_plot - np.min(ei_plot)) / (np.max(ei_plot) - np.min(ei_plot)), "blue", label="Expected Improvement")
plt.plot(x_plot, (mes_plot - np.min(mes_plot)) / (np.max(mes_plot) - np.min(mes_plot)), "red", label="Max Value Entropy Search")
plt.legend(loc=1, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
plt.figure(figsize=(12, 8))
plt.bar(["ei","es","mes"],[t_ei,t_es,t_mes])
plt.xlabel("Acquisition Choice")
plt.yscale('log')
plt.ylabel("Calculation Time (secs)")
Explanation: Let's plot the resulting acquisition functions for the chosen model on the collected data. Note that MES takes a fraction of the time of ES to compute (plotted on a log scale). This difference becomes even more apparent as you increase the dimensions of the sample space.
End of explanation |
2,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Bayesian Switchpoint Analysis
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Dataset
The dataset is from here. Note, there is another version of this example floating around, but it has “missing” data – in which case you’d need to impute missing values. (Otherwise your model will not ever leave its initial parameters because the likelihood function will be undefined.)
Step3: Probabilistic Model
The model assumes a “switch point” (e.g. a year during which safety regulations changed), and Poisson-distributed disaster rate with constant (but potentially different) rates before and after that switch point.
The actual disaster count is fixed (observed); any sample of this model will need to specify both the switchpoint and the “early” and “late” rate of disasters.
Original model from pymc3 documentation example
Step4: The above code defines the model via JointDistributionSequential distributions. The disaster_rate functions are called with an array of [0, ..., len(years)-1] to produce a vector of len(years) random variables – the years before the switchpoint are early_disaster_rate, the ones after late_disaster_rate (modulo the sigmoid transition).
Here is a sanity-check that the target log prob function is sane
Step5: HMC to do Bayesian inference
We define the number of results and burn-in steps required; the code is mostly modeled after the documentation of tfp.mcmc.HamiltonianMonteCarlo. It uses an adaptive step size (otherwise the outcome is very sensitive to the step size value chosen). We use values of one as the initial state of the chain.
This is not the full story though. If you go back to the model definition above, you’ll note that some of the probability distributions are not well-defined on the whole real number line. Therefore we constrain the space that HMC shall examine by wrapping the HMC kernel with a TransformedTransitionKernel that specifies the forward bijectors to transform the real numbers onto the domain that the probability distribution is defined on (see comments in the code below).
Step6: Run both models in parallel | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15,8)
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
Explanation: Bayesian Switchpoint Analysis
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Bayesian_Switchpoint_Analysis"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Switchpoint_Analysis.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Switchpoint_Analysis.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Switchpoint_Analysis.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This notebook reimplements and extends the Bayesian “Change point analysis” example from the pymc3 documentation.
Prerequisites
End of explanation
disaster_data = np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
years = np.arange(1851, 1962)
plt.plot(years, disaster_data, 'o', markersize=8);
plt.ylabel('Disaster count')
plt.xlabel('Year')
plt.title('Mining disaster data set')
plt.show()
Explanation: Dataset
The dataset is from here. Note, there is another version of this example floating around, but it has “missing” data – in which case you’d need to impute missing values. (Otherwise your model will not ever leave its initial parameters because the likelihood function will be undefined.)
End of explanation
def disaster_count_model(disaster_rate_fn):
disaster_count = tfd.JointDistributionNamed(dict(
e=tfd.Exponential(rate=1.),
l=tfd.Exponential(rate=1.),
s=tfd.Uniform(0., high=len(years)),
d_t=lambda s, l, e: tfd.Independent(
tfd.Poisson(rate=disaster_rate_fn(np.arange(len(years)), s, l, e)),
reinterpreted_batch_ndims=1)
))
return disaster_count
def disaster_rate_switch(ys, s, l, e):
return tf.where(ys < s, e, l)
def disaster_rate_sigmoid(ys, s, l, e):
return e + tf.sigmoid(ys - s) * (l - e)
model_switch = disaster_count_model(disaster_rate_switch)
model_sigmoid = disaster_count_model(disaster_rate_sigmoid)
Explanation: Probabilistic Model
The model assumes a “switch point” (e.g. a year during which safety regulations changed), and Poisson-distributed disaster rate with constant (but potentially different) rates before and after that switch point.
The actual disaster count is fixed (observed); any sample of this model will need to specify both the switchpoint and the “early” and “late” rate of disasters.
Original model from pymc3 documentation example:
$$
\begin{align}
(D_t|s,e,l)&\sim \text{Poisson}(r_t), \
& \,\quad\text{with}\; r_t = \begin{cases}e & \text{if}\; t < s \\ l & \text{if}\; t \ge s\end{cases} \
s&\sim\text{Discrete Uniform}(t_l,\,t_h) \
e&\sim\text{Exponential}(r_e)\
l&\sim\text{Exponential}(r_l)
\end{align}
$$
However, the mean disaster rate $r_t$ has a discontinuity at the switchpoint $s$, which makes it not differentiable. Thus it provides no gradient signal to the Hamiltonian Monte Carlo (HMC) algorithm – but because the $s$ prior is continuous, HMC’s fallback to a random walk is good enough to find the areas of high probability mass in this example.
As a second model, we modify the original model using a sigmoid “switch” between e and l to make the transition differentiable, and use a continuous uniform distribution for the switchpoint $s$. (One could argue this model is more true to reality, as a “switch” in mean rate would likely be stretched out over multiple years.) The new model is thus:
$$
\begin{align}
(D_t|s,e,l)&\sim\text{Poisson}(r_t), \
& \,\quad \text{with}\; r_t = e + \frac{1}{1+\exp(s-t)}(l-e) \
s&\sim\text{Uniform}(t_l,\,t_h) \
e&\sim\text{Exponential}(r_e)\
l&\sim\text{Exponential}(r_l)
\end{align}
$$
In the absence of more information we assume $r_e = r_l = 1$ as parameters for the priors. We’ll run both models and compare their inference results.
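Since the joint distribution is generative, a quick prior sanity check is possible (a small sketch, not part of the original walkthrough): one draw returns a dict with the switchpoint, both rates, and a synthetic disaster-count series.
```python
# One prior draw from the switch model defined above (sketch).
prior_draw = model_switch.sample()
print(prior_draw['s'], prior_draw['e'], prior_draw['l'])
print(prior_draw['d_t'].shape)  # one Poisson count per year
```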
End of explanation
def target_log_prob_fn(model, s, e, l):
return model.log_prob(s=s, e=e, l=l, d_t=disaster_data)
models = [model_switch, model_sigmoid]
print([target_log_prob_fn(m, 40., 3., .9).numpy() for m in models]) # Somewhat likely result
print([target_log_prob_fn(m, 60., 1., 5.).numpy() for m in models]) # Rather unlikely result
print([target_log_prob_fn(m, -10., 1., 1.).numpy() for m in models]) # Impossible result
Explanation: The above code defines the model via JointDistributionSequential distributions. The disaster_rate functions are called with an array of [0, ..., len(years)-1] to produce a vector of len(years) random variables – the years before the switchpoint are early_disaster_rate, the ones after late_disaster_rate (modulo the sigmoid transition).
Here is a sanity-check that the target log prob function is sane:
End of explanation
num_results = 10000
num_burnin_steps = 3000
@tf.function(autograph=False, jit_compile=True)
def make_chain(target_log_prob_fn):
kernel = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.05,
num_leapfrog_steps=3),
bijector=[
# The switchpoint is constrained between zero and len(years).
# Hence we supply a bijector that maps the real numbers (in a
# differentiable way) to the interval (0;len(years))
tfb.Sigmoid(low=0., high=tf.cast(len(years), dtype=tf.float32)),
# Early and late disaster rate: The exponential distribution is
# defined on the positive real numbers
tfb.Softplus(),
tfb.Softplus(),
])
kernel = tfp.mcmc.SimpleStepSizeAdaptation(
inner_kernel=kernel,
num_adaptation_steps=int(0.8*num_burnin_steps))
states = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
# The three latent variables
tf.ones([], name='init_switchpoint'),
tf.ones([], name='init_early_disaster_rate'),
tf.ones([], name='init_late_disaster_rate'),
],
trace_fn=None,
kernel=kernel)
return states
switch_samples = [s.numpy() for s in make_chain(
lambda *args: target_log_prob_fn(model_switch, *args))]
sigmoid_samples = [s.numpy() for s in make_chain(
lambda *args: target_log_prob_fn(model_sigmoid, *args))]
switchpoint, early_disaster_rate, late_disaster_rate = zip(
switch_samples, sigmoid_samples)
Explanation: HMC to do Bayesian inference
We define the number of results and burn-in steps required; the code is mostly modeled after the documentation of tfp.mcmc.HamiltonianMonteCarlo. It uses an adaptive step size (otherwise the outcome is very sensitive to the step size value chosen). We use values of one as the initial state of the chain.
This is not the full story though. If you go back to the model definition above, you’ll note that some of the probability distributions are not well-defined on the whole real number line. Therefore we constrain the space that HMC shall examine by wrapping the HMC kernel with a TransformedTransitionKernel that specifies the forward bijectors to transform the real numbers onto the domain that the probability distribution is defined on (see comments in the code below).
End of explanation
def _desc(v):
return '(median: {}; 95%ile CI: $[{}, {}]$)'.format(
*np.round(np.percentile(v, [50, 2.5, 97.5]), 2))
for t, v in [
('Early disaster rate ($e$) posterior samples', early_disaster_rate),
('Late disaster rate ($l$) posterior samples', late_disaster_rate),
('Switch point ($s$) posterior samples', years[0] + switchpoint),
]:
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True)
for (m, i) in (('Switch', 0), ('Sigmoid', 1)):
a = ax[i]
a.hist(v[i], bins=50)
a.axvline(x=np.percentile(v[i], 50), color='k')
a.axvline(x=np.percentile(v[i], 2.5), color='k', ls='dashed', alpha=.5)
a.axvline(x=np.percentile(v[i], 97.5), color='k', ls='dashed', alpha=.5)
a.set_title(m + ' model ' + _desc(v[i]))
fig.suptitle(t)
plt.show()
Explanation: Run both models in parallel:
Visualize the result
We visualize the result as histograms of samples of the posterior distribution for the early and late disaster rate, as well as the switchpoint. The histograms are overlaid with a solid line representing the sample median, as well as the 95%ile credible interval bounds as dashed lines.
End of explanation |
2,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probability Calibration with SplineCalib
This workbook demonstrates the SplineCalib algorithm detailed in the paper
"Spline-Based Probability Calibration" https
Step1: In the next few cells, we load in some data, inspect it, select columns for our features and outcome (mortality) and fill in missing values with the median of that column.
Step2: Now we divide the data into training, calibration, and test sets. The training set will be used to fit the model, the calibration set will be used to calibrate the probabilities, and the test set will be used to evaluate the performance. We use a 60-20-20 split (achived by first doing 80/20 and then splitting the 80 by 75/25)
Step3: Next, we fit a Random Forest model to our training data. Then we use that model to predict "probabilities" on our validation and test sets.
I use quotes on "probabilities" because these numbers, which are the percentage of trees that voted "yes" are better understood as mere scores. A higher value should generally indicate a higher probability of mortality. However, there is no reason to expect these to be well-calibrated probabilities. The fact that, say, 60% of the trees voted "yes" on a particular case does not mean that that case has a 60% probability of mortality.
We will demonstrate this empirically later.
Step4: Model Evaluation
You are pretty happy with this model since it has an AUROC of .859. But someone asks you if the probability is well-calibrated. In other words, if we looked at all the time your model predicted a mortality probability of 40%, did around 40% of them actually die? Or was it 20%, or 80%? It turns our that AUROC just measures the ranking of cases and does not evaluate if the probabilities are meaningful. In fact, if you multiply all of your predicted probabilities by .1, you would still get the same AUROC.
Checking Calibration
How do we know if a model is well-calibrated? One way to check is to create a "Reliability Diagram". The idea behind the reliability diagram is the following
Step5: Above, we see that the model is largely under-predicting the probability of mortality in the range .35 to .85. For example, when the model predicts a probability of between .6 and .65, more than 80% of those patients died. And the error bars indicate that this is not likely due to random error. In other words, our model is poorly calibrated.
Calibrating a Model
Since our current model is not well-calibrated, we would like to fix this. We want that when our model says 60% chance of mortality, it means 60% and not 40% or 80%. We will discuss two ways to fix this
Step6: From the above, we see that not only do our reliability diagrams look better, but our log_loss values have substantially improved. Log_loss measures not only the discriminative power of the model but also how well-calibrated it is.
Approach 2
Step7: We see above that the cross-validated approach gives similar performance (slightly better in this case). Additionally, we did not use the 20% of data set aside for calibration at all in the second approach. We could use approach 2 on the entire training and calibration data and (presumably) get an even better model.
Step8: Indeed, we get a slightly better AUC and log_loss both before and after calibration, due to having a larger training set for our model to learn from
Serializing Models
The SplineCalib object can be saved to disk easily with joblib.dump() and reloaded with joblib.load()
Step9: Comparison to Other Calibration Approaches
Here we compare SplineCalib to Isotonic Regression, Platt Scaling and Beta Calibration. | Python Code:
# "pip install ml_insights" in terminal if needed
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import ml_insights as mli
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, brier_score_loss, roc_auc_score
mli.__version__
Explanation: Probability Calibration with SplineCalib
This workbook demonstrates the SplineCalib algorithm detailed in the paper
"Spline-Based Probability Calibration" https://arxiv.org/abs/1809.07751
We build a random forest model and demonstrate that using the vote percentage as a probability is not well-calibrated. We then show different approaches on how to use SplineCalib to appropriately calibrate the model.
We also show how to serialize the calibration object to be able to save it on disk and re-use it.
MIMIC ICU Data*
We illustrate this process using a mortality model on the MIMIC ICU data
*MIMIC-III, a freely accessible critical care database. Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. Scientific Data (2016).
https://mimic.physionet.org
End of explanation
# Load dataset derived from the MMIC database
lab_aug_df = pd.read_csv("data/lab_vital_icu_table.csv")
lab_aug_df.head(10)
# Choose a subset of variables
X = lab_aug_df.loc[:,['aniongap_min', 'aniongap_max',
'albumin_min', 'albumin_max', 'bicarbonate_min', 'bicarbonate_max',
'bilirubin_min', 'bilirubin_max', 'creatinine_min', 'creatinine_max',
'chloride_min', 'chloride_max',
'hematocrit_min', 'hematocrit_max', 'hemoglobin_min', 'hemoglobin_max',
'lactate_min', 'lactate_max', 'platelet_min', 'platelet_max',
'potassium_min', 'potassium_max', 'ptt_min', 'ptt_max', 'inr_min',
'inr_max', 'pt_min', 'pt_max', 'sodium_min', 'sodium_max', 'bun_min',
'bun_max', 'wbc_min', 'wbc_max','sysbp_max', 'sysbp_mean', 'diasbp_min', 'diasbp_max', 'diasbp_mean',
'meanbp_min', 'meanbp_max', 'meanbp_mean', 'resprate_min',
'resprate_max', 'resprate_mean', 'tempc_min', 'tempc_max', 'tempc_mean',
'spo2_min', 'spo2_max', 'spo2_mean']]
y = lab_aug_df['hospital_expire_flag']
# Impute the median for in each column to replace NA's
median_vec = [X.iloc[:,i].median() for i in range(len(X.columns))]
for i in range(len(X.columns)):
X.iloc[:,i].fillna(median_vec[i],inplace=True)
Explanation: In the next few cells, we load in some data, inspect it, select columns for our features and outcome (mortality) and fill in missing values with the median of that column.
End of explanation
X_train_calib, X_test, y_train_calib, y_test = train_test_split(X, y, test_size=0.2, random_state=942)
X_train, X_calib, y_train, y_calib = train_test_split(X_train_calib, y_train_calib, test_size=0.25, random_state=942)
X_train.shape, X_calib.shape, X_test.shape
Explanation: Now we divide the data into training, calibration, and test sets. The training set will be used to fit the model, the calibration set will be used to calibrate the probabilities, and the test set will be used to evaluate the performance. We use a 60-20-20 split (achived by first doing 80/20 and then splitting the 80 by 75/25)
End of explanation
rfmodel1 = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942, n_jobs=-1 )
rfmodel1.fit(X_train,y_train)
preds_test_uncalib = rfmodel1.predict_proba(X_test)[:,1]
preds_test_uncalib[:10]
roc_auc_score(y_test, preds_test_uncalib), roc_auc_score(y_test, .1*preds_test_uncalib)
Explanation: Next, we fit a Random Forest model to our training data. Then we use that model to predict "probabilities" on our validation and test sets.
I use quotes on "probabilities" because these numbers, which are the percentage of trees that voted "yes" are better understood as mere scores. A higher value should generally indicate a higher probability of mortality. However, there is no reason to expect these to be well-calibrated probabilities. The fact that, say, 60% of the trees voted "yes" on a particular case does not mean that that case has a 60% probability of mortality.
We will demonstrate this empirically later.
End of explanation
mli.plot_reliability_diagram(y_test, preds_test_uncalib, marker='.')
Explanation: Model Evaluation
You are pretty happy with this model since it has an AUROC of .859. But someone asks you if the probability is well-calibrated. In other words, if we looked at all the times your model predicted a mortality probability of 40%, did around 40% of them actually die? Or was it 20%, or 80%? It turns out that AUROC just measures the ranking of cases and does not evaluate if the probabilities are meaningful. In fact, if you multiply all of your predicted probabilities by .1, you would still get the same AUROC.
Checking Calibration
How do we know if a model is well-calibrated? One way to check is to create a "Reliability Diagram". The idea behind the reliability diagram is the following:
- Bin the interval [0,1] into smaller subsets (e.g. [0, 0.05], [0.05, .1], ... [.95,1])
- Find the empirical probability for each bin: among the cases whose predicted probability fell into that bin, compute the fraction of actual "yes" outcomes (if a bin received 20 cases and 9 of them were "yes", the empirical probability is .45)
- Plot the average predicted probability in each bin (x-axis) vs the empirical probability (y-axis)
- Put error bars based on the size of the bin
- When the dots are (significantly) above the line y=x, the model is under-predicting the true probability; if they are below the line, the model is over-predicting the true probability.
A small hand-rolled sketch of this binning appears in the next cell.
End of explanation
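As a rough, hand-rolled illustration of the binning logic just described (illustrative only -- the bin width and all variable names below are our own choices, not part of the mli API, which does this internally in plot_reliability_diagram):
# Hand-rolled sketch of the reliability-diagram binning (illustrative only).
y_arr = np.asarray(y_test)
bins = np.linspace(0, 1, 21)                                   # 20 equal-width bins on [0, 1]
bin_ids = np.clip(np.digitize(preds_test_uncalib, bins) - 1, 0, 19)
for b in range(20):
    in_bin = bin_ids == b
    if in_bin.sum() > 0:
        print('bin [{:.2f}, {:.2f}): n={:4d}, mean predicted={:.3f}, empirical={:.3f}'.format(
            bins[b], bins[b + 1], in_bin.sum(),
            preds_test_uncalib[in_bin].mean(), y_arr[in_bin].mean()))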
# Define SplineCalib object
calib1 = mli.SplineCalib()
# Use the model to make predictions on the calibration set
preds_cset = rfmodel1.predict_proba(X_calib)[:,1]
# Fit the calibration object on the calibration set
calib1.fit(preds_cset, y_calib)
# Visually inspect the quality of the calibration on the calibration set
mli.plot_reliability_diagram(y_calib, preds_cset);
calib1.show_calibration_curve()
# Visually inspect the quality of the calibration on the test set
calib1.show_calibration_curve()
mli.plot_reliability_diagram(y_test, preds_test_uncalib);
calib1.show_spline_reg_plot()
# Calibrate the previously generated predictions from the model on the test set
preds_test_calib1 = calib1.calibrate(preds_test_uncalib)
# Visually inspect the calibration of the newly calibrated predictions
mli.plot_reliability_diagram(y_test, preds_test_calib1);
## Compare the log_loss values
log_loss(y_test, preds_test_uncalib),log_loss(y_test, preds_test_calib1)
Explanation: Above, we see that the model is largely under-predicting the probability of mortality in the range .35 to .85. For example, when the model predicts a probability of between .6 and .65, more than 80% of those patients died. And the error bars indicate that this is not likely due to random error. In other words, our model is poorly calibrated.
Calibrating a Model
Since our current model is not well-calibrated, we would like to fix this. We want it to be the case that, when our model says a 60% chance of mortality, it means 60% and not 40% or 80%. We will discuss two ways to fix this:
Use an independent calibration set.
Use cross-validation to generate scores from the training set.
The first method is simpler, but requires a separate data set, meaning that you will have less data to train your model with. It is good to use if you have plenty of data. It is also a useful approach if you think your distribution has "shifted" but the underlying signal in the model is fundamentally unchanged. In some cases it may make sense to "re-calibrate" a model on the "current" population without doing a full re-training.
The second approach takes more time, but is generally more data-efficient. We generate a set of cross-validated predictions on the training data. These predictions come from models that are close to, but not exactly identical to, your original model. However, this discrepancy is usually minor and the calibration approach works well. For details, see the "Spline-Based Probability Calibration" paper referenced above.
Approach 1: Independent validation set
First let us demonstrate how we would fix this using the independent validation set.
SplineCalib object
The SplineCalib object is similar in spirit to preprocessors / data transformations in scikit-learn. The two main operations are fit and calibrate (akin to fit and transform in sklearn).
To fit a calibration object, we give it a set of uncalibrated predictions from a model, and the corresponding truth set. The fit routine will learn the spline curve that best maps the uncalibrated scores to actual probabilities.
End of explanation
# Get the cross validated predictions given a model and training data.
cv_preds_train = mli.cv_predictions(rfmodel1, X_train, y_train, clone_model=True)
calib2 = mli.SplineCalib()
calib2.fit(cv_preds_train, y_train)
# Show the reliability diagram for the cross-validated predictions, and the calibration curve
calib2.show_calibration_curve()
mli.plot_reliability_diagram(y_train, cv_preds_train[:,1]);
mli.plot_reliability_diagram(y_test, calib2.calibrate(preds_test_uncalib));
preds_test_calib2 = calib2.calibrate(preds_test_uncalib)
log_loss(y_test, preds_test_uncalib), log_loss(y_test, preds_test_calib2)
Explanation: From the above, we see that not only do our reliability diagrams look better, but our log_loss values have substantially improved. Log_loss measures not only the discriminative power of the model but also how well-calibrated it is.
Approach 2: Cross-validation on the training data
The reason to use an independent calibration set (rather than just the training data) is that how the model performs on the training data (that it has already seen) is not indicative of how it will behave on data it has not seen before. We want the calibration to correct how the model will behave on "new" data, not the training data.
Another approach is to take a cross-validation approach to generating calibration data. We divide the training data into k "folds", leave one fold out, train our model (i.e. the choice of model and hyperparameter settings) on the remaining k-1 folds, and then make predictions on the left-out fold. After doing this process k times, each time leaving out a different fold, we will have a set of predictions, each of which was generated by 1 of k slightly different models, but was always generated by a model that did not see that training point. Done properly (assuming no "leakage" across the folds), this set of predictions and answers will serve as an appropriate calibration set.
ML-Insights (the package containing SplineCalib, as well as other functionality) has a simple function to generate these cross-validated predictions. We demonstrate it below.
End of explanation
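As a point of reference only (this is not what mli does internally, just a familiar stand-in, and the cv=5 choice below is ours), roughly comparable out-of-fold scores could also be produced with scikit-learn's cross_val_predict:
from sklearn.model_selection import cross_val_predict
# Out-of-fold probabilities, analogous in spirit to mli.cv_predictions (sketch; cv=5 is our choice).
cv_preds_sk = cross_val_predict(
    RandomForestClassifier(n_estimators=500, class_weight='balanced_subsample',
                           random_state=942, n_jobs=-1),
    X_train, y_train, cv=5, method='predict_proba')
cv_preds_sk[:5]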
rfmodel2 = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942, n_jobs=-1 )
rfmodel2.fit(X_train_calib,y_train_calib)
preds_test_2_uncalib = rfmodel2.predict_proba(X_test)[:,1]
# Get the cross validated predictions given a model and training data.
cv_preds_train_calib = mli.cv_predictions(rfmodel2, X_train_calib, y_train_calib, stratified=True, clone_model=True)
calib3 = mli.SplineCalib()
calib3.fit(cv_preds_train_calib, y_train_calib)
# Show the reliability diagram for the cross-validated predictions, and the calibration curve
calib3.show_calibration_curve()
mli.plot_reliability_diagram(y_train_calib, cv_preds_train_calib[:,1]);
preds_test_calib3 = calib3.calibrate(preds_test_2_uncalib)
log_loss(y_test, preds_test_2_uncalib), log_loss(y_test, preds_test_calib3)
roc_auc_score(y_test, preds_test_2_uncalib), roc_auc_score(y_test, preds_test_calib3)
Explanation: We see above that the cross-validated approach gives similar performance (slightly better in this case). Additionally, we did not use the 20% of data set aside for calibration at all in the second approach. We could use approach 2 on the entire training and calibration data and (presumably) get an even better model.
End of explanation
import joblib
joblib.dump(calib3, 'calib3.pkl')
calib3_reloaded=joblib.load('calib3.pkl')
mli.plot_reliability_diagram(y_test, calib3_reloaded.calibrate(preds_test_2_uncalib));
calib3_reloaded.show_calibration_curve()
log_loss(y_test, calib3_reloaded.calibrate(preds_test_2_uncalib))
Explanation: Indeed, we get a slightly better AUC and a slightly lower log_loss, both before and after calibration, due to having a larger training set for our model to learn from.
Serializing Models
The SplineCalib object can be saved to disk easily with joblib.dump() and reloaded with joblib.load()
End of explanation
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from betacal import BetaCalibration
# Fit three-parameter beta calibration
bc = BetaCalibration(parameters="abm")
bc.fit(cv_preds_train_calib[:,1], y_train_calib)
# Fit Isotonic Regression
iso = IsotonicRegression()
iso.fit(cv_preds_train_calib[:,1], y_train_calib)
# Fit Platt scaling (logistic calibration)
lr = LogisticRegression(C=99999999999)
lr.fit(cv_preds_train_calib[:,1].reshape(-1,1), y_train_calib)
tvec = np.linspace(0,1,1001)
bc_probs = bc.predict(tvec)
iso_probs = iso.predict(tvec)
platt_probs = lr.predict_proba(tvec.reshape(-1,1))[:,1]
splinecalib_probs = calib3.calibrate(tvec)
#calib3.show_calibration_curve()
mli.plot_reliability_diagram(y_train_calib, cv_preds_train_calib[:,1], error_bars=False);
plt.plot(tvec, splinecalib_probs, label='SplineCalib')
plt.plot(tvec, bc_probs, label='Beta')
plt.plot(tvec, iso_probs, label='Isotonic')
plt.plot(tvec, platt_probs, label='Platt')
plt.legend()
plt.title('Calibration Curves for different methods');
preds_test_bc = bc.predict(preds_test_2_uncalib)
preds_test_iso = iso.predict(preds_test_2_uncalib)
preds_test_platt = lr.predict_proba(preds_test_2_uncalib.reshape(-1,1))[:,1]
preds_test_splinecalib = calib3.calibrate(preds_test_2_uncalib)
bc_loss = log_loss(y_test, preds_test_bc)
iso_loss = log_loss(y_test, preds_test_iso)
platt_loss = log_loss(y_test, preds_test_platt)
splinecalib_loss = log_loss(y_test, preds_test_splinecalib)
print('Platt loss = {}'.format(np.round(platt_loss,5)))
print('Beta Calib loss = {}'.format(np.round(bc_loss,5)))
print('Isotonic loss = {}'.format(np.round(iso_loss,5)))
print('SplineCalib loss = {}'.format(np.round(splinecalib_loss,5)))
Explanation: Comparison to Other Calibration Approaches
Here we compare SplineCalib to Isotonic Regression, Platt Scaling and Beta Calibration.
End of explanation |
2,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some Multiplicative Functionals
Daisuke Oyama, Thomas J. Sargent and John Stachurski
Step1: Plan of the notebook
In other quant-econ lectures
("Markov Asset Pricing" and
"The Lucas Asset Pricing Model"),
we have studied the celebrated Lucas asset pricing model (Lucas (1978)) that is cast in a setting in which the key objects of the theory,
namely, a stochastic discount factor process, an aggregate consumption process, and an
asset payout process, are all taken to be stationary.
In this notebook, we shall learn about some tools that allow us to extend asset pricing models to settings in which neither the stochastic discount factor process nor the asset's payout process is a stationary process. The key tool is the class of multiplicative functionals from the stochastic process literature that Hansen and Scheinkman (2009) have adapted so that they can be applied to asset pricing and other interesting macroeconomic problems.
In this notebook, we confine ourselves to studying a special type of multiplicative functional, namely, multiplicative functionals driven by finite state Markov chains. We'll learn about some of their properties and applications. Among other things, we'll
obtain Hansen and Scheinkman's more general multiplicative decomposition of our particular type of multiplicative functional into the following three primitive types of multiplicative functions
Step2: Clearly, this Markov chain is irreducible
Step3: Create a MultFunctionalFiniteMarkov instance
Step4: The dominant eigenvalue, denoted $\exp(\eta)$ above, of $\widetilde P$ is
Step5: The value $\eta$ is
Step6: The (normalized) dominant eigenvector $e$ of $\widetilde P$ is
Step7: Let us simulate our MultFunctionalFiniteMarkov
Step8: The simulation results are contained in res.
Let's check that M and M_tilde satisfy the identity from their definition (up to numerical errors).
Step9: Likelihood ratio processes
A likelihood ratio process is a multiplicative martingale with mean $1$.
A multiplicative martingale process ${\widetilde M_t }_{t=0}^\infty$ that starts from $\widetilde M_0 = 1$ is a likelihood ratio process.
Evidently, a likelihood ratio process satisfies
$$ E [\widetilde M_t \mid {\mathfrak F}_0] = 1 .$$
Hansen and Sargent (2017) point out that likelihood ratio processes have the following peculiar property
Step10: We revisit the peculiar sample path property at the end of this notebook.
Stochastic discount factor and exponentially changing asset payouts
Define a matrix ${\sf S}$ whose $(x, y)$th element is ${\sf S}(x,y) = \exp(G_S(x,y))$, where $G_S(x,y)$ is a stochastic discount rate
for moving from state $x$ at time $t$ to state $y$ at time $t+1$.
A stochastic discount factor process ${S_t}_{t=0}^\infty$ is governed by the multiplicative functional
Step11: (1) Display the $\widetilde M$ matrices for $S_t$ and $d_t$.
Step12: (2) Plot sample paths of $S_t$ and $d_t$.
Step13: (2) Print $v$.
Step14: (3) Plot sample paths of $p_t$ and $d_t$.
Step15: (5) Experiment with a different $G_S$ matrix.
Step16: Lucas asset pricing model with growth
As an example of our model of a stochastic discount factor and payout process, we'll adapt a version of the famous Lucas (1978) asset pricing model to have an exponentially growing aggregate consumption endowment.
We'll use CRRA utility
$u(c) = c^{1-\gamma}/(1-\gamma)$,
a common specification in applications of the Lucas model.
So now we let $d_t = C_t$, aggregate consumption, and we let
$$\frac{S_{t+1}}{S_t} = \exp(-\delta) \left(\frac{C_{t+1}}{C_t} \right)^{-\gamma} ,$$
where $\delta > 0$ is a rate of time preference and $\gamma > 0$ is a coefficient of relative risk aversion.
To obtain this special case of our model, we set
$$ {\sf S}(x, y) = \exp(-\delta) {\sf D}(x, y)^{-\gamma}, $$
where we now interpret ${\sf D}(x, y)$ as the multiplicative rate of growth of the level of aggregate consumption between $t$ and $t+1$ when $X_t = x$ and $X_{t+1} = y$.
Term structure of interest rates
When the Markov state $X_t = x$ at time $t$, the price of a risk-free zero-coupon bond paying one unit of consumption at time $t+j$
is
$$ p_{j,t} = E \left[ \frac{S_{t+j}}{S_t} \Bigm| X_t = x \right]. $$
Let the matrix $\widehat{P}$ be given by $\widehat{P}(x, y) = P(x, y) {\sf S}(x, y)$
and apply the above forecasting formula to deduce
$$
p_{j,t} = \left( \sum_{y \in S} \widehat P^j(x, y) \right).
$$
The yield $R_{jt}$ on a $j$ period risk-free bond satisfies
$$ p_{jt} = \exp(-j R_{jt}) $$
or
$$ R_{jt} = -\frac{\log(p_{jt})}{j}. $$
For a given $t$,
$$ \begin{bmatrix} R_{1t} & R_{2t} & \cdots & R_{Jt} \end{bmatrix} $$
is the term structure of interest rates on risk-free zero-coupon bonds.
Simulating the Lucas asset pricing model
Write $y$ for the process of quarterly per capita consumption growth with mean $\mu_C$.
In the following example,
we assume that $y - \mu_C$ follows a discretized version of an AR(1) process
(while independent of the Markov state),
where the discrete approximation is derived by the routine
tauchen
from quantecon.markov.
Step17: Create a LucasTreeFiniteMarkov instance
Step18: Simulate the model
Step19: Plotting the term structure of interest rates
Step20: The term structure of interest rates R is a sequence (of length J)
of vectors (of length n each).
Instead of plotting the whole R,
we plot the sequences for the "low", "middle", and "high" states.
Here we define those states as follows.
The vector $(p_{jt}|X_t = x)_{x \in S}$, if appropriately rescaled,
converges as $j \to \infty$
to an eigenvector of $\widehat P$ that corresponds to the dominant eigenvalue,
which equals mf.e times some constant.
Thus call the states that correspond to the smallest, largest, and middle values of mf.e
the high, low, and middle states.
Step21: Another class of examples
Let the elements of ${\sf D}$ (i.e., the multiplicative growth rates of the dividend or consumption process) be, for example,
$$ {\sf D} = \begin{bmatrix} .95 & .975 & 1 \cr
.975 & 1 & 1.025 \cr
1 & 1.025 & 1.05 \end{bmatrix}.$$
Here the realized growth rate depends on both $X_t$ and $X_{t+1}$ -- i.e., the value of the state last period (i) and this period (j).
Here we have imposed symmetry to save parameters, but of course there is no reason to do that.
We can combine this specification with various specifications of $P$ matrices e.g., an "i.i.d." state evolution process would be represented with $P$ in which all rows are identical. Even that simple specification
can produce some interesting outcomes with the above ${\sf D}$.
We'll try this little $3 \times 3$ example with a Lucas model below.
But first a word of caution.
We have to choose values for the consumption growth rate matrix $G_C$ and
the transition matrix $P$ so that pertinent eigenvalues are smaller than one in modulus.
This check is implemented in the code.
Step22: The peculiar sample path property revisited
Consider again the multiplicative martingale associated with the $5$ state Lucas model studied earlier.
Remember that by construction, this is a likelihood ratio process.
Here we'll simulate a number of paths and build up histograms of $\widetilde M_t$ at various values of $t$.
These histograms should help us understand what is going on to generate the peculiar property mentioned above.
As $t \rightarrow +\infty$, notice that
more and more probability mass piles up near zero, $\ldots$ but
a longer and longer thin right tail emerges. | Python Code:
%matplotlib inline
import itertools
import numpy as np
import matplotlib.pyplot as plt
from quantecon.markov import tauchen, MarkovChain
from mult_functional import MultFunctionalFiniteMarkov
from asset_pricing_mult_functional import (
AssetPricingMultFiniteMarkov, LucasTreeFiniteMarkov
)
Explanation: Some Multiplicative Functionals
Daisuke Oyama, Thomas J. Sargent and John Stachurski
End of explanation
# Transition probability matrix
P = [[0.4, 0.6],
[0.2, 0.8]]
# Instance of MarkovChain from quantecon.markov
mc = MarkovChain(P)
Explanation: Plan of the notebook
In other quant-econ lectures
("Markov Asset Pricing" and
"The Lucas Asset Pricing Model"),
we have studied the celebrated Lucas asset pricing model (Lucas (1978)) that is cast in a setting in which the key objects of the theory,
namely, a stochastic discount factor process, an aggregate consumption process, and an
asset payout process, are all taken to be stationary.
In this notebook, we shall learn about some tools that allow us to extend asset pricing models to settings in which neither the stochastic discount factor process nor the asset's payout process is a stationary process. The key tool is the class of multiplicative functionals from the stochastic process literature that Hansen and Scheinkman (2009) have adapted so that they can be applied to asset pricing and other interesting macroeconomic problems.
In this notebook, we confine ourselves to studying a special type of multiplicative functional, namely, multiplicative functionals driven by finite state Markov chains. We'll learn about some of their properties and applications. Among other things, we'll
obtain Hansen and Scheinkman's more general multiplicative decomposition of our particular type of multiplicative functional into the following three primitive types of multiplicative functions:
a nonstochastic process displaying deterministic exponential growth;
a multiplicative martingale or likelihood ratio process; and
a stationary stochastic process that is the exponential of another stationary process.
The first two of these primitive types are nonstationary while the third is stationary. The first is nonstochastic, while the second and third are stochastic.
After taking a look at the behavior of these three primitive components, we'll apply this structure to model
a stochastically, exponentially declining stochastic discount factor process;
a stochastically, exponentially growing or declining asset payout or dividend process;
the prices of claims to exponentially growing or declining payout processes; and
a theory of the term structure of interest rates.
We begin by describing a basic setting that we'll use in several applications later in this notebook.
Multiplicative functional driven by a finite state Markov chain
Let $S$ be the integers ${0, \ldots, n-1}$. Because we study stochastic
processes taking values in $S$, elements of $S$ will be denoted by symbols
such as $x, y$ instead of $i, j$. Also, to avoid double subscripts, for a
vector $h \in {\mathbb R}^n$ we will write $h(x)$ instead of $h_x$ for the value at
index $x$.
(In fact $h$ can also be understood as a function $h \colon S \to {\mathbb R}$. However, in expressions involving matrix algebra we always regard it as a column vector. Similarly, $S$ can be any finite set but in what follows we identify it with ${0, \ldots, n-1}$.)
Matrices are represented by symbols such as ${\mathbf P}$ and ${\mathbf Q}$. Analogous to
the vector case, the $(x,y)$-th element of matrix ${\mathbf Q}$ is written
${\mathbf Q}(x, y)$
rather than ${\mathbf Q}_{x y}$. A nonnegative $n \times n$ matrix ${\mathbf
Q}$ is
called irreducible if, for any $(x, y) \in S \times S$, there exists an
integer $m$ such that ${\mathbf Q}^m(x, y) > 0$. It is called primitive if there
exists an integer $m$ such that ${\mathbf Q}^m(x, y) > 0$ for all $(x, y) \in S \times
S$.
A positive integer $d$ is the period of $x \in S$
if it is the greatest common divisor of all $m$'s such that ${\mathbf Q}^m(x, x) > 0$.
If ${\mathbf Q}$ is irreducible, then all $x$'s in $S$ have the same period,
which is called the period of the matrix ${\mathbf Q}$.
${\mathbf Q}$ is called aperiodic if its period is one.
A nonnegative matrix is irreducible and aperiodic if and only if it is primitive.
Let ${\mathbf P}$ be a stochastic $n \times n$ matrix and let ${X_t}$ be
a Markov process with transition probabilities ${\mathbf P}$. That is, ${X_t}$ is a Markov process on $S$ satisfying
$$
\mathbb P [ X_{t+1} = y \mid {\mathcal F}_t]
= {\mathbf P}(X_t, y)
$$
for all $y \in S$. Here ${{\mathcal F}_t}$ is the natural filtration generated by
${X_t}$ and the equality holds almost surely on an underlying probability
space $(\Omega, {\mathcal F}, \mathbb P)$.
A martingale with respect to ${{\mathcal F}_t}$ is a real-valued stochastic
process on $(\Omega, {\mathcal F}, \mathbb P)$ satisfying $E[|M_t|] < \infty$ and $E[M_{t+1} \mid
{\mathcal F}_t] = M_t$ for all $t$.
A multiplicative functional generated by ${X_t}$ is a real-valued stochastic
process ${M_t}$ satisfying $M_0 > 0$ and
$$
\frac{M_{t+1}}{M_t} = {\mathbf M}(X_t, X_{t+1})
$$
for some strictly positive $n \times n$ matrix ${\mathbf M}$.
If, in addition,
$$
E[ {\mathbf M}(X_t, X_{t+1}) \mid {\mathcal F}_t] = 1,
$$
then ${M_t}$ is clearly a martingale. Given its construction as a product of factors ${\mathbf M}(X_t, X_{t+1})$, it is sometimes called a
multiplicative martingale.
If we write
$$
\ln M_{t+1} - \ln M_t = {\mathbf G}(X_t, X_{t+1})
$$
where
$$
{\mathbf G}(X_t, X_{t+1}) := \ln {\mathbf M}(X_t, X_{t+1}),
$$
then ${\mathbf G}(x, y)$ can be interpreted as the growth rate of ${M_t}$ at state pair $(x, y)$.
A likelihood ratio process is a multiplicative martingale ${M_t}$ with initial condition $M_0 = 1$. From this initial condition and the martingale property it is easy to show that
$$
E[M_t] = E[M_t \mid {\mathcal F}_0] = 1
$$
for all $t$.
Martingale decomposition
Let ${\mathbf P}$ be a stochastic matrix, let ${\mathbf M}$ be a positive $n
\times n$ matrix and let ${M_t}$ be the multiplicative functional defined
above. Assume that ${\mathbf P}$ is irreducible. Let $\widetilde {\mathbf P}$ be defined by
$$
\widetilde {\mathbf P} (x, y) = {\mathbf M}(x, y) {\mathbf P}(x, y)
\qquad ((x, y) \in S \times S).
$$
Using the assumptions that ${\mathbf P}$ is irreducible and ${\mathbf M}$ is positive, it can be
shown that $\widetilde {\mathbf P}$ is also irreducible.
By the Perron-Frobenius theorem, there exists for $\widetilde {\mathbf P}$ a unique eigenpair $(\lambda,
e) \in {\mathbb R} \times {\mathbb R}^n$ such that $\lambda$ and all elements of $e$ are strictly positive. Letting $\eta := \log \lambda$, we have
$$
\widetilde {\mathbf P} e = \exp(\eta) e.
$$
Now define $n \times n$ matrix $\widetilde {\mathbf M}$ by
$$
\widetilde {\mathbf M}(x, y) := \exp(- \eta) {\mathbf M}(x, y) \frac{e(y)}{e(x)}.
$$
Note that $\widetilde {\mathbf M}$ is also strictly positive. By construction, for each $x \in
S$ we have
\begin{align}
\sum_{y \in S} \widetilde {\mathbf M}(x, y) {\mathbf P}(x, y)
& = \sum_{y \in S} \exp(- \eta) {\mathbf M}(x, y) \frac{e(y)}{e(x)} {\mathbf P}(x, y)
\
& = \exp(- \eta) \frac{1}{e(x)} \sum_{y \in S} \widetilde {\mathbf P}(x, y) e(y)
\
& = \exp(- \eta) \frac{1}{e(x)} \widetilde {\mathbf P} e(x) = 1.
\end{align}
Now let ${\widetilde M_t}$ be the multiplicative functional defined by
$$
\frac{\widetilde M_{t+1}}{\widetilde M_t} = \widetilde {\mathbf M}(X_t, X_{t+1})
\quad \text{and} \quad
\widetilde M_0 = 1.
$$
In view of our preceding calculations, we have
$$
E
\left[
\frac{\widetilde M_{t+1}}{\widetilde M_t}
\Bigm| {\mathcal F}_t
\right]
= E[ \widetilde {\mathbf M}(X_t, X_{t+1}) \mid {\mathcal F}_t]
= \sum_{y \in S} \widetilde {\mathbf M}(X_t, y) {\mathbf P}(X_t, y) = 1.
$$
Hence ${\widetilde M_t}$ is a likelihood ratio process.
By reversing the construction of $\widetilde {\mathbf M}$ given above, we can write
$$
{\mathbf M}(x, y) = \exp( \eta) \widetilde {\mathbf M}(x, y) \frac{e(x)}{e(y)}
$$
and hence
$$
\frac{M_{t+1}}{M_t}
=
\exp( \eta)
\frac{e(X_t)}{e(X_{t+1})}
\frac{\widetilde M_{t+1}}{\widetilde M_t} .
$$
In this equation we have decomposed the original multiplicative functional
into the product of
a nonstochastic component $\exp( \eta)$,
a stationary sequence $e(X_t)/e(X_{t+1})$, and
the factors $\widetilde M_{t+1}/\widetilde M_t$ of a likelihood ratio process.
Simulation strategy
Let $x_t$ be the index of the Markov state at time $t$ and let ${ x_0, x_1, \ldots, x_T}$ be a simulation of the Markov process for ${X_t}$.
We can use the formulas above easily to generate simulations of the multiplicative functional $M_t$ and of the positive multiplicative martingale $\widetilde M_t$.
Forecasting formulas
Let ${M_t}$ be the multiplicative functional described above with transition
matrix ${\mathbf P}$ and matrix ${\mathbf M}$ defining the multiplicative increments. We can
use $\widetilde {\mathbf P}$ to forecast future observations of ${M_t}$. In particular,
we have the relation
$$
E[ M_{t+j} \mid X_t = x]
= M_t \sum_{y \in S} \widetilde {\mathbf P}^j(x, y)
\qquad (x \in S).
$$
This follows from the definition of ${M_t}$, which allows us to write
$$
M_{t+j} = M_t {\mathbf M}(X_t, X_{t+1}) \cdots {\mathbf M}(X_{t+j-1}, X_{t+j}).
$$
Taking expectations and conditioning on $X_t = x$ gives
\begin{align}
E[ M_{t+j} \mid X_t = x]
& = \sum_{(x_1, \ldots, x_j)}
M_t {\mathbf M}(x, x_1) \cdots {\mathbf M}(x_{j-1}, x_j)
{\mathbf P}(x, x_1) \cdots {\mathbf P}(x_{j-1}, x_j)
\
& = \sum_{(x_1, \ldots, x_j)}
M_t \widetilde {\mathbf P}(x, x_1) \cdots \widetilde {\mathbf P}(x_{j-1}, x_j)
\
& = M_t \sum_{y \in S} \widetilde {\mathbf P}^j(x, y) .
\end{align}
Implementation
The MultFunctionalFiniteMarkov class
implements multiplicative functionals driven by finite state Markov chains.
Here we briefly demonstrate how to use it.
End of explanation
mc.is_irreducible
# Growth rate matrix
G = [[-1, 0],
[0.5, 1]]
Explanation: Clearly, this Markov chain is irreducible:
End of explanation
mf = MultFunctionalFiniteMarkov(mc, G, M_inits=100)
mf.M_matrix
Explanation: Create a MultFunctionalFiniteMarkov instance:
End of explanation
mf.exp_eta
Explanation: The dominant eigenvalue, denoted $\exp(\eta)$ above, of $\widetilde P$ is
End of explanation
mf.eta
Explanation: The value $\eta$ is
End of explanation
mf.e
Explanation: The (normalized) dominant eigenvector $e$ of $\widetilde P$ is
End of explanation
ts_length = 10
res = mf.simulate(ts_length)
Explanation: Let us simulate our MultFunctionalFiniteMarkov:
End of explanation
exp_eta_geo_series = np.empty_like(res.M)
exp_eta_geo_series[0] = 1
exp_eta_geo_series[1:] = mf.exp_eta
np.cumprod(exp_eta_geo_series, out=exp_eta_geo_series)
M_2 = res.M[0] * res.M_tilde * mf.e[res.X[0]] * exp_eta_geo_series / mf.e[res.X]
M_2
M_2 - res.M
Explanation: The simulation results are contained in res.
Let's check that M and M_tilde satisfy the identity from their definition (up to numerical errors).
End of explanation
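As a small numerical illustration of the forecasting formula stated earlier, $E[ M_{t+j} \mid X_t = x] = M_t \sum_{y \in S} \widetilde {\mathbf P}^j(x, y)$ (a sketch only; it relies on the P_tilde attribute of the instance, which is also used later in this notebook):
# j-step-ahead conditional forecast of M, starting from the last simulated state.
j = 5
P_tilde_j = np.linalg.matrix_power(mf.P_tilde, j)
res.M[-1] * P_tilde_j[res.X[-1], :].sum()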
ts_length = 120
num_reps = 100
res = mf.simulate(ts_length, num_reps=num_reps)
ylim = (0, 50)
fig, ax = plt.subplots(figsize=(8,5))
for i in range(num_reps):
ax.plot(res.M_tilde[i], color='k', alpha=0.5)
ax.set_xlim(0, ts_length)
ax.set_ylim(*ylim)
ax.set_title(r'{0} sample paths of $\widetilde M_t$'.format(num_reps))
plt.show()
Explanation: Likelihood ratio processes
A likelihood ratio process is a multiplicative martingale with mean $1$.
A multiplicative martingale process ${\widetilde M_t }_{t=0}^\infty$ that starts from $\widetilde M_0 = 1$ is a likelihood ratio process.
Evidently, a likelihood ratio process satisfies
$$ E [\widetilde M_t \mid {\mathfrak F}_0] = 1 .$$
Hansen and Sargent (2017) point out that likelihood ratio processes have the following peculiar property:
Although $E{\widetilde M}_{j} = 1$ for each $j$, ${{\widetilde M}_{j} : j=1,2,... }$ converges almost surely to zero.
The following graph, and also one at the end of this notebook, illustrate the peculiar property by reporting simulations of many sample paths of a ${\widetilde M_t }_{t=0}^\infty$ process.
End of explanation
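As a quick, noisy sanity check of the martingale property in the simulation above (cross-sectional averages over only 100 replications, so expect values near 1 rather than exactly 1):
# By the likelihood-ratio property, these averages should hover around 1
# even though most individual paths decay toward zero.
res.M_tilde[:, [1, 30, 60, 119]].mean(axis=0)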
# Transition probability matrix
P = [[0.4, 0.6],
[0.2, 0.8]]
# Instance of MarkovChain from quantecon.markov
mc = MarkovChain(P)
# Stochastic discount rate matrix
G_S = [[-0.02, -0.03],
[-0.01, -0.04]]
# Dividend growth rate matrix
G_d = [[0.01, 0.02],
[0.005, 0.02]]
# AssetPricingMultFiniteMarkov instance
ap = AssetPricingMultFiniteMarkov(mc, G_S, G_d)
Explanation: We revisit the peculiar sample path property at the end of this notebook.
Stochastic discount factor and exponentially changing asset payouts
Define a matrix ${\sf S}$ whose $(x, y)$th element is ${\sf S}(x,y) = \exp(G_S(x,y))$, where $G_S(x,y)$ is a stochastic discount rate
for moving from state $x$ at time $t$ to state $y$ at time $t+1$.
A stochastic discount factor process ${S_t}_{t=0}^\infty$ is governed by the multiplicative functional:
$$
{\frac {S_{t+1}}{S_t}} = \exp[ G_S(X_t, X_{t+1} ) ] = {\sf S}(X_t, X_{t+1}).
$$
Define a matrix ${\sf D}$ whose $(x,y)$th element is ${\sf D}(x,y) = \exp(G_d(x,y))$.
A non-negative payout or dividend process ${d_t}_{t=0}^\infty$ is governed by the multiplicative functional:
$$
{\frac {d_{t+1}}{d_t}} = \exp\left[ G_d(X_t,X_{t+1}) \right] = {\sf D}(X_t, X_{t+1}).
$$
Let $p_t$ be the price at the beginning of period $t$ of a claim to the stochastically growing or shrinking stream of payouts
${d_{t+j}}_{j=0}^\infty$.
It satisfies
$$
p_t = E\left[\frac{S_{t+1}}{S_t} (d_t + p_{t+1}) \Bigm| {\mathfrak F}_t\right] ,
$$
or
$$
\frac{p_t}{d_t} =
E\left[\frac{S_{t+1}}{S_t}
\left(1 + \frac{d_{t+1}}{d_t} \frac{p_{t+1}}{d_{t+1}}\right)
\Bigm| {\mathfrak F}_t\right] ,
$$
where the time $t$ information set ${\mathfrak F}_t$ includes $X_t, S_t, d_t$.
Guessing that the price-dividend ratio $\frac{p_t}{d_t}$ is a function of the Markov state $X_t$ only, and letting
it equal $v(x)$ when $X_t = x$, write the preceding equation as
$$ v(x) = \sum_{y \in S} P(x,y) \left[ {\sf S}(x,y) \mathbf{1} + {\sf S}(x,y) {\sf D}(x,y) v(y) \right] $$
or
$$
v = c + \widetilde{P} v ,
$$
where $c = \widehat{P} \mathbf{1}$ is by construction a nonnegative vector and we have defined
the nonnegative matrices $\widetilde{P} \in \mathbb{R}^{n \times n}$ and
$\widehat{P} \in \mathbb{R}^{n \times n}$ by
$$
\begin{aligned}
\widetilde{P}(x,y) &= P(x,y) {\sf S}(x,y) {\sf D}(x,y), \
\widehat{P}(x,y) &= P(x,y) {\sf S}(x,y).
\end{aligned}
$$
The equation $v = \widetilde{P} v + c$ has a nonnegative solution
for any nonnegative vector $c$ if and only if
all the eigenvalues of $\widetilde{P}$ are smaller than $1$ in modulus.
A sufficient condition for existence of a nonnegative solution is that all the column sums, or all the row sums, of $\widetilde{P}$ are less than one, which holds when $G_S + G_d \ll 0$. This condition describes a sense in which discounting counteracts growth in dividends.
Given a solution $v$, the price-dividend ratio is a stationary process that is a fixed function of the Markov state:
$$
\frac{p_t}{d_t} = v(x) \text{ when $X_t = x$}.
$$
Meanwhile, both the asset price process and the dividend process are multiplicative functionals that experience either multiplicative growth or decay.
Implementation
The AssetPricingMultFiniteMarkov class
implements the asset pricing model with the specification of the stochastic discount factor process described above.
Below is an example of how to use the class.
Please note that the stochastic discount rate matrix $G_S$ and the payout growth rate matrix $G_d$ are specified independently.
In the Lucas asset pricing model to be described below, the matrix $G_S$ is a function
of the payoff growth rate matrix $G_d$ and another parameter $\gamma$ that is a coefficient of relative risk aversion in the utility function of a representative consumer,
as well as the discount rate $\delta$.
End of explanation
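To connect the algebra above to numbers, here is a rough, hand-rolled solution of $v = c + \widetilde{P} v$ built directly from P, G_S and G_d (all names below are our own; if the class follows the same derivation, the result should be comparable to ap.v shown later):
# Hand-rolled check of the price-dividend equation v = c + P_tilde v (illustrative only).
P_arr = np.asarray(P)
S_mat = np.exp(np.asarray(G_S))                     # the matrix S(x, y)
D_mat = np.exp(np.asarray(G_d))                     # the matrix D(x, y)
P_hat_chk = P_arr * S_mat                           # P_hat(x, y) = P(x, y) S(x, y)
P_tilde_chk = P_hat_chk * D_mat                     # P_tilde(x, y) = P(x, y) S(x, y) D(x, y)
c_vec = P_hat_chk.sum(axis=1)                       # c = P_hat 1
np.linalg.solve(np.eye(len(c_vec)) - P_tilde_chk, c_vec)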
ap.mf_S.M_tilde_matrix
ap.mf_d.M_tilde_matrix
Explanation: (1) Display the $\widetilde M$ matrices for $S_t$ and $d_t$.
End of explanation
ts_length = 250
res = ap.simulate(ts_length)
paths = [res.S, res.d]
labels = [r'$S_t$', r'$d_t$']
titles = ['Sample path of ' + label for label in labels]
loc = 4
fig, axes = plt.subplots(2, 1, figsize=(8,10))
for ax, path, label, title in zip(axes, paths, labels, titles):
ax.plot(path, label=label)
ax.set_title(title)
ax.legend(loc=loc)
plt.show()
Explanation: (2) Plot sample paths of $S_t$ and $d_t$.
End of explanation
print("price-dividend ratio in different Markov states = {0}".format(ap.v))
Explanation: (2) Print $v$.
End of explanation
paths = [res.p, res.d]
labels = [r'$p_t$', r'$d_t$']
titles = ['Sample path of ' + label for label in labels]
fig, axes = plt.subplots(2, 1, figsize=(8,10))
for ax, path, label, title in zip(axes, paths, labels, titles):
ax.plot(path, label=label)
ax.set_title(title)
ax.legend(loc=loc)
plt.show()
Explanation: (3) Plot sample paths of $p_t$ and $d_t$.
End of explanation
# Change G_s[0, 1] from -0.03 to -1
G_S_2 = [[-0.02, -1],
[-0.01, -0.04]]
ap_2 = AssetPricingMultFiniteMarkov(mc, G_S_2, G_d)
ap_2.v
res_2 = ap_2.simulate(ts_length)
paths = [res_2.p, res_2.d]
labels = [r'$p_t$', r'$d_t$']
titles = ['Sample path of ' + label for label in labels]
fig, axes = plt.subplots(2, 1, figsize=(8,10))
for ax, path, label, title in zip(axes, paths, labels, titles):
ax.plot(path, label=label)
ax.set_title(title)
ax.legend(loc=loc)
plt.show()
Explanation: (5) Experiment with a different $G_S$ matrix.
End of explanation
mu_C = .005 # mean of quarterly per capita consumption growth
sigma_C = .005 # standard deviation of quarterly per capita consumption growth
rho = .25 # persistence of per capita quarterly consumption growth
# standard deviation of the underlying noise distribution
sigma = sigma_C * np.sqrt(1 - rho**2)
m = 2 # number of standard deviations you would like the gridded vector y to cover
n = 5 # number of points in the discretization
y, P = tauchen(rho, sigma, m, n)
mc = MarkovChain(P)
y += mu_C # consumption growth vector
# Consumption growth matrix
G_C = np.empty((n, n))
G_C[:] = y
# Discount rate
delta = .01
# Coefficient of relative risk aversion
gamma = 20
Explanation: Lucas asset pricing model with growth
As an example of our model of a stochastic discount factor and payout process, we'll adapt a version of the famous Lucas (1978) asset pricing model to have an exponentially growing aggregate consumption endowment.
We'll use CRRA utility
$u(c) = c^{1-\gamma}/(1-\gamma)$,
a common specification in applications of the Lucas model.
So now we let $d_t = C_t$, aggregate consumption, and we let
$$\frac{S_{t+1}}{S_t} = \exp(-\delta) \left(\frac{C_{t+1}}{C_t} \right)^{-\gamma} ,$$
where $\delta > 0$ is a rate of time preference and $\gamma > 0$ is a coefficient of relative risk aversion.
To obtain this special case of our model, we set
$$ {\sf S}(x, y) = \exp(-\delta) {\sf D}(x, y)^{-\gamma}, $$
where we now interpret ${\sf D}(x, y)$ as the multiplicative rate of growth of the level of aggregate consumption between $t$ and $t+1$ when $X_t = x$ and $X_{t+1} = y$.
Term structure of interest rates
When the Markov state $X_t = x$ at time $t$, the price of a risk-free zero-coupon bond paying one unit of consumption at time $t+j$
is
$$ p_{j,t} = E \left[ \frac{S_{t+j}}{S_t} \Bigm| X_t = x \right]. $$
Let the matrix $\widehat{P}$ be given by $\widehat{P}(x, y) = P(x, y) {\sf S}(x, y)$
and apply the above forecasting formula to deduce
$$
p_{j,t} = \left( \sum_{y \in S} \widehat P^j(x, y) \right).
$$
The yield $R_{jt}$ on a $j$ period risk-free bond satisfies
$$ p_{jt} = \exp(-j R_{jt}) $$
or
$$ R_{jt} = -\frac{\log(p_{jt})}{j}. $$
For a given $t$,
$$ \begin{bmatrix} R_{1t} & R_{2t} & \cdots & R_{Jt} \end{bmatrix} $$
is the term structure of interest rates on risk-free zero-coupon bonds.
Simulating the Lucas asset pricing model
Write $y$ for the process of quarterly per capita consumption growth with mean $\mu_C$.
In the following example,
we assume that $y - \mu_C$ follows a discretized version of an AR(1) process
(while independent of the Markov state),
where the discrete approximation is derived by the routine
tauchen
from quantecon.markov.
End of explanation
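Note that the CRRA specification above pins down the stochastic discount rate matrix as $G_S(x, y) = -\delta - \gamma G_C(x, y)$. A one-line check of that mapping (our own construction; it should agree with lt.G_S printed in the next cells if the class uses the same formula):
# G_S implied by S(x, y) = exp(-delta) * D(x, y)**(-gamma) with D = exp(G_C).
G_S_manual = -delta - gamma * G_C
G_S_manual[0]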
lt = LucasTreeFiniteMarkov(mc, G_C, gamma, delta)
# Consumption growth rates
lt.G_C[0]
# Stochastic discount rates
lt.G_S[0]
Explanation: Create a LucasTreeFiniteMarkov instance:
End of explanation
ts_length = 250
res = lt.simulate(ts_length)
paths = [res.S, res.d]
labels = [r'$S_t$', r'$C_t$']
titles = ['Sample path of ' + label for label in labels]
fig, axes = plt.subplots(2, 1, figsize=(8,10))
for ax, path, label, title in zip(axes, paths, labels, titles):
ax.plot(path, label=label)
ax.set_title(title)
ax.legend(loc=loc)
plt.show()
print("price-dividend ratio in different states = ")
lt.v
paths = [res.p, res.d]
labels = [r'$p_t$', r'$C_t$']
titles = ['Sample path of ' + label for label in labels]
fig, axes = plt.subplots(2, 1, figsize=(8,10))
for ax, path, label, title in zip(axes, paths, labels, titles):
ax.plot(path, label=label)
ax.set_title(title)
ax.legend(loc=loc)
plt.show()
Explanation: Simulate the model:
End of explanation
G_S = lt.G_S
# SDF process as a MultiplicativeFunctional
mf = MultFunctionalFiniteMarkov(mc, G_S)
P_hat = mf.P_tilde
J = 20
# Sequence of price vectors
p = np.empty((J, n))
p[0] = P_hat.dot(np.ones(n))
for j in range(J-1):
p[j+1] = P_hat.dot(p[j])
# Term structure
R = -np.log(p)
R /= np.arange(1, J+1)[:, np.newaxis]
R
Explanation: Plotting the term structure of interest rates
End of explanation
mf.e
hi = np.argsort(mf.e)[0]
lo = np.argsort(mf.e)[-1]
mid = np.argsort(mf.e)[mf.n//2]
states = [hi, mid, lo]
labels = [s + ' state' for s in ['high', 'middle', 'low']]
fig, ax = plt.subplots(figsize=(8,5))
for i, label in zip(states, labels):
ax.plot(np.arange(1, J+1), R[:, i], label=label)
ax.set_xlim((1, J))
ax.legend()
plt.show()
Explanation: The term structure of interest rates R is a sequence (of length J)
of vectors (of length n each).
Instead of plotting the whole R,
we plot the sequences for the "low", "middle", and "high" states.
Here we define those states as follows.
The vector $(p_{jt}|X_t = x)_{x \in S}$, if appropriately rescaled,
converges as $j \to \infty$
to an eigenvector of $\widehat P$ that corresponds to the dominant eigenvalue,
which equals mf.e times some constant.
Thus call the states that correspond to the smallest, largest, and middle values of mf.e
the high, low, and middle states.
End of explanation
# Growth rate matrix
G_C = np.log([[.95 , .975, 1],
[.975, 1 , 1.025],
[1, 1.025, 1.05]])
# MarkovChain instance
P = [[0.1, 0.6, 0.3],
[0.1, 0.5, 0.4],
[0.1, 0.6, 0.3]]
mc = MarkovChain(P)
# Discount rate
delta = .01
# Coefficient of relative risk aversion
gamma = 20
lt = LucasTreeFiniteMarkov(mc, G_C, gamma, delta)
# Price-dividend ratios
lt.v
ts_length = 250
res = lt.simulate(ts_length)
paths = [res.S, res.d]
labels = [r'$S_t$', r'$C_t$']
titles = ['Sample path of ' + label for label in labels]
fig, axes = plt.subplots(2, 1, figsize=(8,10))
for ax, path, label, title in zip(axes, paths, labels, titles):
ax.plot(path, label=label)
ax.set_title(title)
ax.legend(loc=loc)
plt.show()
paths = [res.p, res.d]
labels = [r'$p_t$', r'$C_t$']
titles = ['Sample path of ' + label for label in labels]
fig, axes = plt.subplots(2, 1, figsize=(8,10))
for ax, path, label, title in zip(axes, paths, labels, titles):
ax.plot(path, label=label)
ax.set_title(title)
ax.legend(loc=loc)
plt.show()
Explanation: Another class of examples
Let the elements of ${\sf D}$ (i.e., the multiplicative growth rates of the dividend or consumption process) be, for example,
$$ {\sf D} = \begin{bmatrix} .95 & .975 & 1 \cr
.975 & 1 & 1.025 \cr
1 & 1.025 & 1.05 \end{bmatrix}.$$
Here the realized growth rate depends on both $X_t$ and $X_{t+1}$ -- i.e., the value of the state last period (i) and this period (j).
Here we have imposed symmetry to save parameters, but of course there is no reason to do that.
We can combine this specification with various specifications of $P$ matrices e.g., an "i.i.d." state evolution process would be represented with $P$ in which all rows are identical. Even that simple specification
can produce some interesting outcomes with the above ${\sf D}$.
We'll try this little $3 \times 3$ example with a Lucas model below.
But first a word of caution.
We have to choose values for the consumption growth rate matrix $G_C$ and
the transition matrix $P$ so that pertinent eigenvalues are smaller than one in modulus.
This check is implemented in the code.
End of explanation
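A rough manual version of that eigenvalue check (our own construction, not the check built into the library) might look like this:
# Spectral radius of P_tilde(x, y) = P(x, y) S(x, y) D(x, y) for the 3 x 3 example;
# it needs to be strictly below 1 for the price-dividend equation to have a solution.
P_arr3 = np.asarray(P)
D_mat3 = np.exp(np.asarray(G_C))                    # consumption growth factors
S_mat3 = np.exp(-delta) * D_mat3**(-gamma)          # CRRA stochastic discount factors
np.abs(np.linalg.eigvals(P_arr3 * S_mat3 * D_mat3)).max()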
mf.P
T = 200
num_reps = 10**5
res = mf.simulate(T+1, num_reps=num_reps)
bins = np.linspace(0, 5, num=21)
bins_mid = (bins[:-1] + bins[1:]) / 2
nums_row_col = (3, 2)
xlim = (bins[0], bins[-1])
ylim = (0, 0.6)
width = (bins[0] + bins[-1]) / (len(bins)-1)
ts = [5, 10, 20, 50, 100, 200]
fig, axes = plt.subplots(*nums_row_col, figsize=(12,10))
for i, ax_idx in enumerate(itertools.product(*(range(n) for n in nums_row_col))):
mean = res.M_tilde[:, ts[i]].mean()
hist, _ = np.histogram(res.M_tilde[:, ts[i]], bins=bins)
axes[ax_idx].bar(bins_mid, hist/num_reps, width, align='center')
axes[ax_idx].vlines(mean, ax.get_ylim()[0], ax.get_ylim()[1], "k", ":")
axes[ax_idx].set_xlim(*xlim)
axes[ax_idx].set_ylim(*ylim)
axes[ax_idx].set_title(r'$t = {}$'.format(ts[i]))
plt.show()
Explanation: The peculiar sample path property revisited
Consider again the multiplicative martingale associated with the $5$ state Lucas model studied earlier.
Remember that by construction, this is a likelihood ratio process.
Here we'll simulate a number of paths and build up histograms of $\widetilde M_t$ at various values of $t$.
These histograms should help us understand what is going on to generate the peculiar property mentioned above.
As $t \rightarrow +\infty$, notice that
more and more probability mass piles up near zero, $\ldots$ but
a longer and longer thin right tail emerges.
End of explanation |
2,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Get the Data
Step2: Now let's get the movie titles
Step3: We can merge them together
Step4: EDA
Let's explore the data a bit and get a look at some of the best rated movies.
Visualization Imports
Step5: Let's create a ratings dataframe with average rating and number of ratings
Step6: Now set the number of ratings column
Step7: Now a few histograms
Step8: Okay! Now that we have a general idea of what the data looks like, let's move on to creating a simple recommendation system
Step9: Most rated movie
Step10: Let's choose two movies
Step11: Now let's grab the user ratings for those two movies
Step12: We can then use corrwith() method to get correlations between two pandas series
Step13: Let's clean this by removing NaN values and using a DataFrame instead of a series
Step14: Now if we sort the dataframe by correlation, we should get the most similar movies, however note that we get some results that don't really make sense. This is because there are a lot of movies only watched once by users who also watched star wars (it was the most popular movie).
Step15: Let's fix this by filtering out movies that have less than 100 reviews (this value was chosen based off the histogram from earlier).
Step16: Now sort the values and notice how the titles make a lot more sense
Step17: Now the same for the comedy Liar Liar | Python Code:
import numpy as np
import pandas as pd
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Recommender Systems with Python
Welcome to the code notebook for Recommender Systems with Python. In this lecture we will develop basic recommendation systems using Python and pandas. There is another notebook: Advanced Recommender Systems with Python. That notebook goes into more detail with the same data set.
In this notebook, we will focus on providing a basic recommendation system by suggesting items that are most similar to a particular item, in this case, movies. Keep in mind, this is not a true, robust recommendation system; to describe it more accurately, it just tells you which movies/items are most similar to your movie choice.
There is no project for this topic; instead, you have the option to work through the advanced lecture version of this notebook (totally optional!).
Let's get started!
Import Libraries
End of explanation
column_names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv('u.data', sep='\t', names=column_names)
df.head()
Explanation: Get the Data
End of explanation
movie_titles = pd.read_csv("Movie_Id_Titles")
movie_titles.head()
Explanation: Now let's get the movie titles:
End of explanation
df = pd.merge(df,movie_titles,on='item_id')
df.head()
Explanation: We can merge them together:
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline
Explanation: EDA
Let's explore the data a bit and get a look at some of the best rated movies.
Visualization Imports
End of explanation
df.groupby('title')['rating'].mean().sort_values(ascending=False).head()
df.groupby('title')['rating'].count().sort_values(ascending=False).head()
ratings = pd.DataFrame(df.groupby('title')['rating'].mean())
ratings.head()
Explanation: Let's create a ratings dataframe with average rating and number of ratings:
End of explanation
ratings['num of ratings'] = pd.DataFrame(df.groupby('title')['rating'].count())
ratings.head()
Explanation: Now set the number of ratings column:
End of explanation
plt.figure(figsize=(10,4))
ratings['num of ratings'].hist(bins=70)
plt.figure(figsize=(10,4))
ratings['rating'].hist(bins=70)
sns.jointplot(x='rating',y='num of ratings',data=ratings,alpha=0.5)
Explanation: Now a few histograms:
End of explanation
moviemat = df.pivot_table(index='user_id',columns='title',values='rating')
moviemat.head()
Explanation: Okay! Now that we have a general idea of what the data looks like, let's move on to creating a simple recommendation system:
Recommending Similar Movies
Now let's create a matrix that has the user ids on one axis and the movie titles on the other axis. Each cell will then consist of the rating the user gave to that movie. Note there will be a lot of NaN values, because most people have not seen most of the movies.
End of explanation
ratings.sort_values('num of ratings',ascending=False).head(10)
Explanation: Most rated movie:
End of explanation
ratings.head()
Explanation: Let's choose two movies: starwars, a sci-fi movie. And Liar Liar, a comedy.
End of explanation
starwars_user_ratings = moviemat['Star Wars (1977)']
liarliar_user_ratings = moviemat['Liar Liar (1997)']
starwars_user_ratings.head()
Explanation: Now let's grab the user ratings for those two movies:
End of explanation
similar_to_starwars = moviemat.corrwith(starwars_user_ratings)
similar_to_liarliar = moviemat.corrwith(liarliar_user_ratings)
Explanation: We can then use corrwith() method to get correlations between two pandas series:
End of explanation
corr_starwars = pd.DataFrame(similar_to_starwars,columns=['Correlation'])
corr_starwars.dropna(inplace=True)
corr_starwars.head()
Explanation: Let's clean this by removing NaN values and using a DataFrame instead of a series:
End of explanation
corr_starwars.sort_values('Correlation',ascending=False).head(10)
Explanation: Now if we sort the dataframe by correlation, we should get the most similar movies; however, note that we get some results that don't really make sense. This is because there are a lot of movies that were only watched once by users who also watched Star Wars (it was the most popular movie).
End of explanation
corr_starwars = corr_starwars.join(ratings['num of ratings'])
corr_starwars.head()
Explanation: Let's fix this by filtering out movies that have less than 100 reviews (this value was chosen based off the histogram from earlier).
End of explanation
corr_starwars[corr_starwars['num of ratings']>100].sort_values('Correlation',ascending=False).head()
Explanation: Now sort the values and notice how the titles make a lot more sense:
End of explanation
corr_liarliar = pd.DataFrame(similar_to_liarliar,columns=['Correlation'])
corr_liarliar.dropna(inplace=True)
corr_liarliar = corr_liarliar.join(ratings['num of ratings'])
corr_liarliar[corr_liarliar['num of ratings']>100].sort_values('Correlation',ascending=False).head()
Explanation: Now the same for the comedy Liar Liar:
End of explanation |
2,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive Geovisualization of Multimodal Freight Transport Network Criticality
Bramka Arga Jafino
Delft University of Technology
Faculty of Technology, Policy and Management
An introduction note
This notebook provides an interactive tool to geospatially visualize the results of Bangladesh's multimodal freight transport network criticality. The interactivity doesn't work if you open this notebook from the Github page. Instead, in order to run the interactivity, you have to fork this notebook (as well as all the corresponding libraries and files used in this notebook) to your local computer, then run the jupyter notebook on it.
1. Import all required module and files
Step1: 2. Interactive visualization
Step2: There are three elements that can be adjusted in this interactive visualization | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpld3 import plugins, utils
import geopandas as gp
import pandas as pd
from shapely.wkt import loads
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
#Modules developed for this project
from transport_network_modeling import network_visualization as net_v
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual, IntSlider
import ipywidgets as widgets
#import criticality results
result_df_loc = r'./criticality_results/result_interdiction_1107noz2_v03.csv'
result_df = pd.read_csv(result_df_loc)
#import district shapefile for background
district_gdf_loc = r'./model_input_data/BGD_adm1.shp'
district_gdf = gp.read_file(district_gdf_loc)
#alter the 'geometry' string of the dataframe into geometry object
result_df['geometry'] = result_df['geometry'].apply(loads)
#create geodataframe from criticality results dataframe
crs = {'init': 'epsg:4326'}
result_gdf = gp.GeoDataFrame(result_df, crs=crs, geometry=result_df['geometry'])
#record all metrics in a list
all_metric = ['m1_01', 'm1_02', 'm2_01', 'm2_02', 'm3_01', 'm3_02', 'm4_01', 'm4_02', 'm5_01', 'm6_01',
'm7_01', 'm7_02', 'm7_03', 'm8_01', 'm8_02', 'm8_03', 'm9_01', 'm10']
#create ranking columns for each metric
for metric in all_metric:
result_gdf[metric + '_rank'] = result_gdf[metric].rank(ascending=False)
Explanation: Interactive Geovisualization of Multimodal Freight Transport Network Criticality
Bramka Arga Jafino
Delft University of Technology
Faculty of Technology, Policy and Management
An introduction note
This notebook provides an interactive tool to geospatially visualize the results of Bangladesh's multimodal freight transport network criticality. The interactivity doesn't work if you open this notebook from the Github page. Instead, in order to run the interactivity, you have to fork this notebook (as well as all the corresponding libraries and files used in this notebook) to your local computer, then run the jupyter notebook on it.
1. Import all required module and files
End of explanation
#create special colormap for the visualization
cmap = plt.get_cmap('YlOrRd')
new_cmap1 = net_v.truncate_colormap(cmap, 0.3, 1)
cmap = plt.get_cmap('Blues')
new_cmap2 = net_v.truncate_colormap(cmap, 0.3, 1)
Explanation: 2. Interactive visualization
End of explanation
widgets.interact_manual(net_v.plot_interactive, rank=widgets.IntSlider(min=50, max=500, step=10, value=50),
metric=widgets.Dropdown(options=all_metric, value='m1_01'),
show_division=widgets.Checkbox(value=False), result_gdf=fixed(result_gdf),
cmaps=fixed([new_cmap1, new_cmap2]), district_gdf=fixed(district_gdf));
Explanation: There are three elements that can be adjusted in this interactive visualization:
1. First, you can select the metric whose results you want to display from the 'metric' dropdown list.
2. Second, you can select the top n links with the highest criticality scores to be highlighted by adjusting the 'rank' slider.
3. Third, you can select whether you want to also display Bangladesh's administrative boundary by turning on the 'show_division' toggle button.
The red links represent Bangladesh's road network while the blue links represent Bangladesh's waterway network.
End of explanation |
2,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sufficient statistics for online linear regression
First, I need to recreate the data generating function from here in Python. See the code for plot_xy, plot_abline, and SimpleOnlineLinearRegressor on GitHub.
Step1: Data is not really linear, but let's just do what the exercise tells us to do. Thus, our model is
\begin{equation}
y_i \sim \mathcal{N}(w_0 + w_1x_i, \sigma^2),
\end{equation}
or written in vector notation,
\begin{equation}
\mathbf{y} \sim \mathcal{N}\left(w_0\mathbf{1} + w_1\mathbf{x}, \sigma^2I\right).
\end{equation}
Thus, we have that
\begin{align}
p(\mathbf{y} \mid w_0,w_1,\sigma^2,\mathbf{x}) &= \prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{1}{2\sigma^2}\left(y_i - w_0 - w_1x_i\right)^2\right) \
l(w_0,w_1,\sigma^2) = \log p(\mathbf{y} \mid w_0,w_1,\sigma^2,\mathbf{x}) &= -\frac{N}{2}\log(2\pi) - \frac{N}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^N\left(y_i - w_0 - w_1x_i\right)^2.
\end{align}
Let us try to maximize the log-likelihood. We first solve for $w_0$.
\begin{align}
\frac{\partial{l}}{\partial w_0} = \frac{1}{\sigma^2}\sum_{i=1}^N \left(y_i - w_0 - w_1x_i\right)
= \frac{1}{\sigma^2}\left(-Nw_0 + \sum_{i=1}^N y_i - w_1 \sum_{i=1}^Nx_i\right).
\end{align}
Setting $\frac{\partial{l}}{\partial w_0} = 0$ and solving for $w_0$, we find that
\begin{equation}
\hat{w}_0 = \frac{\sum_{i=1}^N y_i}{N} - \hat{w}_1\frac{\sum_{i=1}^N x_i}{N} = \bar{y} - \hat{w}_1\bar{x}
\end{equation}
Next, we solve for $w_1$. Taking the derivative, we have
\begin{align}
\frac{\partial{l}}{\partial w_1} = \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(y_i - w_0 - w_1x_i\right)
= \frac{1}{\sigma^2}\sum_{i=1}^N\left(x_iy_i - w_0x_i - w_1x_i^2\right).
\end{align}
Setting $\frac{\partial{l}}{\partial w_1} = 0$ and substituting $\hat{w}_0$ for $w_0$, we have that
\begin{align}
0 &= \frac{1}{\sigma^2}\sum_{i=1}^N\left(x_iy_i - (\bar{y} - \hat{w}_1\bar{x})x_i - \hat{w}_1x_i^2\right) \
&= \sum_{i=1}^N\left(x_iy_i - x_i\bar{y}\right) -\hat{w}_1\sum_{i=1}^N\left(x_i^2 - x_i\bar{x}\right).
\end{align}
Since $\sum_{i=1}^N x_i = N\bar{x}$, we have that
\begin{align}
0 &= \sum_{i=1}^N\left(x_iy_i - \bar{x}\bar{y}\right) -\hat{w}_1\sum_{i=1}^N\left(x_i^2 - \bar{x}^2\right) \
\hat{w}_1 &= \frac{\sum_{i=1}^N\left(x_iy_i - \bar{x}\bar{y}\right)}{\sum_{i=1}^N\left(x_i^2 - \bar{x}^2\right)}
= \frac{\sum_{i=1}^N\left(x_iy_i - x_i\bar{y} -\bar{x}y_i + \bar{x}\bar{y}\right)}{\sum_{i=1}^N\left(x_i^2 - 2x_i\bar{x} + \bar{x}^2\right)} \
&= \frac{\sum_{i=1}(x_i - \bar{x})(y_i-\bar{y})}{\sum_{i=1}(x_i - \bar{x})^2}
= \frac{\frac{1}{N}\sum_{i=1}(x_i - \bar{x})(y_i-\bar{y})}{\frac{1}{N}\sum_{i=1}(x_i - \bar{x})^2},
\end{align}
which is just the MLE for the covariance of $X$ and $Y$ over the variance of $X$. This can also be written as
\begin{equation}
\hat{w}_1 = \frac{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})(y_i-\bar{y})}{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})^2}
= \frac{\sum_{i=1}^N x_iy_i - \frac{1}{N}\left(\sum_{i=1}^Nx_i\right)\left(\sum_{i=1}^Ny_i\right)}{\sum_{i=1}^N x_i^2 - \frac{1}{N}\left(\sum_{i=1}^Nx_i\right)^2}.
\end{equation}
Finally, solving for $\sigma^2$, we have that
\begin{align}
\frac{\partial{l}}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^N\left(y_i - \left(w_0 +w_1x_i\right)\right)^2.
\end{align}
Setting this equal to $0$, substituting for $w_0$ and $w_1$, we have that
\begin{align}
\hat{\sigma}^2 &= \frac{1}{N}\sum_{i=1}^N\left(y_i - \left(\hat{w}_0 +\hat{w}_1x_i\right)\right)^2
= \frac{1}{N}\sum_{i=1}^N\left(y_i^2 - 2y_i\left(\hat{w}_0 +\hat{w}_1x_i\right) + \left(\hat{w}_0 +\hat{w}_1x_i\right)^2\right) \
&= \hat{w}_0^2 + \frac{1}{N}\left(\sum_{i=1}^Ny_i^2 - 2\hat{w}_0\sum_{i=1}^Ny_i - 2\hat{w}_1\sum_{i=1}^N x_iy_i + 2\hat{w}_0\hat{w}_1\sum_{i=1}^N x_i + \hat{w}_1^2\sum_{i=1}^N x_i^2\right).
\end{align}
Thus, our sufficient statistics are
\begin{equation}
\left(N, \sum_{i=1}^N x_i, \sum_{i=1}^N y_i,\sum_{i=1}^N x_i^2, \sum_{i=1}^N y_i^2, \sum_{i=1}^N x_iy_i\right).
\end{equation}
Step2: Now, let's verify that the online version comes to the same numbers. | Python Code:
%matplotlib inline
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import pandas as pd
from linreg import *
np.random.seed(2016)
def make_data(N):
X = np.linspace(0, 20, N)
Y = stats.norm.rvs(size=N, loc=-1.5*X + X*X/9, scale=2)
return X, Y
X, Y = make_data(21)
print(np.column_stack((X,Y)))
plot_xy(X, Y)
plt.show()
Explanation: Sufficient statistics for online linear regression
First, I need to recreate the data generating function from here in Python. See the code for plot_xy, plot_abline, and SimpleOnlineLinearRegressor on GitHub.
End of explanation
linReg = SimpleOnlineLinearRegressor()
linReg.fit(X, Y)
## visualize model
plot_xy(X, Y)
plot_abline(linReg.get_params()['slope'], linReg.get_params()['intercept'],
np.min(X) - 1, np.max(X) + 1,
ax=plt.gca())
plt.title("Training data with best fit line")
plt.show()
print(linReg.get_params())
Explanation: Data is not really linear, but let's just do what the exercise tells us to do. Thus, our model is
\begin{equation}
y_i \sim \mathcal{N}(w_0 + w_1x_i, \sigma^2),
\end{equation}
or written in vector notation,
\begin{equation}
\mathbf{y} \sim \mathcal{N}\left(w_0\mathbf{1} + w_1\mathbf{x}, \sigma^2I\right).
\end{equation}
Thus, we have that
\begin{align}
p(\mathbf{y} \mid w_0,w_1,\sigma^2,\mathbf{x}) &= \prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{1}{2\sigma^2}\left(y_i - w_0 - w_1x_i\right)^2\right) \
l(w_0,w_1,\sigma^2) = \log p(\mathbf{y} \mid w_0,w_1,\sigma^2,\mathbf{x}) &= -\frac{N}{2}\log(2\pi) - \frac{N}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^N\left(y_i - w_0 - w_1x_i\right)^2.
\end{align}
Let us try to maximize the log-likelihood. We first solve for $w_0$.
\begin{align}
\frac{\partial{l}}{\partial w_0} = \frac{1}{\sigma^2}\sum_{i=1}^N \left(y_i - w_0 - w_1x_i\right)
= \frac{1}{\sigma^2}\left(-Nw_0 + \sum_{i=1}^N y_i - w_1 \sum_{i=1}^Nx_i\right).
\end{align}
Setting $\frac{\partial{l}}{\partial w_0} = 0$ and solving for $w_0$, we find that
\begin{equation}
\hat{w}_0 = \frac{\sum_{i=1}^N y_i}{N} - \hat{w}_1\frac{\sum_{i=1}^N x_i}{N} = \bar{y} - \hat{w}_1\bar{x}
\end{equation}
Next, we solve for $w_1$. Taking the derivative, we have
\begin{align}
\frac{\partial{l}}{\partial w_1} = \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(y_i - w_0 - w_1x_i\right)
= \frac{1}{\sigma^2}\sum_{i=1}^N\left(x_iy_i - w_0x_i - w_1x_i^2\right).
\end{align}
Setting $\frac{\partial{l}}{\partial w_1} = 0$ and substituting $\hat{w}_0$ for $w_0$, we have that
\begin{align}
0 &= \frac{1}{\sigma^2}\sum_{i=1}^N\left(x_iy_i - (\bar{y} - \hat{w}_1\bar{x})x_i - \hat{w}_1x_i^2\right) \
&= \sum_{i=1}^N\left(x_iy_i - x_i\bar{y}\right) -\hat{w}_1\sum_{i=1}^N\left(x_i^2 - x_i\bar{x}\right).
\end{align}
Since $\sum_{i=1}^N x_i = N\bar{x}$, we have that
\begin{align}
0 &= \sum_{i=1}^N\left(x_iy_i - \bar{x}\bar{y}\right) -\hat{w}_1\sum_{i=1}^N\left(x_i^2 - \bar{x}^2\right) \
\hat{w}_1 &= \frac{\sum_{i=1}^N\left(x_iy_i - \bar{x}\bar{y}\right)}{\sum_{i=1}^N\left(x_i^2 - \bar{x}^2\right)}
= \frac{\sum_{i=1}^N\left(x_iy_i - x_i\bar{y} -\bar{x}y_i + \bar{x}\bar{y}\right)}{\sum_{i=1}^N\left(x_i^2 - 2x_i\bar{x} + \bar{x}^2\right)} \
&= \frac{\sum_{i=1}^N(x_i - \bar{x})(y_i-\bar{y})}{\sum_{i=1}^N(x_i - \bar{x})^2}
= \frac{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})(y_i-\bar{y})}{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})^2},
\end{align}
which is just the MLE for the covariance of $X$ and $Y$ over the variance of $X$. This can also be written as
\begin{equation}
\hat{w}_1 = \frac{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})(y_i-\bar{y})}{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})^2}
= \frac{\sum_{i=1}^N x_iy_i - \frac{1}{N}\left(\sum_{i=1}^Nx_i\right)\left(\sum_{i=1}^Ny_i\right)}{\sum_{i=1}^N x_i^2 - \frac{1}{N}\left(\sum_{i=1}^Nx_i\right)^2}.
\end{equation}
Finally, solving for $\sigma^2$, we have that
\begin{align}
\frac{\partial{l}}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^N\left(y_i - \left(w_0 +w_1x_i\right)\right)^2.
\end{align}
Setting this equal to $0$, substituting for $w_0$ and $w_1$, we have that
\begin{align}
\hat{\sigma}^2 &= \frac{1}{N}\sum_{i=1}^N\left(y_i - \left(\hat{w}_0 +\hat{w}_1x_i\right)\right)^2
= \frac{1}{N}\sum_{i=1}^N\left(y_i^2 - 2y_i\left(\hat{w}_0 +\hat{w}_1x_i\right) + \left(\hat{w}_0 +\hat{w}_1x_i\right)^2\right) \
&= \hat{w}_0^2 + \frac{1}{N}\left(\sum_{i=1}^Ny_i^2 - 2\hat{w}_0\sum_{i=1}^Ny_i - 2\hat{w}_1\sum_{i=1}^N x_iy_i + 2\hat{w}_0\hat{w}_1\sum_{i=1}^N x_i + \hat{w}_1^2\sum_{i=1}^N x_i^2\right).
\end{align}
Thus, our sufficient statistics are
\begin{equation}
\left(N, \sum_{i=1}^N x_i, \sum_{i=1}^N y_i,\sum_{i=1}^N x_i^2, \sum_{i=1}^N y_i^2, \sum_{i=1}^N x_iy_i\right).
\end{equation}
End of explanation
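Before calling the library class, here is a minimal sketch of the update implied by these sufficient statistics; RunningStats is a hypothetical helper for illustration only, not the SimpleOnlineLinearRegressor API from linreg.
# Hypothetical illustration of an online update based on the six sufficient
# statistics derived above (not the SimpleOnlineLinearRegressor implementation).
class RunningStats:
    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0
    def update(self, x_i, y_i):
        # Accumulate the sufficient statistics one observation at a time
        self.n += 1
        self.sx += x_i
        self.sy += y_i
        self.sxx += x_i * x_i
        self.syy += y_i * y_i
        self.sxy += x_i * y_i
    def mle(self):
        # Recover the closed-form MLEs from the accumulated statistics
        w1 = (self.sxy - self.sx * self.sy / self.n) / (self.sxx - self.sx ** 2 / self.n)
        w0 = self.sy / self.n - w1 * self.sx / self.n
        sigma2 = (w0 ** 2
                  + (self.syy - 2 * w0 * self.sy - 2 * w1 * self.sxy
                     + 2 * w0 * w1 * self.sx + w1 ** 2 * self.sxx) / self.n)
        return w0, w1, sigma2
rs = RunningStats()
for x_i, y_i in zip(X, Y):
    rs.update(float(x_i), float(y_i))
print(rs.mle())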
onlineLinReg = SimpleOnlineLinearRegressor()
w_estimates = pd.DataFrame(index=np.arange(2,22), columns=['w0_est', 'w1_est', 'sigma2'], dtype=np.float64)
for i in range(len(Y)):
onlineLinReg.partial_fit(X[i], Y[i])
if i >= 1:
w_estimates.loc[i + 1] = {'w0_est': onlineLinReg.get_params()['intercept'],
'w1_est': onlineLinReg.get_params()['slope'],
'sigma2': onlineLinReg.get_params()['variance']}
print(w_estimates)
print(onlineLinReg.get_params())
plt.figure(figsize=(12,8))
plt.plot(w_estimates.index, w_estimates['w0_est'], 'o',
markeredgecolor='black', markerfacecolor='none', markeredgewidth=1,
label='Intercept estimate')
plt.plot(w_estimates.index, w_estimates['w1_est'], 'o',
markeredgecolor='red', markerfacecolor='none', markeredgewidth=1,
label='Slope estimate')
plt.grid()
plt.ylabel('Estimate')
plt.xlabel('# of data points')
plt.title('Online Linear Regression Estimates')
plt.hlines(onlineLinReg.get_params()['intercept'], xmin=np.min(X), xmax=np.max(X) + 5, linestyle='--',
label='Final intercept estimate')
plt.hlines(onlineLinReg.get_params()['slope'], xmin=np.min(X), xmax=np.max(X) + 5, linestyle='--', color='red',
label='Final slope estimate')
plt.legend(loc='center left', bbox_to_anchor=(1,0.5))
plt.show()
Explanation: Now, let's verify that the online version comes to the same numbers.
End of explanation |
2,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stochastic optimization landscape of a minimal MLP
In this notebook, we will try to better understand how stochastic gradient works. We fit a very simple non-convex model to data generated from a linear ground truth model.
We will also observe how the (stochastic) loss landscape changes when selecting different samples.
Step1: Data is generated from a simple model
Step2: We propose a minimal single hidden layer perceptron model with a single hidden unit and no bias. The model has two tunable parameters $w_1$, and $w_2$, such that
Step4: As in the previous notebook, we define a function to sample from and plot loss landscapes.
Step5: risks[k, i, j] holds loss value $\ell(f(w_1^{(i)} , w_2^{(j)}, x_k), y_k)$ for a single data point $(x_k, y_k)$;
empirical_risk[i, j] corresponds to the empirical risk averaged over the training data points
Step6: Let's define our train loop and train our model
Step7: We now plot
Step8: Observe and comment.
Exercises
Step9: Utilities to generate the slides figures | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
from torch.nn import Parameter
from torch.nn.functional import mse_loss
from torch.autograd import Variable
from torch.nn.functional import relu
Explanation: Stochastic optimization landscape of a minimal MLP
In this notebook, we will try to better understand how stochastic gradient works. We fit a very simple non-convex model to data generated from a linear ground truth model.
We will also observe how the (stochastic) loss landscape changes when selecting different samples.
End of explanation
def sample_from_ground_truth(n_samples=100, std=0.1):
x = torch.FloatTensor(n_samples, 1).uniform_(-1, 1)
epsilon = torch.FloatTensor(n_samples, 1).normal_(0, std)
y = 2 * x + epsilon
return x, y
n_samples = 100
std = 3
x, y = sample_from_ground_truth(n_samples=100, std=std)
Explanation: Data is generated from a simple model:
$$y= 2x + \epsilon$$
where:
$\epsilon \sim \mathcal{N}(0, 3)$
$x \sim \mathcal{U}(-1, 1)$
End of explanation
class SimpleMLP(nn.Module):
def __init__(self, w=None):
super(SimpleMLP, self).__init__()
self.w1 = Parameter(torch.FloatTensor((1,)))
self.w2 = Parameter(torch.FloatTensor((1,)))
if w is None:
self.reset_parameters()
else:
self.set_parameters(w)
def reset_parameters(self):
self.w1.uniform_(-.1, .1)
self.w2.uniform_(-.1, .1)
def set_parameters(self, w):
with torch.no_grad():
self.w1[0] = w[0]
self.w2[0] = w[1]
def forward(self, x):
return self.w1 * relu(self.w2 * x)
Explanation: We propose a minimal single hidden layer perceptron model with a single hidden unit and no bias. The model has two tunable parameters $w_1$, and $w_2$, such that:
$$f(x) = w_1 \cdot \sigma(w_2 \cdot x)$$
where $\sigma$ is the ReLU function.
End of explanation
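As a quick sketch, we can check that the module's forward pass matches this formula by instantiating it at a fixed $(w_1, w_2)$ and computing $w_1 \cdot \mathrm{relu}(w_2 \cdot x)$ by hand; both printed tensors should be identical.
# Sanity check (illustrative): forward() should equal w1 * relu(w2 * x)
with torch.no_grad():
    m_check = SimpleMLP(torch.FloatTensor([3., -4.]))
    x_check = torch.FloatTensor([[0.5], [-0.5]])
    print(m_check(x_check).view(-1))
    print((m_check.w1 * relu(m_check.w2 * x_check)).view(-1))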
from math import fabs
def make_grids(x, y, model_constructor, expected_risk_func, grid_size=100):
n_samples = len(x)
assert len(x) == len(y)
# Grid logic
x_max, y_max, x_min, y_min = 5, 5, -5, -5
w1 = np.linspace(x_min, x_max, grid_size, dtype=np.float32)
w2 = np.linspace(y_min, y_max, grid_size, dtype=np.float32)
W1, W2 = np.meshgrid(w1, w2)
W = np.concatenate((W1[:, :, None], W2[:, :, None]), axis=2)
W = torch.from_numpy(W)
# We will store the results in this tensor
risks = torch.FloatTensor(n_samples, grid_size, grid_size)
expected_risk = torch.FloatTensor(grid_size, grid_size)
with torch.no_grad():
for i in range(grid_size):
for j in range(grid_size):
model = model_constructor(W[i, j])
pred = model(x)
loss = mse_loss(pred, y, reduction="none")
risks[:, i, j] = loss.view(-1)
expected_risk[i, j] = expected_risk_func(W[i, j, 0], W[i, j, 1])
empirical_risk = torch.mean(risks, dim=0)
return W1, W2, risks.numpy(), empirical_risk.numpy(), expected_risk.numpy()
def expected_risk_simple_mlp(w1, w2):
    """Question: Can you derive this yourself?"""
return .5 * (8 / 3 - (4 / 3) * w1 * w2 + 1 / 3 * w1 ** 2 * w2 ** 2) + std ** 2
Explanation: As in the previous notebook, we define a function to sample from and plot loss landscapes.
End of explanation
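As a rough check of the closed form above (a sketch reusing sample_from_ground_truth and std from earlier cells), a Monte-Carlo estimate of the expected risk at a fixed $(w_1, w_2)$ should land close to expected_risk_simple_mlp.
# Monte-Carlo spot check of the closed-form expected risk (illustrative only)
w1_chk, w2_chk = 3.0, -4.0
x_mc, y_mc = sample_from_ground_truth(n_samples=200000, std=std)
with torch.no_grad():
    mc_estimate = mse_loss(w1_chk * relu(w2_chk * x_mc), y_mc).item()
print(mc_estimate, expected_risk_simple_mlp(w1_chk, w2_chk))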
W1, W2, risks, empirical_risk, expected_risk = make_grids(
x, y, SimpleMLP, expected_risk_func=expected_risk_simple_mlp)
Explanation: risks[k, i, j] holds loss value $\ell(f(w_1^{(i)} , w_2^{(j)}, x_k), y_k)$ for a single data point $(x_k, y_k)$;
empirical_risk[i, j] corresponds to the empirical risk averaged over the training data points:
$$ \frac{1}{n} \sum_{k=1}^{n} \ell(f(w_1^{(i)}, w_2^{(j)}, x_k), y_k)$$
End of explanation
from torch.optim import SGD
def train(model, x, y, lr=.1, n_epochs=1):
optimizer = SGD(model.parameters(), lr=lr)
iterate_rec = []
grad_rec = []
for epoch in range(n_epochs):
# Iterate over the dataset one sample at a time:
# batch_size=1
for this_x, this_y in zip(x, y):
this_x = this_x[None, :]
this_y = this_y[None, :]
optimizer.zero_grad()
pred = model(this_x)
loss = mse_loss(pred, this_y)
loss.backward()
with torch.no_grad():
iterate_rec.append(
[model.w1.clone()[0], model.w2.clone()[0]]
)
grad_rec.append(
[model.w1.grad.clone()[0], model.w2.grad.clone()[0]]
)
optimizer.step()
return np.array(iterate_rec), np.array(grad_rec)
init = torch.FloatTensor([3, -4])
model = SimpleMLP(init)
iterate_rec, grad_rec = train(model, x, y, lr=.01)
print(iterate_rec[-1])
Explanation: Let's define our train loop and train our model:
End of explanation
import matplotlib.colors as colors
class LevelsNormalize(colors.Normalize):
def __init__(self, levels, clip=False):
self.levels = levels
vmin, vmax = levels[0], levels[-1]
colors.Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
quantiles = np.linspace(0, 1, len(self.levels))
return np.ma.masked_array(np.interp(value, self.levels, quantiles))
def plot_map(W1, W2, risks, emp_risk, exp_risk, sample, iter_):
all_risks = np.concatenate((emp_risk.ravel(), exp_risk.ravel()))
x_center, y_center = emp_risk.shape[0] // 2, emp_risk.shape[1] // 2
risk_at_center = exp_risk[x_center, y_center]
low_levels = np.percentile(all_risks[all_risks <= risk_at_center],
q=np.linspace(0, 100, 11))
high_levels = np.percentile(all_risks[all_risks > risk_at_center],
q=np.linspace(10, 100, 10))
levels = np.concatenate((low_levels, high_levels))
norm = LevelsNormalize(levels=levels)
cmap = plt.get_cmap('RdBu_r')
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(12, 4))
risk_levels = levels.copy()
risk_levels[0] = min(risks[sample].min(), risk_levels[0])
risk_levels[-1] = max(risks[sample].max(), risk_levels[-1])
ax1.contourf(W1, W2, risks[sample], levels=risk_levels,
norm=norm, cmap=cmap)
ax1.scatter(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
color='orange')
if any(grad_rec[iter_] != 0):
ax1.arrow(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
-0.1 * grad_rec[iter_, 0], -0.1 * grad_rec[iter_, 1],
head_width=0.3, head_length=0.5, fc='orange', ec='orange')
ax1.set_title('Pointwise risk')
ax2.contourf(W1, W2, emp_risk, levels=levels, norm=norm, cmap=cmap)
ax2.plot(iterate_rec[:iter_ + 1, 0], iterate_rec[:iter_ + 1, 1],
linestyle='-', marker='o', markersize=6,
color='orange', linewidth=2, label='SGD trajectory')
ax2.legend()
ax2.set_title('Empirical risk')
cf = ax3.contourf(W1, W2, exp_risk, levels=levels, norm=norm, cmap=cmap)
ax3.scatter(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
color='orange', label='Current sample')
ax3.set_title('Expected risk (ground truth)')
plt.colorbar(cf, ax=ax3)
ax3.legend()
fig.suptitle('Iter %i, sample % i' % (iter_, sample))
plt.show()
for sample in range(0, 100, 10):
plot_map(W1, W2, risks, empirical_risk, expected_risk, sample, sample)
Explanation: We now plot:
- the point-wise risk at iteration $k$ on the left plot
- the total empirical risk on the center plot
- the expected risk on the right plot
Observe how empirical and expected risk differ, and how empirical risk minimization is not totally equivalent to expected risk minimization.
End of explanation
# %load solutions/linear_mlp.py
Explanation: Observe and comment.
Exercises:
Change the model to a completely linear one and reproduce the plots. What change do you observe regarding the plot of the stochastic loss landscape?
Try to initialize the model with pathological weights, e.g., symmetric ones. What do you observe?
You may increase the number of epochs to observe slow convergence phenomena
Try augmenting the noise in the dataset. What do you observe?
End of explanation
# from matplotlib.animation import FuncAnimation
# from IPython.display import HTML
# fig, ax = plt.subplots(figsize=(8, 8))
# all_risks = np.concatenate((empirical_risk.ravel(),
# expected_risk.ravel()))
# x_center, y_center = empirical_risk.shape[0] // 2, empirical_risk.shape[1] // 2
# risk_at_center = expected_risk[x_center, y_center]
# low_levels = np.percentile(all_risks[all_risks <= risk_at_center],
# q=np.linspace(0, 100, 11))
# high_levels = np.percentile(all_risks[all_risks > risk_at_center],
# q=np.linspace(10, 100, 10))
# levels = np.concatenate((low_levels, high_levels))
# norm = LevelsNormalize(levels=levels)
# cmap = plt.get_cmap('RdBu_r')
# ax.set_title('Pointwise risk')
# def animate(i):
# for c in ax.collections:
# c.remove()
# for l in ax.lines:
# l.remove()
# for p in ax.patches:
# p.remove()
# risk_levels = levels.copy()
# risk_levels[0] = min(risks[i].min(), risk_levels[0])
# risk_levels[-1] = max(risks[i].max(), risk_levels[-1])
# ax.contourf(W1, W2, risks[i], levels=risk_levels,
# norm=norm, cmap=cmap)
# ax.plot(iterate_rec[:i + 1, 0], iterate_rec[:i + 1, 1],
# linestyle='-', marker='o', markersize=6,
# color='orange', linewidth=2, label='SGD trajectory')
# return []
# anim = FuncAnimation(fig, animate,# init_func=init,
# frames=100, interval=300, blit=True)
# anim.save("stochastic_landscape_minimal_mlp.mp4")
# plt.close(fig)
# HTML(anim.to_html5_video())
# fig, ax = plt.subplots(figsize=(8, 7))
# cf = ax.contourf(W1, W2, empirical_risk, levels=levels, norm=norm, cmap=cmap)
# ax.plot(iterate_rec[:100 + 1, 0], iterate_rec[:100 + 1, 1],
# linestyle='-', marker='o', markersize=6,
# color='orange', linewidth=2, label='SGD trajectory')
# ax.legend()
# plt.colorbar(cf, ax=ax)
# ax.set_title('Empirical risk')
# fig.savefig('empirical_loss_landscape_minimal_mlp.png')
Explanation: Utilities to generate the slides figures
End of explanation |
2,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python II
Review
Step1: 2 Much more powerful functions
Step2: 3 But how are functions, modules and libraries built?
Step3: 4 Let's build our own function
Let's build whole sentences out of lists of strings
Step4: And to call it, I wrap my list in parentheses ()
Step5: Let's build a simple search
Step6: 5 Structure and troubleshooting
First the imports
Then our own functions
Now the actual code | Python Code:
lst = [11,2,34, 4,5,5111]
len(lst)
len([11,2,'sort',4,5,5111])
sorted(lst)
lst
lst.sort()
lst
min(lst)
max(lst)
str(1212)
sum([1,2,2])
lst
lst.remove(4)
lst.append(4)
string = 'hello, wie geht, es Dir?'
string.split(',')
Explanation: Python II
Review: the most important functions
Much more powerful functions: modules and libraries
Let's take a closer look at these simple functions
Let's build our own functions
Structure and troubleshooting
1 The most important functions
An overview of the 64 most important simple Python functions is listed here.
End of explanation
import urllib
import requests
import glob
import pandas
from bs4 import BeautifulSoup
import re
#etc. etc.
def sort(string):
elem = input('Bitte geben Sie den Suchbegriff ein: ')
if elem in string:
return 'Treffer'
else:
return 'Kein Treffer'
string_test = "«Guten Tag, ich bin der, der Sie vor einer Stunde geweckt hat», sagte der Moderator des Podiums in Stockholm, als er am Montagmittag den US-Wissenschaftler Richard H. Thaler anrief. Für seine Erforschung der Psychologie hinter wirtschaftlichen Entscheidungen bekommt dieser den Nobelpreis für Wirtschaft. Das gab die Königlich-Schwedische Wissenschaftsakademie bekannt. Der 72-Jährige lehrt an der Universität Chicago. Der Verhaltensökonom habe gezeigt, dass begrenzte Rationalität, soziale."
string_test
def suche(elem, string):
#elem = input('Bitte geben Sie den Suchbegriff ein: ')
if elem in string:
return 'Treffer'
else:
return 'Kein Treffer'
suche(strings[1], string_test)
strings = ['Stockholm', 'blödes Wort', 'Rationalität', 'soziale']
for st in strings:
ergebnis = suche(st, string_test)
print(st, ergebnis)
suche(string_test)
suche(string_test)
lst = [1,3,5]
len(lst)
Explanation: 2 Much more powerful functions: modules and libraries
Modules & Libraries
End of explanation
import os
# Unfortunately this does not work with all built-in functions
os.path.split??
# Example: sort
def sort(list):
for index in range(1,len(list)):
value = list[index]
i = index-1
while i>=0:
if value < list[i]:
list[i+1] = list[i]
list[i] = value
i -= 1
else:
break
return list
# A really complex one. If you could not work with the urllib module (or rather urlretrieve),
# you would have to type in all of this yourself.
def urlretrieve(url, filename=None, reporthook=None, data=None):
url_type, path = splittype(url)
with contextlib.closing(urlopen(url, data)) as fp:
headers = fp.info()
# Just return the local path and the "headers" for file://
# URLs. No sense in performing a copy unless requested.
if url_type == "file" and not filename:
return os.path.normpath(path), headers
# Handle temporary file setup.
if filename:
tfp = open(filename, 'wb')
else:
tfp = tempfile.NamedTemporaryFile(delete=False)
filename = tfp.name
_url_tempfiles.append(filename)
with tfp:
result = filename, headers
bs = 1024*8
size = -1
read = 0
blocknum = 0
if "content-length" in headers:
size = int(headers["Content-Length"])
if reporthook:
reporthook(blocknum, bs, size)
while True:
block = fp.read(bs)
if not block:
break
read += len(block)
tfp.write(block)
blocknum += 1
if reporthook:
reporthook(blocknum, bs, size)
if size >= 0 and read < size:
raise ContentTooShortError(
"retrieval incomplete: got only %i out of %i bytes"
% (read, size), result)
return result
import urllib.request
with urllib.request.urlopen('http://tagesanzeiger.ch/') as response:
html = response.read()
html
Explanation: 3 But how are functions, modules and libraries built?
End of explanation
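Another way to look inside a function, besides the ?? magic: the inspect module from the standard library can print the source of any function written in Python (it does not work for C-level built-ins). A small sketch using the sort() example defined above:
import inspect
# Print the source of the sort() example defined above
print(inspect.getsource(sort))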
lst = ['ich', 'habe', None, 'ganz', 'kalt']
def join(mylist):
long_str = ''
for elem in mylist:
try:
long_str = long_str + elem + " "
except:
None
return long_str.strip()
join(lst)
Explanation: 4 Let's build our own function
Let's build whole sentences out of lists of strings
End of explanation
join(lst)
string = ' ich habe ganz kalt '
string.strip()
Explanation: And to call it, I wrap my list in parentheses ()
End of explanation
satz = "Die Unabhängigkeit der Notenbanken von der Politik gilt bisher als anerkannter Grundpfeiler der modernen Wirtschafts- und Geldpolitik in fortgeschrittenen Volkswirtschaften. Zu gross wäre sonst das Risiko, dass gewählte Politiker die Notenpresse anwerfen, wenn es ihren persönlichen Zielen gerade gelegen kommt, und dass dadurch die Stabilität des Geldes und das Vertrauen in das Zahlungsmittel untergraben wird."
sort(satz)
def find(string):
elem = input('Bitte geben Sie den Suchbegriff ein: ')
if elem in string:
return 'Treffer'
else:
return 'Kein Treffer'
find(satz)
Explanation: Let's build a simple search
End of explanation
print('Immer im Code verwenden, um zu wissen wo der Fehler nun ganz genau passiert.')
# Example: sort
def sort(list):
for index in range(1,len(list)):
value = list[index]
print(value)
i = index-1
print(i)
while i>=0:
if value < list[i]:
list[i+1] = list[i]
list[i] = value
i -= 1
else:
break
return list
sort(lst)
lst
Explanation: 5 Structure and troubleshooting
First the imports
Then our own functions
Now the actual code
End of explanation |
2,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 27 - Boolean Networks
Step1: Define a function hamming.dist that gives the hamming distance between two states of the Boolean network (as numpy arrays of ones and zeroes)
Step2: Define a function evolve that takes the network from one Boolean vector state to another Boolean vector state
Step3: Write a function that runs 10,000 simulations of the network. In each simulation, the procedure is | Python Code:
import numpy
nodes = ['Cell Size',
'Cln3',
'MBF',
'Clb5,6',
'Mcm1/SFF',
'Swi5',
'Sic1',
'Clb1,2',
'Cdc20&Cdc14',
'Cdh1',
'Cln1,2',
'SBF']
N = len(nodes)
# define the transition matrix
a = numpy.zeros([N, N])
a[0,1] = 1
a[1,1] = -1
a[1,2] = 1
a[1,11] = 1
a[2,3] = 1
a[3,4] = 1
a[3,6] = -1
a[3,7] = 1
a[3,9] = -1
a[4,4] = -1
a[4,5] = 1
a[4,7] = 1
a[4,8] = 1
a[5,5] = -1
a[5,6] = 1
a[6,3] = -1
a[6,7] = -1
a[7,2] = -1
a[7,4] = 1
a[7,5] = -1
a[7,6] = -1
a[7,8] = 1
a[7,9] = -1
a[7,11] = -1
a[8,3] = -1
a[8,5] = 1
a[8,6] = 1
a[8,7] = -1
a[8,8] = -1
a[8,9] = 1
a[9,7] = -1
a[10,6] = -1
a[10,9] = -1
a[10,10] = -1
a[11,10] = 1
a = numpy.matrix(a)
# define the matrix of states for the fixed points
num_fp = 7
fixed_points = numpy.zeros([num_fp, N])
fixed_points[0, 6] = 1
fixed_points[0, 9] = 1
fixed_points[1, 10] = 1
fixed_points[1, 11] = 1
fixed_points[2, 2] = 1
fixed_points[2, 6] = 1
fixed_points[2, 9] = 1
fixed_points[3, 6] = 1
fixed_points[4, 2] = 1
fixed_points[4, 6] = 1
fixed_points[6, 9] = 1
fixed_points = numpy.matrix(fixed_points)
basin_counts = numpy.zeros(num_fp)
Explanation: Class 27 - Boolean Networks
End of explanation
def hamming_dist(x1, x2):
    # Number of positions at which the two 0/1 state vectors differ
    return numpy.sum(numpy.abs(x1 - x2))
Explanation: Define a function hamming.dist that gives the hamming distance between two states of the Boolean network (as numpy arrays of ones and zeroes)
End of explanation
def evolve(state):
result = numpy.array(a.transpose().dot(state))
result = numpy.reshape(result, N)
result[result > 0] = 1
result[result == 0] = state[result == 0]
result[result < 0] = 0
return result
Explanation: Define a function evolve that takes the network from one Boolean vector state to another Boolean vector state
End of explanation
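As a quick sanity check (a sketch using the definitions above), applying evolve to each declared fixed point should leave it unchanged, i.e. give a Hamming distance of 0 to itself.
# Apply evolve once to each declared fixed point and report the Hamming
# distance to the original state (expected to be 0 for a true fixed point)
for j in range(num_fp):
    fp_state = numpy.array(fixed_points[j,])
    fp_state = numpy.reshape(fp_state, N)
    print(j, hamming_dist(evolve(fp_state), fp_state))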
import itertools
import random
basin_ids = []
for _ in itertools.repeat(None, 10000):
state = [0]
for pos in range(0, (N-1)):
state.append(random.randint(0,1))
state = numpy.array(state)
state_new = numpy.array([-1]*N)
while(True):
state_new = evolve(state)
if hamming_dist(state, state_new) == 0:
break
state = state_new
for j in range(0, num_fp):
fp_state = numpy.array(fixed_points[j,])
fp_state = numpy.reshape(fp_state, N)
if hamming_dist(state, fp_state) == 0:
basin_ids.append(j)
numpy.bincount(basin_ids)
Explanation: Write a function that runs 10,000 simulations of the network. In each simulation, the procedure is:
- create a random binary vector of length 12, and call that vector state (make sure the zeroth element is set to zero)
- iteratively call "evolve", passing the state to evolve and then updating state with the return value from evolve
- check if state changes in the last call to evolve; if it does not, then you have reached a fixed point; stop iterating
- compare the state to the rows of fixed_points; for the unique row i for which you find a match, increment the element in position i of basin_counts
- print out basin_counts
End of explanation |
2,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dark Energy Spectroscopic Instrument
Some calculations to assist with building the DESI model from an existing ZEMAX model and other sources.
You can safely ignore this if you just want to use the model.
Step1: Load the DESI model
Step2: Corrector Internal Baffles
Set up YAML to preserve dictionary order and truncate distances (in meters) to 5 digits
Step3: Define the corrector internal baffle apertures, from DESI-4103-v1. These have been checked against DESI-4037-v6, with the extra baffle between ADC1 and ADC2 added
Step4: Calculate batoid Baffle surfaces for the corrector. These are mechanically planar, but that would put their (planar) center inside a lens, breaking the sequential tracing model. We fix this by using spherical baffle surfaces that have the same apertures. This code was originally used to read a batoid model without baffles, but also works if the baffles are already added.
Step5: Validate that the baffle edges in the final model have the correct apertures
Step6: Corrector Cage and Spider
Calculate simplified vane coordinates using parameters from DESI-4110-v1
Step7: Plot "User Aperture Data" from the ZEMAX "spider" surface 6, as cross check | Python Code:
import batoid
import numpy as np
import yaml
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
Explanation: Dark Energy Spectroscopic Instrument
Some calculations to assist with building the DESI model from an existing ZEMAX model and other sources.
You can safely ignore this if you just want to use the model.
End of explanation
fiducial_telescope = batoid.Optic.fromYaml("DESI.yaml")
Explanation: Load the DESI model:
End of explanation
import collections
def dict_representer(dumper, data):
return dumper.represent_dict(data.items())
yaml.Dumper.add_representer(collections.OrderedDict, dict_representer)
def float_representer(dumper, value):
return dumper.represent_scalar(u'tag:yaml.org,2002:float', f'{value:.5f}')
yaml.Dumper.add_representer(float, float_representer)
Explanation: Corrector Internal Baffles
Set up YAML to preserve dictionary order and truncate distances (in meters) to 5 digits:
End of explanation
# baffle z-coordinates relative to FP in mm from DESI-4103-v1, checked
# against DESI-4037-v6 (and with extra ADC baffle added).
ZBAFFLE = np.array([
2302.91, 2230.29, 1916.86, 1823.57, 1617.37, 1586.76, 1457.88, 1349.45, 1314.68,
1232.06, 899.67, 862.08, 568.81, 483.84, 415.22])
# baffle radii in mm from DESI-4103-v1, checked
# against DESI-4037-v6 (and with extra ADC baffle added).
RBAFFLE = np.array([
558.80, 544.00, 447.75, 417.00, 376.00, 376.00, 378.00, 378.00, 395.00,
403.00, 448.80, 453.70, 492.00, 501.00, 496.00])
Explanation: Define the corrector internal baffle apertures, from DESI-4103-v1. These have been checked against DESI-4037-v6, with the extra baffle between ADC1 and ADC2 added:
End of explanation
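For orientation, the table above measures z from the focal plane in mm, while the model works in meters measured from the C1 front face; a small sketch of the conversion that baffles() and validate_baffles() below both rely on (2425.007 mm is the C1-front-face-to-focal-plane distance used there):
# Convert the tabulated baffle coordinates (mm in front of the focal plane)
# into the corrector frame used by the model (meters behind the C1 front face)
zbaffle_m = 1e-3 * (2425.007 - ZBAFFLE)
rbaffle_m = 1e-3 * RBAFFLE
print(np.round(zbaffle_m, 5))
print(np.round(rbaffle_m, 5))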
def baffles(nindent=10):
indent = ' ' * nindent
# Measure z from C1 front face in m.
zbaffle = 1e-3 * (2425.007 - ZBAFFLE)
# Convert r from mm to m.
rbaffle = 1e-3 * RBAFFLE
# By default, all baffles are planar.
nbaffles = len(zbaffle)
baffles = []
for i in range(nbaffles):
baffle = collections.OrderedDict()
baffle['type'] = 'Baffle'
baffle['name'] = f'B{i+1}'
baffle['coordSys'] = {'z': float(zbaffle[i])}
baffle['surface'] = {'type': 'Plane'}
baffle['obscuration'] = {'type': 'ClearCircle', 'radius': float(rbaffle[i])}
baffles.append(baffle)
# Loop over corrector lenses.
corrector = fiducial_telescope['DESI.Hexapod.Corrector']
lenses = 'C1', 'C2', 'ADC1rotator.ADC1', 'ADC2rotator.ADC2', 'C3', 'C4'
for lens in lenses:
obj = corrector['Corrector.' + lens]
assert isinstance(obj, batoid.optic.Lens)
front, back = obj.items[0], obj.items[1]
fTransform = batoid.CoordTransform(front.coordSys, corrector.coordSys)
bTransform = batoid.CoordTransform(back.coordSys, corrector.coordSys)
_, _, zfront = fTransform.applyForwardArray(0, 0, 0)
_, _, zback = bTransform.applyForwardArray(0, 0, 0)
# Find any baffles "inside" this lens.
inside = (zbaffle >= zfront) & (zbaffle <= zback)
if not any(inside):
continue
inside = np.where(inside)[0]
for k in inside:
baffle = baffles[k]
r = rbaffle[k]
# Calculate sag at (x,y)=(0,r) to avoid effect of ADC rotation about y.
sagf, sagb = front.surface.sag(0, r), back.surface.sag(0, r)
_, _, zf = fTransform.applyForwardArray(0, r, sagf)
_, _, zb = bTransform.applyForwardArray(0, r, sagb)
if zf > zbaffle[k]:
print(f'{indent}# Move B{k+1} in front of {obj.name} and make spherical to keep model sequential.')
assert isinstance(front.surface, batoid.Sphere)
baffle['surface'] = {'type': 'Sphere', 'R': front.surface.R}
baffle['coordSys']['z'] = float(zfront - (zf - zbaffle[k]))
elif zbaffle[k] > zb:
print(f'{indent}# Move B{k+1} behind {obj.name} and make spherical to keep model sequential.')
assert isinstance(back.surface, batoid.Sphere)
baffle['surface'] = {'type': 'Sphere', 'R': back.surface.R}
baffle['coordSys']['z'] = float(zback + (zbaffle[k] - zb))
else:
print(f'Cannot find a solution for B{k+1} inside {obj.name}!')
lines = yaml.dump(baffles)
for line in lines.split('\n'):
print(indent + line)
baffles()
Explanation: Calculate batoid Baffle surfaces for the corrector. These are mechanically planar, but that would put their (planar) center inside a lens, breaking the sequential tracing model. We fix this by using spherical baffle surfaces that have the same apertures. This code was originally used to read a batoid model without baffles, but also works if the baffles are already added.
End of explanation
def validate_baffles():
corrector = fiducial_telescope['DESI.Hexapod.Corrector']
for i in range(len(ZBAFFLE)):
baffle = corrector[f'Corrector.B{i+1}']
# Calculate surface z at origin in corrector coordinate system.
_, _, z = batoid.CoordTransform(baffle.coordSys, corrector.coordSys).applyForwardArray(0, 0, 0)
# Calculate surface z at (r,0) in corrector coordinate system.
sag = baffle.surface.sag(1e-3 * RBAFFLE[i], 0)
z += sag
# Measure from FP in mm.
z = np.round(2425.007 - 1e3 * z, 2)
assert z == ZBAFFLE[i], baffle.name
validate_baffles()
Explanation: Validate that the baffle edges in the final model have the correct apertures:
End of explanation
def spider(dmin=1762, dmax=4940.3, ns_angle=77, widths=[28.5, 28.5, 60., 19.1],
wart_r=958, wart_dth=6, wart_w=300):
# Vane order is [NE, SE, SW, NW], with N along -y and E along +x.
fig, ax = plt.subplots(figsize=(10, 10))
ax.add_artist(plt.Circle((0, 0), 0.5 * dmax, color='yellow'))
ax.add_artist(plt.Circle((0, 0), 0.5 * dmin, color='gray'))
ax.set_xlim(-0.5 * dmax, 0.5 * dmax)
ax.set_ylim(-0.5 * dmax, 0.5 * dmax)
# Place outer vertices equally along the outer ring at NE, SE, SW, NW.
xymax = 0.5 * dmax * np.array([[1, -1], [1, 1], [-1, 1], [-1, -1]]) / np.sqrt(2)
# Calculate inner vertices so that the planes of the NE and NW vanes intersect
# with an angle of ns_angle (same for the SE and SW planes).
angle = np.deg2rad(ns_angle)
x = xymax[1, 0]
dx = xymax[1, 1] * np.tan(0.5 * angle)
xymin = np.array([[x - dx, 0], [x - dx, 0], [-x+dx, 0], [-x+dx, 0]])
for i in range(4):
plt.plot([xymin[i,0], xymax[i,0]], [xymin[i,1], xymax[i,1]], '-', lw=0.1 * widths[i])
# Calculate batoid rectangle params for the vanes.
xy0 = 0.5 * (xymin + xymax)
heights = np.sqrt(np.sum((xymax - xymin) ** 2, axis=1))
# Calculate wart rectangle coords.
wart_h = 2 * (wart_r - 0.5 * dmin)
wart_dth = np.deg2rad(wart_dth)
wart_xy = 0.5 * dmin * np.array([-np.sin(wart_dth), np.cos(wart_dth)])
plt.plot(*wart_xy, 'rx', ms=25)
# Print batoid config.
indent = ' ' * 10
print(f'{indent}-\n{indent} type: ClearAnnulus')
print(f'{indent} inner: {np.round(0.5e-3 * dmin, 5)}')
print(f'{indent} outer: {np.round(0.5e-3 * dmax, 5)}')
for i in range(4):
print(f'{indent}-\n{indent} type: ObscRectangle')
print(f'{indent} x: {np.round(1e-3 * xy0[i, 0], 5)}')
print(f'{indent} y: {np.round(1e-3 * xy0[i, 1], 5)}')
print(f'{indent} width: {np.round(1e-3 * widths[i], 5)}')
print(f'{indent} height: {np.round(1e-3 * heights[i], 5)}')
dx, dy = xymax[i] - xymin[i]
angle = np.arctan2(-dx, dy)
print(f'{indent} theta: {np.round(angle, 5)}')
print(f'-\n type: ObscRectangle')
print(f' x: {np.round(1e-3 * wart_xy[0], 5)}')
print(f' y: {np.round(1e-3 * wart_xy[1], 5)}')
print(f' width: {np.round(1e-3 * wart_w, 5)}')
print(f' height: {np.round(1e-3 * wart_h, 5)}')
print(f' theta: {np.round(wart_dth, 5)}')
spider()
Explanation: Corrector Cage and Spider
Calculate simplified vane coordinates using parameters from DESI-4110-v1:
End of explanation
def plot_obs():
wart1 = np.array([
[ -233.22959, 783.94254],
[-249.32698, 937.09892],
[49.02959, 968.45746],
[ 65.126976, 815.30108],
[ -233.22959, 783.94254],
])
wart2 = np.array([
[-233.22959, 783.94254],
[ -249.32698, 937.09892],
[49.029593, 968.45746],
[65.126976, 815.30108],
[-233.22959, 783.94254],
])
vane1 = np.array([
[363.96554,-8.8485008],
[341.66121, 8.8931664],
[1713.4345, 1733.4485],
[1735.7388, 1715.7068],
[363.96554,-8.8485008],
])
vane2 = np.array([
[-1748.0649, 1705.9022],
[ -1701.1084, 1743.2531],
[ -329.33513, 18.697772],
[ -376.29162, -18.653106],
[-1748.0649, 1705.9022],
])
vane3 = np.array([
[ -1717.1127, -1730.5227],
[ -1732.0605, -1718.6327],
[ -360.28728, 5.922682],
[-345.33947, -5.9673476],
[ -1717.1127, -1730.5227],
])
vane4 = np.array([
[ 341.66121, -8.8931664],
[363.96554, 8.8485008],
[1735.7388, -1715.7068],
[1713.4345, -1733.4485],
[ 341.66121, -8.8931664],
])
extra = np.array([
[ 2470 , 0 ],
[ 2422.5396 , -481.8731 ],
[ 2281.9824 , -945.22808 ],
[ 2053.7299 , -1372.2585 ],
[ 1746.5537 , -1746.5537 ],
[ 1372.2585 , -2053.7299 ],
[ 945.22808 , -2281.9824 ],
[ 481.8731 , -2422.5396 ],
[ 3.0248776e-13 , -2470 ],
[ -481.8731 , -2422.5396 ],
[ -945.22808 , -2281.9824 ],
[ -1372.2585 , -2053.7299 ],
[ -1746.5537 , -1746.5537 ],
[ -2053.7299 , -1372.2585 ],
[ -2281.9824 , -945.22808 ],
[ -2422.5396 , -481.8731 ],
[ -2470 , 2.9882133e-12 ],
[ -2422.5396 , 481.8731 ],
[ -2281.9824 , 945.22808 ],
[ -2053.7299 , 1372.2585 ],
[ -1746.5537 , 1746.5537 ],
[ -1372.2585 , 2053.7299 ],
[ -945.22808 , 2281.9824 ],
[ -481.8731 , 2422.5396 ],
[ 5.9764266e-12 , 2470 ],
[ 481.8731 , 2422.5396 ],
[ 945.22808 , 2281.9824 ],
[ 1372.2585 , 2053.7299 ],
[ 1746.5537 , 1746.5537 ],
[ 2053.7299 , 1372.2585 ],
[ 2281.9824 , 945.22808 ],
[ 2422.5396 , 481.8731 ],
[ 2470 , -1.0364028e-11 ],
[ 2724 , 0 ],
[ 2671.6591 , -531.42604 ],
[ 2516.6478 , -1042.4297 ],
[ 2264.9232 , -1513.3733 ],
[ 1926.1589 , -1926.1589 ],
[ 1513.3733 , -2264.9232 ],
[ 1042.4297 , -2516.6478 ],
[ 531.42604 , -2671.6591 ],
[ 3.3359379e-13 , -2724 ],
[ -531.42604 , -2671.6591 ],
[ -1042.4297 , -2516.6478 ],
[ -1513.3733 , -2264.9232 ],
[ -1926.1589 , -1926.1589 ],
[ -2264.9232 , -1513.3733 ],
[ -2516.6478 , -1042.4297 ],
[ -2671.6591 , -531.42604 ],
[ -2724 , 3.2955032e-12 ],
[ -2671.6591 , 531.42604 ],
[ -2516.6478 , 1042.4297 ],
[ -2264.9232 , 1513.3733 ],
[ -1926.1589 , 1926.1589 ],
[ -1513.3733 , 2264.9232 ],
[ -1042.4297 , 2516.6478 ],
[ -531.42604 , 2671.6591 ],
[ 6.5910065e-12 , 2724 ],
[ 531.42604 , 2671.6591 ],
[ 1042.4297 , 2516.6478 ],
[ 1513.3733 , 2264.9232 ],
[ 1926.1589 , 1926.1589 ],
[ 2264.9232 , 1513.3733 ],
[ 2516.6478 , 1042.4297 ],
[ 2671.6591 , 531.42604 ],
[ 2724 , -1.1429803e-11 ],
[ 2470 , 0 ],
])
plt.figure(figsize=(20, 20))
plt.plot(*wart1.T)
plt.plot(*wart2.T)
plt.plot(*vane1.T)
plt.plot(*vane2.T)
plt.plot(*vane3.T)
plt.plot(*vane4.T)
plt.plot(*extra.T)
w = 1762./2.
plt.gca().add_artist(plt.Circle((0, 0), w, color='gray'))
plt.gca().set_aspect(1.)
plot_obs()
Explanation: Plot "User Aperture Data" from the ZEMAX "spider" surface 6, as cross check:
End of explanation |
2,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
CNN on Custom Images
For this exercise we're using a collection of Cats and Dogs images inspired by the classic <a href='https
Step1: Define transforms
In the previous section we looked at a variety of transforms available for data augmentation (rotate, flip, etc.) and normalization.<br>
Here we'll combine the ones we want, including the <a href='https
Step2: Prepare train and test sets, loaders
We're going to take advantage of a built-in torchvision dataset tool called <a href='https
Step3: Display a batch of images
To verify that the training loader selects cat and dog images at random, let's show a batch of loaded images.<br>
Recall that imshow clips pixel values <0, so the resulting display lacks contrast. We'll apply a quick inverse transform to the input tensor so that images show their "true" colors.
Step4: Define the model
We'll start by using a model similar to the one we applied to the CIFAR-10 dataset, except that here we have a binary classification (2 output channels, not 10). Also, we'll add another set of convolution/pooling layers.
Step5: <div class="alert alert-info"><strong>Why <tt>(54x54x16)</tt>?</strong><br>
With 224 pixels per side, the kernels and pooling layers result in $\;(((224-2)/2)-2)/2 = 54.5\;$ which rounds down to 54 pixels per side.</div>
Instantiate the model, define loss and optimization functions
We're going to call our model "CNNmodel" to differentiate it from an "AlexNetmodel" we'll use later.
Step6: Looking at the trainable parameters
Step7: Train the model
In the interests of time, we'll limit the number of training batches to 800, and the number of testing batches to 300. We'll train the model on 8000 of 18743 available images, and test it on 3000 out of 6251 images.
Step8: Save the trained model
Step9: Evaluate model performance
Step10: Download a pretrained model
Torchvision has a number of proven models available through <a href='https
Step11: <div class="alert alert-info">This model uses <a href='https
Step12: Modify the classifier
Next we need to modify the fully connected layers to produce a binary output. The section is labeled "classifier" in the AlexNet model.<br>
Note that when we assign new layers, their parameters default to <tt>.requires_grad=True</tt>.
Step13: Define loss function & optimizer
We only want to optimize the classifier parameters, as the feature parameters are frozen.
Step14: Train the model
Remember, we're only training the fully connected layers. The convolutional layers have fixed weights and biases. For this reason, we only need to run one epoch.
Step15: Run a new image through the model
We can also pass a single image through the model to obtain a prediction.<br>
Pick a number from 0 to 6250, assign it to "x", and we'll use that value to select an image from the Cats and Dogs test set. | Python Code:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models # add models to the list
from torchvision.utils import make_grid
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# ignore harmless warnings
import warnings
warnings.filterwarnings("ignore")
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
CNN on Custom Images
For this exercise we're using a collection of Cats and Dogs images inspired by the classic <a href='https://www.kaggle.com/c/dogs-vs-cats'>Kaggle competition</a>.
In the last section we downloaded the files, looked at the directory structure, examined the images, and performed a variety of transforms in preparation for training.
In this section we'll define our model, then feed images through a training and validation sequence using DataLoader.
Image files directory tree
<pre>.
└── Data
└── CATS_DOGS
├── test
│ ├── CAT
│ │ ├── 9374.jpg
│ │ ├── 9375.jpg
│ │ └── ... (3,126 files)
│ └── DOG
│ ├── 9374.jpg
│ ├── 9375.jpg
│ └── ... (3,125 files)
│
└── train
├── CAT
│ ├── 0.jpg
│ ├── 1.jpg
│ └── ... (9,371 files)
└── DOG
├── 0.jpg
├── 1.jpg
└── ... (9,372 files)</pre>
Perform standard imports
End of explanation
train_transform = transforms.Compose([
transforms.RandomRotation(10), # rotate +/- 10 degrees
transforms.RandomHorizontalFlip(), # reverse 50% of images
transforms.Resize(224), # resize shortest side to 224 pixels
transforms.CenterCrop(224), # crop longest side to 224 pixels at center
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
test_transform = transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
Explanation: Define transforms
In the previous section we looked at a variety of transforms available for data augmentation (rotate, flip, etc.) and normalization.<br>
Here we'll combine the ones we want, including the <a href='https://discuss.pytorch.org/t/normalization-in-the-mnist-example/457/22'>recommended normalization parameters</a> for mean and std per channel.
End of explanation
root = '../Data/CATS_DOGS'
train_data = datasets.ImageFolder(os.path.join(root, 'train'), transform=train_transform)
test_data = datasets.ImageFolder(os.path.join(root, 'test'), transform=test_transform)
torch.manual_seed(42)
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = DataLoader(test_data, batch_size=10, shuffle=True)
class_names = train_data.classes
print(class_names)
print(f'Training images available: {len(train_data)}')
print(f'Testing images available: {len(test_data)}')
Explanation: Prepare train and test sets, loaders
We're going to take advantage of a built-in torchvision dataset tool called <a href='https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder'><tt><strong>ImageFolder</strong></tt></a>.
End of explanation
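ImageFolder derives class labels from the sorted folder names, so here CAT should map to index 0 and DOG to index 1; the mapping can be inspected directly.
# Folder-name-to-index mapping assigned by ImageFolder
print(train_data.class_to_idx)
print(test_data.class_to_idx)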
# Grab the first batch of 10 images
for images,labels in train_loader:
break
# Print the labels
print('Label:', labels.numpy())
print('Class:', *np.array([class_names[i] for i in labels]))
im = make_grid(images, nrow=5) # the default nrow is 8
# Inverse normalize the images
inv_normalize = transforms.Normalize(
mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
std=[1/0.229, 1/0.224, 1/0.225]
)
im_inv = inv_normalize(im)
# Print the images
plt.figure(figsize=(12,4))
plt.imshow(np.transpose(im_inv.numpy(), (1, 2, 0)));
Explanation: Display a batch of images
To verify that the training loader selects cat and dog images at random, let's show a batch of loaded images.<br>
Recall that imshow clips pixel values <0, so the resulting display lacks contrast. We'll apply a quick inverse transform to the input tensor so that images show their "true" colors.
End of explanation
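The inverse transform above follows from inverting the channel-wise normalization x_norm = (x - mean)/std: applying Normalize with mean' = -mean/std and std' = 1/std gives back x. A one-channel sketch of the algebra:
# One-channel check: Normalize(mean=-m/s, std=1/s) undoes (x - m)/s
m, s = 0.485, 0.229
x = 0.7
x_norm = (x - m) / s
x_recovered = (x_norm - (-m / s)) / (1 / s)   # what the inverse Normalize computes
print(round(x_recovered, 6))                  # 0.7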
class ConvolutionalNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 3, 1)
self.conv2 = nn.Conv2d(6, 16, 3, 1)
self.fc1 = nn.Linear(54*54*16, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 2)
def forward(self, X):
X = F.relu(self.conv1(X))
X = F.max_pool2d(X, 2, 2)
X = F.relu(self.conv2(X))
X = F.max_pool2d(X, 2, 2)
X = X.view(-1, 54*54*16)
X = F.relu(self.fc1(X))
X = F.relu(self.fc2(X))
X = self.fc3(X)
return F.log_softmax(X, dim=1)
Explanation: Define the model
We'll start by using a model similar to the one we applied to the CIFAR-10 dataset, except that here we have a binary classification (2 output channels, not 10). Also, we'll add another set of convolution/pooling layers.
End of explanation
torch.manual_seed(101)
CNNmodel = ConvolutionalNetwork()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(CNNmodel.parameters(), lr=0.001)
CNNmodel
Explanation: <div class="alert alert-info"><strong>Why <tt>(54x54x16)</tt>?</strong><br>
With 224 pixels per side, the kernels and pooling layers result in $\;(((224-2)/2)-2)/2 = 54.5\;$ which rounds down to 54 pixels per side.</div>
Instantiate the model, define loss and optimization functions
We're going to call our model "CNNmodel" to differentiate it from an "AlexNetmodel" we'll use later.
End of explanation
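The 54 in fc1 comes from the same bookkeeping: each 3x3 valid convolution removes 2 pixels per side and each 2x2 max-pool halves the size (flooring). A short sketch of the arithmetic:
# 224 -> 222 -> 111 -> 109 -> 54 after two conv(3x3, stride 1) + maxpool(2x2) stages
size = 224
for _ in range(2):
    size = size - 2    # 3x3 convolution, stride 1, no padding
    size = size // 2   # 2x2 max pooling
print(size, size * size * 16)   # 54 and 46656, the fc1 input size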
def count_parameters(model):
params = [p.numel() for p in model.parameters() if p.requires_grad]
for item in params:
print(f'{item:>8}')
print(f'________\n{sum(params):>8}')
count_parameters(CNNmodel)
Explanation: Looking at the trainable parameters
End of explanation
import time
start_time = time.time()
epochs = 3
max_trn_batch = 800
max_tst_batch = 300
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
# Limit the number of batches
if b == max_trn_batch:
break
b+=1
# Apply the model
y_pred = CNNmodel(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%200 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/8000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
train_losses.append(loss)
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
# Limit the number of batches
if b == max_tst_batch:
break
# Apply the model
y_val = CNNmodel(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
test_losses.append(loss)
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
Explanation: Train the model
In the interests of time, we'll limit the number of training batches to 800, and the number of testing batches to 300. We'll train the model on 8000 of 18743 available images, and test it on 3000 out of 6251 images.
End of explanation
torch.save(CNNmodel.state_dict(), 'CustomImageCNNModel.pt')
Explanation: Save the trained model
End of explanation
plt.plot(train_losses, label='training loss')
plt.plot(test_losses, label='validation loss')
plt.title('Loss at the end of each epoch')
plt.legend();
plt.plot([t/80 for t in train_correct], label='training accuracy')
plt.plot([t/30 for t in test_correct], label='validation accuracy')
plt.title('Accuracy at the end of each epoch')
plt.legend();
print(test_correct)
print(f'Test accuracy: {test_correct[-1].item()*100/3000:.3f}%')
Explanation: Evaluate model performance
End of explanation
AlexNetmodel = models.alexnet(pretrained=True)
AlexNetmodel
Explanation: Download a pretrained model
Torchvision has a number of proven models available through <a href='https://pytorch.org/docs/stable/torchvision/models.html#classification'><tt><strong>torchvision.models</strong></tt></a>:
<ul>
<li><a href="https://arxiv.org/abs/1404.5997">AlexNet</a></li>
<li><a href="https://arxiv.org/abs/1409.1556">VGG</a></li>
<li><a href="https://arxiv.org/abs/1512.03385">ResNet</a></li>
<li><a href="https://arxiv.org/abs/1602.07360">SqueezeNet</a></li>
<li><a href="https://arxiv.org/abs/1608.06993">DenseNet</a></li>
<li><a href="https://arxiv.org/abs/1512.00567">Inception</a></li>
<li><a href="https://arxiv.org/abs/1409.4842">GoogLeNet</a></li>
<li><a href="https://arxiv.org/abs/1807.11164">ShuffleNet</a></li>
<li><a href="https://arxiv.org/abs/1801.04381">MobileNet</a></li>
<li><a href="https://arxiv.org/abs/1611.05431">ResNeXt</a></li>
</ul>
These have all been trained on the <a href='http://www.image-net.org/'>ImageNet</a> database of images. Our only task is to reduce the output of the fully connected layers from (typically) 1000 categories to just 2.
To access the models, you can construct a model with random weights by calling its constructor:<br>
<pre>resnet18 = models.resnet18()</pre>
You can also obtain a pre-trained model by passing pretrained=True:<br>
<pre>resnet18 = models.resnet18(pretrained=True)</pre>
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
Feel free to investigate the different models available. Each one will be downloaded to a cache directory the first time they're accessed - from then on they'll be available locally.
For its simplicity and effectiveness, we'll use AlexNet:
End of explanation
for param in AlexNetmodel.parameters():
param.requires_grad = False
Explanation: <div class="alert alert-info">This model uses <a href='https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d'><tt><strong>torch.nn.AdaptiveAvgPool2d(<em>output_size</em>)</strong></tt></a> to convert the large matrix coming out of the convolutional layers to a (6x6)x256 matrix being fed into the fully connected layers.</div>
Freeze feature parameters
We want to freeze the pre-trained weights & biases. We set <tt>.requires_grad</tt> to False so we don't backprop through them.
End of explanation
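At this point every AlexNet parameter is frozen, so a quick check (sketch) of the trainable-parameter count should print 0 until the new classifier is attached below.
# Trainable parameters right after freezing (before the classifier is replaced)
print(sum(p.numel() for p in AlexNetmodel.parameters() if p.requires_grad))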
torch.manual_seed(42)
AlexNetmodel.classifier = nn.Sequential(nn.Linear(9216, 1024),
nn.ReLU(),
nn.Dropout(0.4),
nn.Linear(1024, 2),
nn.LogSoftmax(dim=1))
AlexNetmodel
# These are the TRAINABLE parameters:
count_parameters(AlexNetmodel)
Explanation: Modify the classifier
Next we need to modify the fully connected layers to produce a binary output. The section is labeled "classifier" in the AlexNet model.<br>
Note that when we assign new layers, their parameters default to <tt>.requires_grad=True</tt>.
End of explanation
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(AlexNetmodel.classifier.parameters(), lr=0.001)
Explanation: Define loss function & optimizer
We only want to optimize the classifier parameters, as the feature parameters are frozen.
End of explanation
import time
start_time = time.time()
epochs = 1
max_trn_batch = 800
max_tst_batch = 300
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
if b == max_trn_batch:
break
b+=1
# Apply the model
y_pred = AlexNetmodel(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%200 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/8000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
train_losses.append(loss)
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
if b == max_tst_batch:
break
# Apply the model
y_val = AlexNetmodel(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
test_losses.append(loss)
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
print(test_correct)
print(f'Test accuracy: {test_correct[-1].item()*100/3000:.3f}%')
Explanation: Train the model
Remember, we're only training the fully connected layers. The convolutional layers have fixed weights and biases. For this reason, we only need to run one epoch.
End of explanation
x = 2019
im = inv_normalize(test_data[x][0])
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
test_data[x][0].shape
# CNN Model Prediction:
CNNmodel.eval()
with torch.no_grad():
new_pred = CNNmodel(test_data[x][0].view(1,3,224,224)).argmax()
print(f'Predicted value: {new_pred.item()} {class_names[new_pred.item()]}')
# AlexNet Model Prediction:
AlexNetmodel.eval()
with torch.no_grad():
new_pred = AlexNetmodel(test_data[x][0].view(1,3,224,224)).argmax()
print(f'Predicted value: {new_pred.item()} {class_names[new_pred.item()]}')
Explanation: Run a new image through the model
We can also pass a single image through the model to obtain a prediction.<br>
Pick a number from 0 to 6250, assign it to "x", and we'll use that value to select an image from the Cats and Dogs test set.
End of explanation |
2,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alternate PowerShell Hosts
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. Excluding PowerShell.exe is a good way to find alternate PowerShell hosts
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Looking for processes loading a specific PowerShell DLL is a very effective way to document the use of PowerShell in your environment
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Monitoring for PSHost* pipes is another interesting way to find other alternate PowerShell hosts in your environment.
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Alternate PowerShell Hosts
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/15 |
| modification date | 2020/09/20 |
| playbook related | ['WIN-190410151110'] |
Hypothesis
Adversaries might be leveraging alternate PowerShell Hosts to execute PowerShell evading traditional PowerShell detections that look for powershell.exe in my environment.
Technical Context
None
Offensive Tradecraft
Adversaries can abuse alternate signed PowerShell Hosts to evade application whitelisting solutions that block powershell.exe and naive logging based upon traditional PowerShell hosts.
Characteristics of a PowerShell host (Matt Graeber @mattifestation) >
* These binaries are almost always C#/.NET .exes/.dlls
* These binaries have System.Management.Automation.dll as a referenced assembly
* These may not always be "built in" binaries
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/execution/SDWIN-190518211456.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_psremoting_stager.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_psremoting_stager.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Channel
FROM sdTable
WHERE (Channel = "Microsoft-Windows-PowerShell/Operational" OR Channel = "Windows PowerShell")
AND (EventID = 400 OR EventID = 4103)
AND NOT Message LIKE "%Host Application%powershell%"
'''
)
df.show(10,False)
Explanation: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. Excluding PowerShell.exe is a good way to find alternate PowerShell hosts
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Powershell | Windows PowerShell | Application host started | 400 |
| Powershell | Microsoft-Windows-PowerShell/Operational | User started Application host | 4103 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, Description
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND (lower(Description) = "system.management.automation" OR lower(ImageLoaded) LIKE "%system.management.automation%")
AND NOT Image LIKE "%powershell.exe"
'''
)
df.show(10,False)
Explanation: Analytic II
Looking for processes loading a specific PowerShell DLL is a very effective way to document the use of PowerShell in your environment
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, PipeName
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 17
AND lower(PipeName) LIKE "\\\pshost%"
AND NOT Image LIKE "%powershell.exe"
'''
)
df.show(10,False)
Explanation: Analytic III
Monitoring for PSHost* pipes is another interesting way to find other alternate PowerShell hosts in your environment.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Named pipe | Microsoft-Windows-Sysmon/Operational | Process created Pipe | 17 |
End of explanation |
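As an optional follow-up (not part of the original analytics), any of these Spark result sets can be pulled into pandas for quick aggregation; the column name below comes from the query above:
# Convert the last Spark result to pandas and count pipe events per host process.
pipe_hits = df.toPandas()
print(pipe_hits.groupby('Image').size().sort_values(ascending=False))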
2,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 10
Lists
A sequence of elements of any type.
Step1: Lists are mutable while strings are immutable. We can never change a string, only reassign it to something else.
Step2: Some common list procedures
Step3: list1 and list2 are equivalent (same values) but not identical (same object). In order to make these two lists identical we can alias the object.
Step4: Now both names/variables point at the same object (reference the same object).
Step5: Let's try to change b by assigning to a (they reference the same object after all)
Step6: What happened is that we have reassigned a to a new object, that is they no longer point at the same object. | Python Code:
L = [1,2,3]
M = ['a', 'b', 'c']
N = [1, 'a', 2, [32, 64]]
Explanation: Chapter 10
Lists
A sequence of elements of any type.
End of explanation
S = 'abc'
#S[1] = 'z' # <== Doesn't work!
L = ['a', 'b', 'c']
L[1] = 'z'
print L
Explanation: Lists are mutable while strings are immutable. We can never change a string, only reassign it to something else.
End of explanation
a = 23
b = 23
a is b
list1 = [1,2,3]
list2 = [1,2,3]
list1 is list2
Explanation: Some common list procedures:
reduce
Convert a sequence (e.g. a list) into a single element. Examples: sum, mean
map
Apply some function to each element of a sequence. Examples: making every element in a list positive, capitalizing all elements of a list
filter
Selecting some elements of a sequence according to some condition. Examples: selecting only positive numbers from a list, selecting only elements of a list of strings that have length greater than 10.
Everything in Python is an object. Think of an object as the underlying data. Objects have individuality. For example,
End of explanation
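The reduce/map/filter procedures mentioned above are not demonstrated in the original cells; a small illustrative sketch (kept in the same Python 2 style as the rest of the chapter) could be:
nums = [3, -1, 4, -1, 5, -9, 2]

total = reduce(lambda acc, x: acc + x, nums)  # reduce: collapse the list into one value
absolutes = map(abs, nums)                    # map: apply abs to every element
positives = filter(lambda x: x > 0, nums)     # filter: keep only the positive numbers

print(total)
print(absolutes)
print(positives)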
list2 = list1
list1 is list2
Explanation: list1 and list2 are equivalent (same values) but not identical (same object). In order to make these two lists identical we can alias the object.
End of explanation
list1[0] = 1234
print list1
print list2
# Back to the strings,
b = 'abc'
a = b
a is b
Explanation: Now both names/variables point at the same object (reference the same object).
End of explanation
a = 'xyz'
print a
print b
Explanation: Let's try to change b by assigning to a (they reference the same object after all)
End of explanation
a is b
id(b)
Explanation: What happened is that we have reassigned a to a new object, that is they no longer point at the same object.
End of explanation |
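To make the point concrete, an extra check (not in the original notebook) is to compare the two identities directly:
print(id(a))
print(id(b))
print(a is b)  # False: the names now reference different objects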
2,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encontro 02, Parte 1
Step1: Configurando a biblioteca
A socnet disponibiliza variáveis de módulo que permitem configurar propriedades visuais. Os nomes são auto-explicativos e os valores abaixo são padrão.
Step2: Uma variável de cor armazena uma tupla contendo três inteiros entre 0 e 255 que representam intensidades de vermelho, verde e azul respectivamente.
Uma variável de posição armazena uma string contendo duas palavras separadas por um espaço
Step3: Abra esses arquivos em um editor de texto e note como o formato é auto-explicativo.
Visualizando grafos
Vamos visualizar o primeiro grafo, que é não-dirigido
Step4: Essa é a representação mais comum de grafos não-dirigidos
Step5: Essa é a representação mais comum de grafos dirigidos
Step6: Cada aresta também é asssociada a um dicionário que armazena seus atributos. Vamos modificar e imprimir o atributo color da aresta ${1, 2}$ do grafo ug. Esse atributo existe por padrão.
Step7: Note que a ordem dos nós não importa, pois ug é um grafo não-dirigido.
Step8: Os atributos color são exibidos na visualização.
Step9: Podemos usar funções de conveniência para reinicializar as cores.
Step10: Os atributos label também podem ser exibidos na visualização, mas não existem por padrão. Primeiramente, precisamos criá-los.
Step11: Depois, precisamos usar os argumentos nlab e elab para indicar que queremos exibi-los. Esses argumentos são False por padrão.
Step12: Vizinhos, predecessores e sucessores
Considere um grafo $(N, E)$ e um nó $n$. Suponha que esse grafo é não-dirigido.
Nesse caso, dizemos que $n$ é vizinho (neighbor) de $m$ se ${n, m} \in E$. Denotamos por $\mathcal{N}(n)$ o conjunto dos vizinhos de $n$.
Step13: Suponha agora que o grafo $(N, E)$ é dirigido.
Nesse caso, dizemos que $n$ é predecessor de $m$ se $(n, m) \in E$ e dizemos que $n$ é sucessor de $m$ se $(m, n) \in E$. Denotamos por $\mathcal{P}(n)$ o conjunto dos predecessores de $n$ e denotamos por $\mathcal{S}(n)$ o conjunto dos sucessores de $n$.
Step14: Passeios, trilhas e caminhos
Se $(N, E)$ é um grafo não-dirigido
Step15: Exercício 6
Use cores para dar um exemplo de caminho no grafo dg.
Step16: Posicionamento dos nós
Para encerrar, vamos carregar o grafo do encontro anterior. O próprio arquivo atribui label aos nós, portanto não é necessário criá-los.
Step17: Usamos o argumento has_pos para indicar que os atributos x e y devem ser usados para posicionar os nós. Esse argumento é False por padrão, pois nem todo arquivo atribui essas coordenadas.
Se elas não forem usadas, a visualização usa um tipo de force-directed graph drawing. | Python Code:
import sys
sys.path.append('..')
import socnet as sn
Explanation: Encontro 02, Parte 1: Revisão de Grafos
Este guia foi escrito para ajudar você a atingir os seguintes objetivos:
formalizar conceitos básicos de teoria dos grafos;
usar funcionalidades básicas da biblioteca da disciplina.
Grafos não-dirigidos
Um grafo não-dirigido (undirected graph) é um par
$(N, E)$,
onde $N$ é um conjunto qualquer e $E$ é um conjunto de pares não-ordenados de elementos de $N$, ou seja,
$E \subseteq {{n, m} \colon n \in N \textrm{ e } m \in N}$.
Um elemento de $N$ chama-se nó (node) e um elemento de $E$ chama-se aresta (edge). Em alguns trabalhos, usa-se $V$ e vértice em vez de $N$ e nó.
Grafos dirigidos
Formalmente, um grafo dirigido (directed graph) é um par
$(N, E)$,
onde $N$ é um conjunto qualquer e $E$ é um conjunto de pares ordenados de elementos de N, ou seja,
$E \subseteq {(n, m) \colon n \in N \textrm{ e } m \in N}$.
Um elemento de $N$ chama-se nó (node) e um elemento de $E$ chama-se aresta (edge). Em alguns trabalhos, usa-se $V$ e vértice em vez de $N$ e nó e usa-se $A$ e arco em vez de $E$ e aresta.
Instalando as dependências
Antes de continuar, instale as duas dependências da biblioteca da disciplina:
pip install networkx plotly
Em algumas distribuições Linux você deve usar o comando pip3, pois o comando pip está associado a Python 2 por padrão.
Importando a biblioteca
Não mova ou renomeie os arquivos do repositório, a menos que você esteja disposto a adaptar os notebooks de acordo.
Vamos importar a biblioteca da disciplina no notebook:
End of explanation
sn.graph_width = 800
sn.graph_height = 450
sn.node_size = 20
sn.node_color = (255, 255, 255)
sn.edge_width = 2
sn.edge_color = (0, 0, 0)
sn.node_label_position = 'middle center'
sn.edge_label_distance = 10
Explanation: Configurando a biblioteca
A socnet disponibiliza variáveis de módulo que permitem configurar propriedades visuais. Os nomes são auto-explicativos e os valores abaixo são padrão.
End of explanation
ug = sn.load_graph('5-kruskal.gml', has_pos=True)
dg = sn.load_graph('4-dijkstra.gml', has_pos=True)
Explanation: Uma variável de cor armazena uma tupla contendo três inteiros entre 0 e 255 que representam intensidades de vermelho, verde e azul respectivamente.
Uma variável de posição armazena uma string contendo duas palavras separadas por um espaço:
* a primeira representa o alinhamento vertical e pode ser top, middle ou bottom;
* a segunda representa o alinhamento horizontal e pode ser left, center ou right.
Carregando grafos
Vamos carregar dois grafos no formato GML:
End of explanation
sn.graph_width = 320
sn.graph_height = 180
sn.show_graph(ug)
Explanation: Abra esses arquivos em um editor de texto e note como o formato é auto-explicativo.
Visualizando grafos
Vamos visualizar o primeiro grafo, que é não-dirigido:
End of explanation
sn.graph_width = 320
sn.graph_height = 180
sn.show_graph(dg)
Explanation: Essa é a representação mais comum de grafos não-dirigidos: círculos como nós e retas como arestas. Se uma reta conecta o círculo que representa $n$ ao círculo que representa $m$, ela representa a aresta ${n, m}$.
Vamos agora visualizar o segundo grafo, que é dirigido:
End of explanation
ug.node[0]['color'] = (0, 0, 255)
print(ug.node[0]['color'])
Explanation: Essa é a representação mais comum de grafos dirigidos: círculos como nós e setas como arestas. Se uma seta sai do círculo que representa $n$ e entra no círculo que representa $m$, ela representa a aresta $(n, m)$.
Note que as duas primeiras linhas não são necessárias se você rodou a célula anterior, pois os valores atribuídos a graph_width e graph_height são exatamente iguais.
Atributos de nós e arestas
Na estrutura de dados usada pela socnet, os nós são inteiros e cada nó é asssociado a um dicionário que armazena seus atributos. Vamos modificar e imprimir o atributo color do nó $0$ do grafo ug. Esse atributo existe por padrão.
End of explanation
ug.edge[1][2]['color'] = (0, 255, 0)
print(ug.edge[1][2]['color'])
Explanation: Cada aresta também é asssociada a um dicionário que armazena seus atributos. Vamos modificar e imprimir o atributo color da aresta ${1, 2}$ do grafo ug. Esse atributo existe por padrão.
End of explanation
ug.edge[2][1]['color'] = (255, 0, 255)
print(ug.edge[1][2]['color'])
Explanation: Note que a ordem dos nós não importa, pois ug é um grafo não-dirigido.
End of explanation
sn.show_graph(ug)
Explanation: Os atributos color são exibidos na visualização.
End of explanation
sn.reset_node_colors(ug)
sn.reset_edge_colors(ug)
sn.show_graph(ug)
Explanation: Podemos usar funções de conveniência para reinicializar as cores.
End of explanation
for n in ug.nodes():
ug.node[n]['label'] = str(n)
for n, m in ug.edges():
ug.edge[n][m]['label'] = '?'
for n in dg.nodes():
dg.node[n]['label'] = str(n)
for n, m in dg.edges():
dg.edge[n][m]['label'] = '?'
Explanation: Os atributos label também podem ser exibidos na visualização, mas não existem por padrão. Primeiramente, precisamos criá-los.
End of explanation
sn.show_graph(ug, nlab=True, elab=True)
sn.show_graph(dg, nlab=True, elab=True)
Explanation: Depois, precisamos usar os argumentos nlab e elab para indicar que queremos exibi-los. Esses argumentos são False por padrão.
End of explanation
print(ug.neighbors(0))
Explanation: Vizinhos, predecessores e sucessores
Considere um grafo $(N, E)$ e um nó $n$. Suponha que esse grafo é não-dirigido.
Nesse caso, dizemos que $n$ é vizinho (neighbor) de $m$ se ${n, m} \in E$. Denotamos por $\mathcal{N}(n)$ o conjunto dos vizinhos de $n$.
End of explanation
print(dg.successors(0))
print(dg.predecessors(1))
Explanation: Suponha agora que o grafo $(N, E)$ é dirigido.
Nesse caso, dizemos que $n$ é predecessor de $m$ se $(n, m) \in E$ e dizemos que $n$ é sucessor de $m$ se $(m, n) \in E$. Denotamos por $\mathcal{P}(n)$ o conjunto dos predecessores de $n$ e denotamos por $\mathcal{S}(n)$ o conjunto dos sucessores de $n$.
End of explanation
ug.node[0]['color'] = (0, 0, 255)
ug.node[1]['color'] = (0, 0, 255)
ug.node[2]['color'] = (0, 0, 255)
ug.node[3]['color'] = (0, 0, 255)
ug.node[4]['color'] = (0, 0, 255)
ug.node[5]['color'] = (0, 0, 255)
ug.edge[0][1]['color'] = (0, 255, 0)
ug.edge[1][2]['color'] = (0, 255, 0)
ug.edge[2][3]['color'] = (0, 255, 0)
ug.edge[3][4]['color'] = (0, 255, 0)
ug.edge[4][5]['color'] = (0, 255, 0)
sn.show_graph(ug)
Explanation: Passeios, trilhas e caminhos
Se $(N, E)$ é um grafo não-dirigido:
um passeio (walk) é uma sequência de nós $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ tal que, para todo $i$ entre $0$ e $k-2$, temos que ${n_i, n_{i + 1}} \in E$;
uma trilha (trail) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-2$ tais que $i \neq j$ e ${n_i, n_{i+1}} = {n_j, n_{j+1}}$;
um caminho (path) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-1$ tais que $i \neq j$ e $n_i = n_j$.
Se $(N, E)$ é um grafo dirigido:
um passeio (walk) é uma sequência de nós $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ tal que, para todo $i$ entre $0$ e $k-2$, temos que $(n_i, n_{i + 1}) \in E$;
uma trilha (trail) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-2$ tais que $i \neq j$ e $(n_i, n_{i+1}) = (n_j, n_{j+1})$;
um caminho (path) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-1$ tais que $i \neq j$ e $n_i = n_j$.
Pode-se dizer que uma trilha é um passeio que não repete arestas e um caminho é um passeio que não repete nós.
Exercício 1
Dê um exemplo de passeio que não é trilha no grafo ug.
Um passeio que não é trilha é o seguinte:
- 0, 1, 7, 8, 6, 7, 1, 0
Exercício 2
Dê um exemplo de passeio que não é trilha no grafo dg.
Um exemplo de passeio que não é trilha é o seguinte:
- 0, 1, 3, 4, 0, 1, 2
Exercício 3
Dê um exemplo de trilha que não é caminho no grafo ug.
Um exemplo de trilha que não é caminho é o seguinte:
- 0, 1, 2, 5, 6, 8, 2, 3, 4, 5, 3
Exercício 4
Dê um exemplo de trilha que não é caminho no grafo dg.
Um exemplo de trilha que não é caminho é o seguinte:
- 0, 1, 3, 2, 4, 2
Exercício 5
Use cores para dar um exemplo de caminho no grafo ug.
End of explanation
dg.node[0]['color'] = (0, 0, 255)
dg.edge[0][1]['color'] = (0, 255, 0)
dg.node[1]['color'] = (0, 0, 255)
dg.edge[1][3]['color'] = (0, 255, 0)
dg.node[3]['color'] = (0, 0, 255)
dg.edge[3][2]['color'] = (0, 255, 0)
dg.node[2]['color'] = (0, 0, 255)
dg.edge[2][4]['color'] = (0, 255, 0)
dg.node[4]['color'] = (0, 0, 255)
sn.show_graph(dg)
Explanation: Exercício 6
Use cores para dar um exemplo de caminho no grafo dg.
End of explanation
sn.graph_width = 450
sn.graph_height = 450
sn.node_label_position = 'hover' # easter egg!
g = sn.load_graph('1-introducao.gml', has_pos=True)
sn.show_graph(g, nlab=True)
Explanation: Posicionamento dos nós
Para encerrar, vamos carregar o grafo do encontro anterior. O próprio arquivo atribui label aos nós, portanto não é necessário criá-los.
End of explanation
g = sn.load_graph('1-introducao.gml')
sn.show_graph(g, nlab=True)
Explanation: Usamos o argumento has_pos para indicar que os atributos x e y devem ser usados para posicionar os nós. Esse argumento é False por padrão, pois nem todo arquivo atribui essas coordenadas.
Se elas não forem usadas, a visualização usa um tipo de force-directed graph drawing.
End of explanation |
2,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EventVestor
Step1: Let's go over the columns
Step2: Finally, suppose we want the above as a DataFrame | Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import contract_win
# or if you want to import the free dataset, use:
# from quantopian.data.eventvestor import contract_win_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
contract_win.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
contract_win.count()
# Let's see what the data looks like. We'll grab the first three rows.
contract_win[:3]
Explanation: EventVestor: Contract Wins
In this notebook, we'll take a look at EventVestor's Contract Wins dataset, available on the Quantopian Store. This dataset spans January 01, 2007 through the current day, and documents major contract wins by companies.
Blaze
Before we dig into the data, we want to tell you about how you generally access Quantopian Store data sets. These datasets are available through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
from odo import odo
odo(expr, pandas.DataFrame)
Free samples and limits
One other key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free one includes about three years of historical data, though not up to the current day.
With preamble in place, let's get started:
End of explanation
ba_sid = symbols('BA').sid
wins = contract_win[contract_win.sid == ba_sid][['timestamp', 'contract_amount','amount_units','contract_entity']].sort('timestamp')
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
wins
Explanation: Let's go over the columns:
- event_id: the unique identifier for this contract win.
- asof_date: EventVestor's timestamp of event capture.
- trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
- symbol: stock ticker symbol of the affected company.
- event_type: this should always be Contract Win.
- contract_amount: the amount of amount_units the contract is for.
- amount_units: the currency or other units for the value of the contract. Most commonly in millions of dollars.
- contract_entity: name of the customer, if available
- event_rating: this is always 1. The meaning of this is uncertain.
- timestamp: this is our timestamp on when we registered the data.
- sid: the equity's unique identifier. Use this instead of the symbol.
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all contract wins by Boeing. We'll display only the contract_amount, amount_units, contract_entity, and timestamp. We'll sort by date.
End of explanation
ba_df = odo(wins, pd.DataFrame)
# Printing a pandas DataFrame displays the first 30 and last 30 items, and truncates the middle.
ba_df
Explanation: Finally, suppose we want the above as a DataFrame:
End of explanation |
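A small hedged follow-up (not in the original notebook): once the data is in pandas, the wins can be summarized per year; this assumes the timestamp column converts cleanly to pandas datetimes:
# Total contract value recorded for Boeing per year (units as given in amount_units).
ba_df['year'] = pd.to_datetime(ba_df['timestamp']).dt.year
print(ba_df.groupby('year')['contract_amount'].sum())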
2,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theory and Practice of Visualization Exercise 1
Imports
Step1: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook. | Python Code:
from IPython.display import Image
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
# Add your filename and uncomment the following line:
Image(filename='alcohol-consumption-by-country-pure-alcohol-consumption-per-drinker-2010_chartbuilder-1.png')
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation |
2,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
T81-558
Step1: Toolkit
Step2: Binary Classification
Binary classification is used to create a model that classifies between only two classes. These two classes are often called "positive" and "negative". Consider the following program that uses the wcbreast_wdbc dataset to classify if a breast tumor is cancerous (malignant) or not (benign). The iris dataset is not binary, because there are three classes (3 types of iris).
Step3: Confusion Matrix
The confusion matrix is a common visualization for both binary and larger classification problems. Often a model will have difficulty differentiating between two classes. For example, a neural network might be really good at telling the difference between cats and dogs, but not so good at telling the difference between dogs and wolves. The following code generates a confusion matrix
Step4: The above two confusion matrixes show the same network. The bottom (normalized) is the type you will normally see. Notice the two labels. The label "B" means benign (no cancer) and the label "M" means malignant (cancer). The left-right (x) axis are the predictions, the top-bottom) are the expected outcomes. A perfect model (that never makes an error) has a dark blue diagonal that runs from top-left to bottom-right.
To read, consider the top-left square. This square indicates "true labeled" of B and also "predicted label" of B. This is good! The prediction matched the truth. The blueness of this box represents how often "B" is classified correct. It is not darkest blue. This is because the square to the right(which is off the perfect diagonal) has some color. This square indicates truth of "B" but prediction of "M". The white square, at the bottom-left, indicates a true of "M" but predicted of "B". The whiteness indicates this rarely happens.
Your conclusion from the above chart is that the model sometimes classifies "B" as "M" (a false negative), but never mis-classifis "M" as "B". Always look for the dark diagonal, this is good!
ROC Curves
ROC curves can be a bit confusing. However, they are very common. It is important to know how to read them. Even their name is confusing. Do not worry about their name, it comes from electrical engineering (EE).
Binary classification is common in medical testing. Often you want to diagnose if someone has a disease. This can lead to two types of errors, know as false positives and false negatives
Step5: Classification
We've already seen multi-class classification, with the iris dataset. Confusion matrixes work just fine with 3 classes. The following code generates a confusion matrix for iris.
Step6: See the strong diagonal? Iris is easy. See the light blue near the bottom? Sometimes virginica is confused for versicolor.
Regression
We've already seen regression with the MPG dataset. Regression uses its own set of visualizations, one of the most common is the lift chart. The following code generates a lift chart. | Python Code:
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df,name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name,x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df,name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df,name,mean=None,sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name]-mean)/sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df,target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32)
else:
# Regression
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
Explanation: T81-558: Applications of Deep Neural Networks
Class 4: Classification and Regression
* Instructor: Jeff Heaton, School of Engineering and Applied Science, Washington University in St. Louis
* For more information visit the class website.
Binary Classification, Classification and Regression
Binary Classification - Classification between two possibilities (positive and negative). Common in medical testing, does the person have the disease (positive) or not (negative).
Classification - Classification between more than 2. The iris dataset (3-way classification).
Regression - Numeric prediction. How many MPG does a car get?
In this class session we will look at some visualizations for all three.
Feature Vector Encoding
These are exactly the same feature vector encoding functions from Class 3. They must be defined for this class as well. For more information, refer to class 3.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
# Plot a confusion matrix.
# cm is the confusion matrix, names are the names of the classes.
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names, rotation=45)
plt.yticks(tick_marks, names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Plot an ROC. pred - the predictions, y - the expected output.
def plot_roc(pred,y):
fpr, tpr, _ = roc_curve(y_test, pred)
roc_auc = auc(fpr, tpr)
plt.figure()
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic (ROC)')
plt.legend(loc="lower right")
plt.show()
# Plot a lift curve. pred - the predictions, y - the expected output.
def chart_regression(pred,y):
t = pd.DataFrame({'pred' : pred.flatten(), 'y' : y_test.flatten()})
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
Explanation: Toolkit: Visualization Functions
This class will introduce 3 different visualizations that can be used with the two different classification type neural networks and regression neural networks.
Confusion Matrix - For any type of classification neural network.
ROC Curve - For binary classification.
Lift Curve - For regression neural networks.
The code used to produce these visualizations is shown here:
End of explanation
import os
import pandas as pd
from sklearn.cross_validation import train_test_split
import tensorflow.contrib.learn as skflow
import numpy as np
from sklearn import metrics
path = "./data/"
filename = os.path.join(path,"wcbreast_wdbc.csv")
df = pd.read_csv(filename,na_values=['NA','?'])
# Encode feature vector
df.drop('id',axis=1,inplace=True)
encode_numeric_zscore(df,'mean_radius')
encode_text_index(df,'mean_texture')
encode_text_index(df,'mean_perimeter')
encode_text_index(df,'mean_area')
encode_text_index(df,'mean_smoothness')
encode_text_index(df,'mean_compactness')
encode_text_index(df,'mean_concavity')
encode_text_index(df,'mean_concave_points')
encode_text_index(df,'mean_symmetry')
encode_text_index(df,'mean_fractal_dimension')
encode_text_index(df,'se_radius')
encode_text_index(df,'se_texture')
encode_text_index(df,'se_perimeter')
encode_text_index(df,'se_area')
encode_text_index(df,'se_smoothness')
encode_text_index(df,'se_compactness')
encode_text_index(df,'se_concavity')
encode_text_index(df,'se_concave_points')
encode_text_index(df,'se_symmetry')
encode_text_index(df,'se_fractal_dimension')
encode_text_index(df,'worst_radius')
encode_text_index(df,'worst_texture')
encode_text_index(df,'worst_perimeter')
encode_text_index(df,'worst_area')
encode_text_index(df,'worst_smoothness')
encode_text_index(df,'worst_compactness')
encode_text_index(df,'worst_concavity')
encode_text_index(df,'worst_concave_points')
encode_text_index(df,'worst_symmetry')
encode_text_index(df,'worst_fractal_dimension')
diagnosis = encode_text_index(df,'diagnosis')
num_classes = len(diagnosis)
# Create x & y for training
# Create the x-side (feature vectors) of the training
x, y = to_xy(df,'diagnosis')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# Create a deep neural network with 3 hidden layers of 10, 20, 10
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=num_classes,
steps=10000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50, n_classes=num_classes)
# Fit/train neural network
classifier.fit(x_train, y_train, early_stop)
# Measure accuracy
score = metrics.accuracy_score(y, classifier.predict(x))
print("Final accuracy: {}".format(score))
Explanation: Binary Classification
Binary classification is used to create a model that classifies between only two classes. These two classes are often called "positive" and "negative". Consider the following program that uses the wcbreast_wdbc dataset to classify if a breast tumor is cancerous (malignant) or not (benign). The iris dataset is not binary, because there are three classes (3 types of iris).
End of explanation
import numpy as np
from sklearn import svm, datasets
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
pred = classifier.predict(x_test)
# Compute confusion matrix
cm = confusion_matrix(y_test, pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm, diagnosis)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, diagnosis, title='Normalized confusion matrix')
plt.show()
Explanation: Confusion Matrix
The confusion matrix is a common visualization for both binary and larger classification problems. Often a model will have difficulty differentiating between two classes. For example, a neural network might be really good at telling the difference between cats and dogs, but not so good at telling the difference between dogs and wolves. The following code generates a confusion matrix:
End of explanation
pred = classifier.predict_proba(x_test)
pred = pred[:,1] # Only positive cases
# print(pred[:,1])
plot_roc(pred,y_test)
Explanation: The above two confusion matrixes show the same network. The bottom (normalized) is the type you will normally see. Notice the two labels. The label "B" means benign (no cancer) and the label "M" means malignant (cancer). The left-right (x) axis are the predictions, the top-bottom) are the expected outcomes. A perfect model (that never makes an error) has a dark blue diagonal that runs from top-left to bottom-right.
To read, consider the top-left square. This square indicates "true labeled" of B and also "predicted label" of B. This is good! The prediction matched the truth. The blueness of this box represents how often "B" is classified correct. It is not darkest blue. This is because the square to the right(which is off the perfect diagonal) has some color. This square indicates truth of "B" but prediction of "M". The white square, at the bottom-left, indicates a true of "M" but predicted of "B". The whiteness indicates this rarely happens.
Your conclusion from the above chart is that the model sometimes classifies "B" as "M" (a false negative), but never mis-classifis "M" as "B". Always look for the dark diagonal, this is good!
ROC Curves
ROC curves can be a bit confusing. However, they are very common. It is important to know how to read them. Even their name is confusing. Do not worry about their name, it comes from electrical engineering (EE).
Binary classification is common in medical testing. Often you want to diagnose if someone has a disease. This can lead to two types of errors, know as false positives and false negatives:
False Positive - Your test (neural network) indicated that the patient had the disease; however, the patient did not have the disease.
False Negative - Your test (neural network) indicated that the patient did not have the disease; however, the patient did have the disease.
True Positive - Your test (neural network) correctly identified that the patient had the disease.
True Negative - Your test (neural network) correctly identified that the patient did not have the disease.
Types of errors:
Neural networks classify in terms of probbility of it being positive. However, at what probability do you give a positive result? Is the cutoff 50%? 90%? Where you set this cutoff is called the threshold. Anything above the cutoff is positive, anything below is negative. Setting this cutoff allows the model to be more sensative or specific:
The following shows a more sensitive cutoff:
An ROC curve measures how good a model is regardless of the cutoff. The following shows how to read a ROC chart:
The following code shows an ROC chart for the breast cancer neural network. The area under the curve (AUC) is also an important measure. The larger the AUC, the better.
End of explanation
import os
import pandas as pd
from sklearn.cross_validation import train_test_split
import tensorflow.contrib.learn as skflow
import numpy as np
path = "./data/"
filename = os.path.join(path,"iris.csv")
df = pd.read_csv(filename,na_values=['NA','?'])
# Encode feature vector
encode_numeric_zscore(df,'petal_w')
encode_numeric_zscore(df,'petal_l')
encode_numeric_zscore(df,'sepal_w')
encode_numeric_zscore(df,'sepal_l')
species = encode_text_index(df,"species")
num_classes = len(species)
# Create x & y for training
# Create the x-side (feature vectors) of the training
x, y = to_xy(df,'species')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=45)
# as much as I would like to use 42, it gives a perfect result, and a boring confusion matrix!
# Create a deep neural network with 3 hidden layers of 10, 20, 10
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=num_classes,
steps=10000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50, n_classes=num_classes)
# Fit/train neural network
classifier.fit(x_train, y_train, early_stop)
import numpy as np
from sklearn import svm, datasets
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
pred = classifier.predict(x_test)
# Compute confusion matrix
cm = confusion_matrix(y_test, pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm, species)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, species, title='Normalized confusion matrix')
plt.show()
Explanation: Classification
We've already seen multi-class classification, with the iris dataset. Confusion matrixes work just fine with 3 classes. The following code generates a confusion matrix for iris.
End of explanation
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,['mpg'])
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, early_stop)
pred = regressor.predict(x_test)
chart_regression(pred,y_test)
Explanation: See the strong diagonal? Iris is easy. See the light blue near the bottom? Sometimes virginica is confused for versicolor.
Regression
We've already seen regression with the MPG dataset. Regression uses its own set of visualizations, one of the most common is the lift chart. The following code generates a lift chart.
End of explanation |
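Alongside the lift chart, a single error number is often useful; a hedged addition (not in the original notebook) computes the RMSE of the regression predictions:
from sklearn import metrics
import numpy as np

# Root-mean-square error between expected MPG and predicted MPG.
rmse = np.sqrt(metrics.mean_squared_error(y_test.flatten(), pred.flatten()))
print("Final RMSE: {}".format(rmse))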
2,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing covariance matrix
Step1: Source estimation method such as MNE require a noise estimations from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see BABDEEEB.
Step2: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the same
as the end of the recording, see
Step3: Now that you the covariance matrix in a python object you can save it to a
file with
Step4: Note that this method also attenuates the resting state activity in your
source estimates.
Step5: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
Step6: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization), especially if only few samples are available.
Unfortunately it is not easy to tell the effective number of samples, hence,
to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization
Step7: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
be follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the global field power
(GFP) is 1 (calculation of the GFP should take into account the true degrees
of freedom, e.g. ddof=3 with 2 active SSP vectors)
Step8: This plot displays both, the whitened evoked signals for each channels and
the whitened GFP. The numbers in the GFP panel represent the estimated rank
of the data, which amounts to the effective degrees of freedom by which the
squared sum across sensors is divided when computing the whitened GFP.
The whitened GFP also helps detecting spurious late evoked components which
can be the consequence of over- or under-regularization.
Note that if data have been processed using signal space separation
(SSS) [2],
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https | Python Code:
import os.path as op
import mne
from mne.datasets import sample
Explanation: Computing covariance matrix
End of explanation
data_path = sample.data_path()
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname, add_eeg_ref=False)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=False)
raw.set_eeg_reference()
raw.info['bads'] += ['EEG 053'] # bads + 1 more
Explanation: Source estimation method such as MNE require a noise estimations from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see BABDEEEB.
End of explanation
noise_cov = mne.compute_raw_covariance(raw_empty_room, tmin=0, tmax=None)
Explanation: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the same
as the end of the recording, see :func:mne.compute_raw_covariance).
End of explanation
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.0,
baseline=(-0.2, 0.0))
Explanation: Now that you the covariance matrix in a python object you can save it to a
file with :func:mne.write_cov. Later you can read it back to a python
object using :func:mne.read_cov.
You can also use the pre-stimulus baseline to estimate the noise covariance.
First we have to construct the epochs. When computing the covariance, you
should use baseline correction when constructing the epochs. Otherwise the
covariance matrix will be inaccurate. In MNE this is done by default, but
just to be sure, we define it here manually.
End of explanation
noise_cov_baseline = mne.compute_covariance(epochs)
Explanation: Note that this method also attenuates the resting state activity in your
source estimates.
End of explanation
noise_cov.plot(raw_empty_room.info, proj=True)
noise_cov_baseline.plot(epochs.info)
Explanation: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
End of explanation
cov = mne.compute_covariance(epochs, tmax=0., method='auto')
Explanation: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization), especially if only few samples are available.
Unfortunately it is not easy to tell the effective number of samples, hence,
to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization:
End of explanation
evoked = epochs.average()
evoked.plot_white(cov)
Explanation: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
be follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the global field power
(GFP) is 1 (calculation of the GFP should take into account the true degrees
of freedom, e.g. ddof=3 with 2 active SSP vectors):
End of explanation
covs = mne.compute_covariance(epochs, tmax=0., method=('empirical', 'shrunk'),
return_estimators=True)
evoked = epochs.average()
evoked.plot_white(covs)
Explanation: This plot displays both, the whitened evoked signals for each channels and
the whitened GFP. The numbers in the GFP panel represent the estimated rank
of the data, which amounts to the effective degrees of freedom by which the
squared sum across sensors is divided when computing the whitened GFP.
The whitened GFP also helps detecting spurious late evoked components which
can be the consequence of over- or under-regularization.
Note that if data have been processed using signal space separation
(SSS) [2],
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https://goo.gl/ElWrxe>.
For expert use cases or debugging the alternative estimators can also be
compared:
End of explanation |
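As mentioned earlier in the tutorial, covariance objects can also be written to disk and read back; a minimal sketch (the filename is an arbitrary example, and MNE expects it to end in -cov.fif):
fname_cov = 'sample_audvis-cov.fif'
mne.write_cov(fname_cov, noise_cov_baseline)
noise_cov_loaded = mne.read_cov(fname_cov)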
2,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, shape=(None,image_size), name="inputs")
targets_= tf.placeholder(tf.float32, shape=(None,image_size), name="targets")
# Output of hidden layer
encoded = tf.layers.dense(inputs=inputs_, units=encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(inputs=encoded, units=image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits,name="output")
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_,logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
2,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Histogram of one column by binning on another continuous column
Step1: Let's create a data frame with a column made up of 1's and 0's and another categorical column.
Step2: Now, let's create histograms of the N column, but using the class column as a grouping, via the 'by' param in hist()
Step3: OK, let's weight the creation of the binary column, using p in random.choice()
Step4: OK, but what about using a continuous variable? We can use pandas.cut() to bin the continuous variable
Step5: Let's create another dataframe using a binary column and the binning from above | Python Code:
%pylab inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Histogram of one column by binning on another continuous column
End of explanation
# Class label would be categorical variable derived from binning the continuous column
x = ['Class1']*300 + ['Class2']*400 + ['Class3']*300
# Column of random 0s and 1s
y = np.random.choice([0,1], 1000)
# Dataframe from the above variables
df = pd.DataFrame({'Class':x, 'N':y})
Explanation: Let's create a data frame with a column made up of 1's and 0's and another categorical column.
End of explanation
# From this grouping, plot histograms
plts = df['N'].hist(by=df['Class'])
Explanation: Now, let's create histograms of the N column, but using the class column as a grouping, via the 'by' param in hist():
End of explanation
x = ['Class1']*300 + ['Class2']*400 + ['Class3']*300
y = np.random.choice([0,1], 1000, p=[0.25, 0.75])
df = pd.DataFrame({'Class':x, 'N':y})
# grouped = df.groupby('Class')
plts = df['N'].hist(by=df['Class'])
Explanation: OK, let's weight the creation of the binary column, using p in random.choice():
End of explanation
# Random x data: values from 0 - 9
x = np.random.rand(1000) * 9
# Here we bin the continuous x variable into three intervals with edges at 0, 3, 6 and 9
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html
bins = pd.cut(x, [0, 3, 6, 9])
bins
Explanation: OK, but what about using a continuous variable? We can use pandas.cut() to bin the continuous variable:
End of explanation
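For a quick sanity check before plotting, it can help to confirm how many points landed in each interval. A minimal sketch, assuming the bins object created in the cell above:
# Count how many of the 1000 x values fall into each interval
bin_counts = pd.Series(bins).value_counts().sort_index()
print(bin_counts)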
# Column of random 0s and 1s
y = np.random.choice([0,1], 1000)
# Data frame made from column of 0s and 1s and the other column the categorical binning of the continuous x data
df = pd.DataFrame({'y':y, 'Class': bins})
plts = df['y'].hist(by=df['Class'])
# Column of random 0s and 1s, weighed
y = np.random.choice([0,1], 1000, p = [0.25, 0.75])
# Data frame made from column of 0s and 1s and the other column the categorical binning of the continuous x data
df = pd.DataFrame({'y':y, 'Class': bins})
plts = df['y'].hist(by=df['Class'])
Explanation: Let's create another dataframe using a binary column and the binning from above:
End of explanation |
2,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Data Validation (Advanced)
Learning Objectives
Install TFDV
Compute and visualize statistics
Infer a schema
Check evaluation data for errors
Check for evaluation anomalies and fix it
Check for drift and skew
Freeze the schema
Introduction
This notebook illustrates how TensorFlow Data Validation (TFDV) can be used to investigate and visualize your dataset. That includes looking at descriptive statistics, inferring a schema, checking for and fixing anomalies, and checking for drift and skew in our dataset. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent.
We'll use data from the Taxi Trips dataset released by the City of Chicago.
Note
Step1: Restart the kernel (Kernel > Restart kernel > Restart).
Re-run the above cell and proceed further.
Note
Step2: Load the Files
We will download our dataset from Google Cloud Storage.
Step3: Check the version
Step4: Compute and visualize statistics
First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings)
TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions.
Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.
Step5: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data
Step6: Infer a schema
Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.
Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it.
Step7: Check evaluation data for errors
So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.
Notice that each feature now includes statistics for both the training and evaluation datasets.
Notice that the charts now have both the training and evaluation datasets overlaid, making it easy to compare them.
Notice that the charts now include a percentages view, which can be combined with log or the default linear scales.
Notice that the mean and median for trip_miles are different for the training versus the evaluation datasets. Will that cause problems?
Wow, the max tips is very different for the training versus the evaluation datasets. Will that cause problems?
Click expand on the Numeric Features chart, and select the log scale. Review the trip_seconds feature, and notice the difference in the max. Will evaluation miss parts of the loss surface?
Step8: Check for evaluation anomalies
Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values.
Key Point
Step9: Fix evaluation anomalies in the schema
Oops! It looks like we have some new values for company in our evaluation data, that we didn't have in our training data. We also have a new value for payment_type. These should be considered anomalies, but what we decide to do about them depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset.
Key Point
Step10: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;)
Schema Environments
We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary.
Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.
For example, in this dataset the tips feature is included as the label for training, but it's missing in the serving data. Without environment specified, it will show up as an anomaly.
Step11: We'll deal with the tips feature below. We also have an INT value in our trip seconds, where our schema expected a FLOAT. By making us aware of that difference, TFDV helps uncover inconsistencies in the way the data is generated for training and serving. It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation.
In this case, we can safely convert INT values to FLOATs, so we want to tell TFDV to use our schema to infer the type. Let's do that now.
Step12: Now we just have the tips feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
Step13: Check for drift and skew
In addition to checking whether a dataset conforms to the expectations set in the schema, TFDV also provides functionalities to detect drift and skew. TFDV performs this check by comparing the statistics of the different datasets based on the drift/skew comparators specified in the schema.
Drift
Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
Skew
TFDV can detect three different kinds of skew in your data - schema skew, feature skew, and distribution skew.
Schema Skew
Schema skew occurs when the training and serving data do not conform to the same schema. Both training and serving data are expected to adhere to the same schema. Any expected deviations between the two (such as the label feature being only present in the training data but not in serving) should be specified through environments field in the schema.
Feature Skew
Feature skew occurs when the feature values that a model trains on are different from the feature values that it sees at serving time. For example, this can happen when
Step14: In this example we do see some drift, but it is well below the threshold that we've set.
Freeze the schema
Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state. | Python Code:
!pip install pyarrow==5.0.0
!pip install numpy==1.19.2
!pip install tensorflow-data-validation
Explanation: TensorFlow Data Validation (Advanced)
Learning Objectives
Install TFDV
Compute and visualize statistics
Infer a schema
Check evaluation data for errors
Check for evaluation anomalies and fix it
Check for drift and skew
Freeze the schema
Introduction
This notebook illustrates how TensorFlow Data Validation (TFDV) can be used to investigate and visualize your dataset. That includes looking at descriptive statistics, inferring a schema, checking for and fixing anomalies, and checking for drift and skew in our dataset. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent.
We'll use data from the Taxi Trips dataset released by the City of Chicago.
Note: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.
Read more about the dataset in Google BigQuery. Explore the full dataset in the BigQuery UI.
Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about ML fairness.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the Solution Notebook for reference.
The columns in the dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
Install Libraries
End of explanation
import pandas as pd
import tensorflow_data_validation as tfdv
import sys
import warnings
warnings.filterwarnings('ignore')
print('Installing TensorFlow Data Validation')
!pip install -q tensorflow_data_validation[visualization]
Explanation: Restart the kernel (Kernel > Restart kernel > Restart).
Re-run the above cell and proceed further.
Note: Please ignore any incompatibility warnings and errors.
Install TFDV
This will pull in all the dependencies, which will take a minute. Please ignore the warnings or errors regarding incompatible dependency versions.
End of explanation
import os
import tempfile, urllib, zipfile
# Set up some globals for our file paths
BASE_DIR = tempfile.mkdtemp()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
TRAIN_DATA = os.path.join(DATA_DIR, 'train', 'data.csv')
EVAL_DATA = os.path.join(DATA_DIR, 'eval', 'data.csv')
SERVING_DATA = os.path.join(DATA_DIR, 'serving', 'data.csv')
# Download the zip file from GCP and unzip it
zip, headers = urllib.request.urlretrieve('https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/chicago_data.zip')
zipfile.ZipFile(zip).extractall(BASE_DIR)
zipfile.ZipFile(zip).close()
print("Here's what we downloaded:")
!ls -R {os.path.join(BASE_DIR, 'data')}
Explanation: Load the Files
We will download our dataset from Google Cloud Storage.
End of explanation
import tensorflow_data_validation as tfdv
print('TFDV version: {}'.format(tfdv.version.__version__))
Explanation: Check the version
End of explanation
# Compute data statistics from CSV files.
# TODO: Your code goes here
train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA)
Explanation: Compute and visualize statistics
First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings)
TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions.
Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.
End of explanation
# Visualize the input statistics using Facets.
# TODO: Your code goes here
tfdv.visualize_statistics(train_stats)
Explanation: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data:
Notice that numeric features and categorical features are visualized separately, and that charts are displayed showing the distributions for each feature.
Notice that features with missing or zero values display a percentage in red as a visual indicator that there may be issues with examples in those features. The percentage is the percentage of examples that have missing or zero values for that feature.
Notice that there are no examples with values for pickup_census_tract. This is an opportunity for dimensionality reduction!
Try clicking "expand" above the charts to change the display
Try hovering over bars in the charts to display bucket ranges and counts
Try switching between the log and linear scales, and notice how the log scale reveals much more detail about the payment_type categorical feature
Try selecting "quantiles" from the "Chart to show" menu, and hover over the markers to show the quantile percentages
End of explanation
# Infers schema from the input statistics.
# TODO: Your code goes here
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
Explanation: Infer a schema
Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.
Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it.
End of explanation
# Compute stats for evaluation data
eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA)
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
Explanation: Check evaluation data for errors
So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.
Notice that each feature now includes statistics for both the training and evaluation datasets.
Notice that the charts now have both the training and evaluation datasets overlaid, making it easy to compare them.
Notice that the charts now include a percentages view, which can be combined with log or the default linear scales.
Notice that the mean and median for trip_miles are different for the training versus the evaluation datasets. Will that cause problems?
Wow, the max tips is very different for the training versus the evaluation datasets. Will that cause problems?
Click expand on the Numeric Features chart, and select the log scale. Review the trip_seconds feature, and notice the difference in the max. Will evaluation miss parts of the loss surface?
End of explanation
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
# TODO: Your code goes here
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
Explanation: Check for evaluation anomalies
Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values.
Key Point: What would happen if we tried to evaluate using data with categorical feature values that were not in our training dataset? What about numeric features that are outside the ranges in our training dataset?
End of explanation
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
# TODO: Your code goes here
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
Explanation: Fix evaluation anomalies in the schema
Oops! It looks like we have some new values for company in our evaluation data, that we didn't have in our training data. We also have a new value for payment_type. These should be considered anomalies, but what we decide to do about them depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset.
Key Point: How would our evaluation results be affected if we did not fix these problems?
Unless we change our evaluation dataset we can't fix everything, but we can fix things in the schema that we're comfortable accepting. That includes relaxing our view of what is and what is not an anomaly for particular features, as well as updating our schema to include missing values for categorical features. TFDV has enabled us to discover what we need to fix.
Let's make those fixes now, and then review one more time.
End of explanation
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
Explanation: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;)
Schema Environments
We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary.
Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.
For example, in this dataset the tips feature is included as the label for training, but it's missing in the serving data. Without environment specified, it will show up as an anomaly.
End of explanation
options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
Explanation: We'll deal with the tips feature below. We also have an INT value in our trip seconds, where our schema expected a FLOAT. By making us aware of that difference, TFDV helps uncover inconsistencies in the way the data is generated for training and serving. It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation.
In this case, we can safely convert INT values to FLOATs, so we want to tell TFDV to use our schema to infer the type. Let's do that now.
End of explanation
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
# Specify that 'tips' feature is not in SERVING environment.
tfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING')
serving_anomalies_with_env = tfdv.validate_statistics(
serving_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies_with_env)
Explanation: Now we just have the tips feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
End of explanation
# Add skew comparator for 'payment_type' feature.
payment_type = tfdv.get_feature(schema, 'payment_type')
payment_type.skew_comparator.infinity_norm.threshold = 0.01
# Add drift comparator for 'company' feature.
company=tfdv.get_feature(schema, 'company')
company.drift_comparator.infinity_norm.threshold = 0.001
# TODO: Your code goes here
skew_anomalies = tfdv.validate_statistics(train_stats, schema,
                                          previous_statistics=eval_stats,
                                          serving_statistics=serving_stats)
tfdv.display_anomalies(skew_anomalies)
Explanation: Check for drift and skew
In addition to checking whether a dataset conforms to the expectations set in the schema, TFDV also provides functionalities to detect drift and skew. TFDV performs this check by comparing the statistics of the different datasets based on the drift/skew comparators specified in the schema.
Drift
Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
Skew
TFDV can detect three different kinds of skew in your data - schema skew, feature skew, and distribution skew.
Schema Skew
Schema skew occurs when the training and serving data do not conform to the same schema. Both training and serving data are expected to adhere to the same schema. Any expected deviations between the two (such as the label feature being only present in the training data but not in serving) should be specified through environments field in the schema.
Feature Skew
Feature skew occurs when the feature values that a model trains on are different from the feature values that it sees at serving time. For example, this can happen when:
A data source that provides some feature values is modified between training and serving time
There is different logic for generating features between training and serving. For example, if you apply some transformation only in one of the two code paths.
Distribution Skew
Distribution skew occurs when the distribution of the training dataset is significantly different from the distribution of the serving dataset. One of the key causes for distribution skew is using different code or different data sources to generate the training dataset. Another reason is a faulty sampling mechanism that chooses a non-representative subsample of the serving data to train on.
End of explanation
from tensorflow.python.lib.io import file_io
from google.protobuf import text_format
file_io.recursive_create_dir(OUTPUT_DIR)
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
Explanation: In this example we do see some drift, but it is well below the threshold that we've set.
Freeze the schema
Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
End of explanation |
2,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Computational Physics - Worksheet 3 - Numerical Integration and Differentiation
Rafael Isaque Santos - 2012144694 - Licenciatura em Física
Step1: 1 - Computation of the integral $\int_{0}^{\pi} e^{x} \cos(x) \; dx$
Step2: Integrating $e^{x} \cos(x)$
Step3: Differentiate $e^{x} \sin(x) + e^{-x} \cos(x)$ at the points
Step4: Compute the integral
Step5: Using the transformations
$x = \frac{y}{1-y}$
$x = \tan \left[ \frac{\pi}{4} (1+y) \right]$
Double integral of
$\int_{0}^{1} \left( \int_{-\sqrt{1-y^{2}}}^{\sqrt{1-y^{2}}} \, dx \right) dy$
Step6: Double integral of
$\int_{0}^{1} \left( \int_{-\sqrt{1-y^{2}}}^{\sqrt{1-y^{2}}} e^{-xy} \, dx \right) dy$ | Python Code:
from numpy import sin, cos, tan, pi, e, exp, log, copy, linspace
from numpy.polynomial.legendre import leggauss
n_list = [2, 4, 8, 10, 20, 30, 50, 100]
Explanation: Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Computational Physics - Worksheet 3 - Numerical Integration and Differentiation
Rafael Isaque Santos - 2012144694 - Licenciatura em Física
End of explanation
f1 = lambda x: exp(x) * cos(x)
f1_sol = -(exp(pi) + 1) / 2
trapezios_simples = lambda f, a, b: (b-a)/2 * (f(a) + f(b))
simpson13_simples = lambda f, a, b: ((b-a)/3) * (f(a) + 4*f((a + b)/2) + f(b))
simpson38_simples = lambda f, a, b: (3/8)*(b-a) * (f(a) + 3*f((2*a + b)/3) + 3*f((a + 2*b)/3) + f(b))
def trapezios_composta(f, a, b, n):
h = (b-a)/n
xi = a
s_int = 0
for i in range(n):
s_int += f(xi) + f(xi+h)
xi += h
s_int *= h/2
return s_int
def simpson13_composta(f, a, b, n):
h = (b-a)/n
x = linspace(a, b, n+1)
s_int = 0
for i in range(0, n, 2):
s_int += f(x[i]) + 4*f(x[i+1]) + f(x[i+2])
s_int *= h/3
return s_int
from sympy import oo  # the 'infinity' symbol, used to flag an infinite upper limit
def gausslegendre(f, a, b, x_pts, w_pts):
x_gl = copy(x_pts)
w_gl = copy(w_pts)
def gl_sum(f, x_list, w_list):
s_int = 0
for x, w in zip(x_list, w_list):
s_int += w * f(x)
return s_int
if (a == -1 and b == 1): return gl_sum(f, x_gl, w_gl)
elif (a == 0 and b == oo):
x_inf = list(map(lambda x: tan( pi/4 * (1+x)), copy(x_pts)))
w_inf = list(map(lambda w, x: pi/4 * w/(cos(pi/4 * (1+x)))**2, copy(w_pts), copy(x_pts)))
return gl_sum(f, x_inf, w_inf)
else:
h = (b-a)/2
xi = list(map(lambda x: h * (x + 1) + a, x_gl))
return h * gl_sum(f, xi, w_gl)
def erro_rel(est, real):
if real == 0: return abs((est-real)/(est+real)) * 100
else: return abs((est-real)/real) * 100
def aval_simples(f, a, b, real_value):
print('Utilizando os métodos:')
trap_si = trapezios_simples(f, a, b)
print('Trapézio Simples: ' + str(trap_si) + ' Erro Relativo: ' + str(erro_rel(trap_si, real_value)) + ' %')
simps13_si = simpson13_simples(f, a, b)
print('Simpson 1/3 Simples: ' + str(simps13_si) + ' Erro Relativo: ' + str(erro_rel(simps13_si, real_value)) + ' %')
simps38_si = simpson38_simples(f, a, b)
print('Simpson 3/8 Simples: ' + str(simps38_si) + ' Erro Relativo: ' + str(erro_rel(simps38_si, real_value)) + ' %')
def aval_composta(f, a, b, n, x_n, w_n, real_value):
print('Utilizando os métodos: [N = ' + str(n) + '] \n')
trap_c = trapezios_composta(f, a, b, n)
print('Trapézios Composta: ' + str(trap_c) + ' Erro Relativo: ' + str(erro_rel(trap_c, real_value)))
simp_13_c = simpson13_composta(f, a, b, n)
print('Simpson Composta: ' + str(simp_13_c) + ' Erro Relativo: ' + str(erro_rel(simp_13_c, real_value)))
gaule_ab = gausslegendre(f, a, b, x_n, w_n)
print('Gauss-Legendre: ' + str(gaule_ab) + ' Erro Relativo: ' + str(erro_rel(gaule_ab, real_value)))
print('\n')
Explanation: 1 - Computation of the integral $\int_{0}^{\pi} e^{x} \cos(x) \; dx$
End of explanation
aval_simples(f1, 0, pi, f1_sol)
for n in n_list:
x_i, w_i = leggauss(n)
aval_composta(f1, 0, pi, n, x_i, w_i, f1_sol)
Explanation: Integrating $e^{x} \cos(x)$
End of explanation
f2 = lambda x: exp(x)*sin(x) + exp(-x)*cos(x)
f2_sol = lambda x: (exp(x)-exp(-x)) * (sin(x) + cos(x))
x_2 = [0, pi/4, pi/2, 3*pi/4, pi]
h_2 = [0.1, 0.05, 0.01]
df_2pts = lambda f, x, h: (f(x+h) - f(x)) / h
df_3pts = lambda f, x, h: (-f(x + 2*h) + 4*f(x+h) - 3*f(x)) / (2*h)
df_5pts = lambda f, x, h: (-3*f(x+4*h) + 16*f(x+3*h) - 36*f(x+2*h) + 48*f(x+h) - 25*f(x)) / (12*h)
for x in x_2:
print('\nDerivada de f(' + str(x) + ') :' )
d_sol = f2_sol(x)
print('Valor real = ' + str(d_sol))
for h in h_2:
print('com passo \'h\' = ' + str(h) + ' :')
d2r = df_2pts(f2, x, h)
print('Fórmula a 2 pontos: ' + str(d2r) + ' Erro relativo: ' + str(erro_rel(d2r, d_sol)))
d3r = df_3pts(f2, x, h)
print('Fórmula a 3 pontos: ' + str(d3r) + ' Erro relativo: ' + str(erro_rel(d3r, d_sol)))
d5r = df_5pts(f2, x, h)
print('Fórmula a 5 pontos: ' + str(d5r) + ' Erro relativo: ' + str(erro_rel(d5r, d_sol)))
Explanation: Differentiate $e^{x} \sin(x) + e^{-x} \cos(x)$ at the points:
x = 0, $\frac{\pi}{4}$, $\frac{\pi}{2}$, $\frac{3\pi}{4}$ and $\pi$,
using the 2-, 3- and 5-point formulas,
with step sizes h = 0.1, 0.05, 0.01.
End of explanation
xi, wi = leggauss(100)
f3 = lambda x: x / (1+x)**4
gausslegendre(f3, 0, oo, xi, wi)
Explanation: Compute the integral:
$\int_{0}^{\infty} \frac{x \, dx}{(1+x)^{4}}$
End of explanation
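As a quick analytic check on the Gauss-Legendre value above, the exact result can be worked out with the substitution $u = 1+x$:
$$\int_{0}^{\infty} \frac{x \, dx}{(1+x)^{4}} = \int_{1}^{\infty} \left(u^{-3} - u^{-4}\right) du = \frac{1}{2} - \frac{1}{3} = \frac{1}{6} \approx 0.1667.$$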
gausslegendre((lambda y: gausslegendre((lambda x: 1), -(1-y**2)**(1/2), (1-y**2)**(1/2) , xi, wi)), 0, 1, xi, wi)
Explanation: Using the transformations
$x = \frac{y}{1-y}$
$x = \tan \left[ \frac{\pi}{4} (1+y) \right]$
Double integral of
$\int_{0}^{1} \left( \int_{-\sqrt{1-y^{2}}}^{\sqrt{1-y^{2}}} \, dx \right) dy$
End of explanation
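For reference, the inner integral is just the width of the unit disk at height $y$, so this double integral is the area of half of the unit disk:
$$\int_{0}^{1} 2\sqrt{1-y^{2}} \, dy = \frac{\pi}{2} \approx 1.5708,$$
which the nested quadrature above should reproduce.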
gausslegendre((lambda y: gausslegendre(lambda x: e**(-x*y), -(1-y**2)**(1/2), (1-y**2)**(1/2), xi, wi)), 0, 1, xi, wi)
Explanation: Double integral of
$\int_{0}^{1} \left( \int_{-\sqrt{1-y^{2}}}^{\sqrt{1-y^{2}}} e^{-xy} \, dx \right) dy$
End of explanation |
2,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The OpenFermion Developers
Step1: Circuits 1
Step2: Background
Second quantized fermionic operators
In order to represent fermionic systems on a quantum computer one must first discretize space. Usually, one expands the many-body wavefunction in a basis of spin-orbitals $\varphi_p = \varphi_p(r)$ which are single-particle basis functions. For reasons of spatial efficiency, all NISQ (and even most error-corrected) algorithms for simulating fermionic systems focus on representing operators in second-quantization. Second-quantized operators are expressed using the fermionic creation and annihilation operators, $a^\dagger_p$ and $a_p$. The action of $a^\dagger_p$ is to excite a fermion in spin-orbital $\varphi_p$ and the action of $a_p$ is to annihilate a fermion from spin-orbital $\varphi_p$. Specifically, if electron $i$ is represented in a space of spin-orbitals ${\varphi_p(r_i)}$ then $a^\dagger_p$ and $a_p$ are related to Slater determinants through the equivalence,
$$
\langle r_0 \cdots r_{\eta-1} | a^\dagger_{0} \cdots a^\dagger_{\eta-1} | \varnothing\rangle \equiv \sqrt{\frac{1}{\eta!}}
\begin{vmatrix}
\varphi_{0}\left(r_0\right) & \varphi_{1}\left( r_0\right) & \cdots & \varphi_{\eta-1} \left( r_0\right) \\
\varphi_{0}\left(r_1\right) & \varphi_{1}\left( r_1\right) & \cdots & \varphi_{\eta-1} \left( r_1\right) \\
\vdots & \vdots & \ddots & \vdots \\
\varphi_{0}\left(r_{\eta-1}\right) & \varphi_{1}\left(r_{\eta-1}\right) & \cdots & \varphi_{\eta-1} \left(r_{\eta-1}\right) \end{vmatrix}
$$
where $\eta$ is the number of electrons in the system, $|\varnothing \rangle$ is the Fermi vacuum and $\varphi_p(r)=\langle r|\varphi_p \rangle$ are the single-particle orbitals that define the basis. By using a basis of Slater determinants, we ensure antisymmetry in the encoded state.
Rotations of the single-particle basis
Very often in electronic structure calculations one would like to rotate the single-particle basis. That is, one would like to generate new orbitals that are formed from a linear combination of the old orbitals. Any particle-conserving rotation of the single-particle basis can be expressed as
$$
\tilde{\varphi}_p = \sum_{q} \varphi_q u_{pq}
\quad
\tilde{a}^\dagger_p = \sum_{q} a^\dagger_q u_{pq}
\quad
\tilde{a}_p = \sum_{q} a_q u_{pq}^*
$$
where $\tilde{\varphi}_p$, $\tilde{a}^\dagger_p$, and $\tilde{a}_p$ correspond to spin-orbitals and operators in the rotated basis and $u$ is an $N\times N$ unitary matrix. From the Thouless theorem, this single-particle rotation
is equivalent to applying the $2^N \times 2^N$ operator
$$
U(u) = \exp\left(\sum_{pq} \left[\log u \right]_{pq} \left(a^\dagger_p a_q - a^\dagger_q a_p\right)\right)
$$
where $\left[\log u\right]_{pq}$ is the $(p, q)$ element of the matrix $\log u$.
There are many reasons that one might be interested in performing such basis rotations. For instance, one might be interested in preparing the Hartree-Fock (mean-field) state of a chemical system, by rotating from some initial orbitals (e.g. atomic orbitals or plane waves) into the molecular orbitals of the system. Alternatively, one might be interested in rotating from a basis where certain operators are diagonal (e.g. the kinetic operator is diagonal in the plane wave basis) to a basis where certain other operators are diagonal (e.g. the Coulomb operator is diagonal in the position basis). Thus, it is a very useful thing to be able to apply circuits corresponding to $U(u)$ on a quantum computer in low depth.
Compiling linear depth circuits to rotate the orbital basis
OpenFermion prominently features routines for implementing the linear depth / linear connectivity basis transformations described in Phys. Rev. Lett. 120, 110501. While we will not discuss this functionality here, we also support routines for compiling the more general form of these transformations which do not conserve particle-number, known as a Bogoliubov transformation, using routines described in Phys. Rev. Applied 9, 044036. We will not discuss the details of how these methods are implemented here and instead refer readers to those papers. All that one needs in order to compile the circuit $U(u)$ using OpenFermion is the $N \times N$ matrix $u$, which we refer to in documentation as the "basis_transformation_matrix". Note that if one intends to apply this matrix to a computational basis state with only $\eta$ electrons, then one can reduce the number of gates required by instead supplying the $\eta \times N$ rectangular matrix that characterizes the rotation of the occupied orbitals only. OpenFermion will automatically take advantage of this symmetry.
OpenFermion example implementation
Step3: Now we're ready to make a circuit! First we will use OpenFermion to generate the basis transform $U(u)$ from the basis transformation matrix $u$ by calling the Bogoliubov transform function (named as such because this function can also handle non-particle conserving basis transformations). Then, we'll apply local $Z$ rotations to phase by the eigenvalues, then we'll apply the inverse transformation. That will finish the circuit. We're just going to print out the first rotation to keep things easy-to-read, but feel free to play around with the notebook.
Step4: Finally, we can compare the result of applying our circuit to a random initial state with the exact result, and print out the fidelity between the two.
Step5: Thus, we see that the circuit correctly effects the intended evolution. We can now use Cirq's compiler to output the circuit using gates native to near-term devices, and then optimize those circuits. We'll output in QASM 2.0 just to demonstrate that functionality. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The OpenFermion Developers
End of explanation
try:
import openfermion
except ImportError:
!pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion
Explanation: Circuits 1: Compiling arbitrary single-particle basis rotations in linear depth
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/openfermion/tutorials/circuits_1_basis_change"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/circuits_1_basis_change.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/circuits_1_basis_change.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/circuits_1_basis_change.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This is the first of several tutorials demonstrating the compilation of quantum circuits. These tutorials build on one another and should be studied in order. In this tutorial we will discuss the compilation of circuits for implementing arbitrary rotations of the single-particle basis of an electronic structure simulation. As an example, we show how one can use these methods to simulate the evolution of an arbitrary non-interacting fermion model.
Setup
Install the OpenFermion package:
End of explanation
import openfermion
import numpy
# Set the number of qubits in our example.
n_qubits = 3
simulation_time = 1.
random_seed = 8317
# Generate the random one-body operator.
T = openfermion.random_hermitian_matrix(n_qubits, seed=random_seed)
# Diagonalize T and obtain basis transformation matrix (aka "u").
eigenvalues, eigenvectors = numpy.linalg.eigh(T)
basis_transformation_matrix = eigenvectors.transpose()
# Print out familiar OpenFermion "FermionOperator" form of H.
H = openfermion.FermionOperator()
for p in range(n_qubits):
for q in range(n_qubits):
term = ((p, 1), (q, 0))
H += openfermion.FermionOperator(term, T[p, q])
print(H)
Explanation: Background
Second quantized fermionic operators
In order to represent fermionic systems on a quantum computer one must first discretize space. Usually, one expands the many-body wavefunction in a basis of spin-orbitals $\varphi_p = \varphi_p(r)$ which are single-particle basis functions. For reasons of spatial efficiency, all NISQ (and even most error-corrected) algorithms for simulating fermionic systems focus on representing operators in second-quantization. Second-quantized operators are expressed using the fermionic creation and annihilation operators, $a^\dagger_p$ and $a_p$. The action of $a^\dagger_p$ is to excite a fermion in spin-orbital $\varphi_p$ and the action of $a_p$ is to annihilate a fermion from spin-orbital $\varphi_p$. Specifically, if electron $i$ is represented in a space of spin-orbitals ${\varphi_p(r_i)}$ then $a^\dagger_p$ and $a_p$ are related to Slater determinants through the equivalence,
$$
\langle r_0 \cdots r_{\eta-1} | a^\dagger_{0} \cdots a^\dagger_{\eta-1} | \varnothing\rangle \equiv \sqrt{\frac{1}{\eta!}}
\begin{vmatrix}
\varphi_{0}\left(r_0\right) & \varphi_{1}\left( r_0\right) & \cdots & \varphi_{\eta-1} \left( r_0\right) \\
\varphi_{0}\left(r_1\right) & \varphi_{1}\left( r_1\right) & \cdots & \varphi_{\eta-1} \left( r_1\right) \\
\vdots & \vdots & \ddots & \vdots \\
\varphi_{0}\left(r_{\eta-1}\right) & \varphi_{1}\left(r_{\eta-1}\right) & \cdots & \varphi_{\eta-1} \left(r_{\eta-1}\right) \end{vmatrix}
$$
where $\eta$ is the number of electrons in the system, $|\varnothing \rangle$ is the Fermi vacuum and $\varphi_p(r)=\langle r|\varphi_p \rangle$ are the single-particle orbitals that define the basis. By using a basis of Slater determinants, we ensure antisymmetry in the encoded state.
Rotations of the single-particle basis
Very often in electronic structure calculations one would like to rotate the single-particle basis. That is, one would like to generate new orbitals that are formed from a linear combination of the old orbitals. Any particle-conserving rotation of the single-particle basis can be expressed as
$$
\tilde{\varphi}_p = \sum_{q} \varphi_q u_{pq}
\quad
\tilde{a}^\dagger_p = \sum_{q} a^\dagger_q u_{pq}
\quad
\tilde{a}_p = \sum_{q} a_q u_{pq}^*
$$
where $\tilde{\varphi}_p$, $\tilde{a}^\dagger_p$, and $\tilde{a}_p$ correspond to spin-orbitals and operators in the rotated basis and $u$ is an $N\times N$ unitary matrix. From the Thouless theorem, this single-particle rotation
is equivalent to applying the $2^N \times 2^N$ operator
$$
U(u) = \exp\left(\sum_{pq} \left[\log u \right]_{pq} \left(a^\dagger_p a_q - a^\dagger_q a_p\right)\right)
$$
where $\left[\log u\right]_{pq}$ is the $(p, q)$ element of the matrix $\log u$.
There are many reasons that one might be interested in performing such basis rotations. For instance, one might be interested in preparing the Hartree-Fock (mean-field) state of a chemical system, by rotating from some initial orbitals (e.g. atomic orbitals or plane waves) into the molecular orbitals of the system. Alternatively, one might be interested in rotating from a basis where certain operators are diagonal (e.g. the kinetic operator is diagonal in the plane wave basis) to a basis where certain other operators are diagonal (e.g. the Coulomb operator is diagonal in the position basis). Thus, it is a very useful thing to be able to apply circuits corresponding to $U(u)$ on a quantum computer in low depth.
Compiling linear depth circuits to rotate the orbital basis
OpenFermion prominently features routines for implementing the linear depth / linear connectivity basis transformations described in Phys. Rev. Lett. 120, 110501. While we will not discuss this functionality here, we also support routines for compiling the more general form of these transformations which do not conserve particle-number, known as a Bogoliubov transformation, using routines described in Phys. Rev. Applied 9, 044036. We will not discuss the details of how these methods are implemented here and instead refer readers to those papers. All that one needs in order to compile the circuit $U(u)$ using OpenFermion is the $N \times N$ matrix $u$, which we refer to in documentation as the "basis_transformation_matrix". Note that if one intends to apply this matrix to a computational basis state with only $\eta$ electrons, then one can reduce the number of gates required by instead supplying the $\eta \times N$ rectangular matrix that characterizes the rotation of the occupied orbitals only. OpenFermion will automatically take advantage of this symmetry.
OpenFermion example implementation: exact evolution under tight binding models
In this example we will show how basis transforms can be used to implement exact evolution under a random Hermitian one-body fermionic operator
\begin{equation}
H = \sum_{pq} T_{pq} a^\dagger_p a_q.
\end{equation}
That is, we will compile a circuit to implement $e^{-i H t}$ for some time $t$. Of course, this is a tractable problem classically but we discuss it here since it is often useful as a subroutine for more complex quantum simulations. To accomplish this evolution, we will use basis transformations. Suppose that $u$ is the basis transformation matrix that diagonalizes $T$. Then, we could implement $e^{-i H t}$ by implementing $U(u)^\dagger (\prod_{k} e^{-i \lambda_k Z_k}) U(u)$ where $\lambda_k$ are the eigenvalues of $T$.
Below, we initialize the T matrix characterizing $H$ and then obtain the eigenvalues $\lambda_k$ and eigenvectors $u_k$ of $T$. We print out the OpenFermion FermionOperator representation of $T$.
End of explanation
import openfermion
import cirq
import cirq_google
# Initialize the qubit register.
qubits = cirq.LineQubit.range(n_qubits)
# Start circuit with the inverse basis rotation, print out this step.
inverse_basis_rotation = cirq.inverse(openfermion.bogoliubov_transform(qubits, basis_transformation_matrix))
circuit = cirq.Circuit(inverse_basis_rotation)
print(circuit)
# Add diagonal phase rotations to circuit.
for k, eigenvalue in enumerate(eigenvalues):
phase = -eigenvalue * simulation_time
circuit.append(cirq.rz(rads=phase).on(qubits[k]))
# Finally, restore basis.
basis_rotation = openfermion.bogoliubov_transform(qubits, basis_transformation_matrix)
circuit.append(basis_rotation)
Explanation: Now we're ready to make a circuit! First we will use OpenFermion to generate the basis transform $U(u)$ from the basis transformation matrix $u$ by calling the Bogoliubov transform function (named as such because this function can also handle non-particle conserving basis transformations). Then, we'll apply local $Z$ rotations to phase by the eigenvalues, then we'll apply the inverse transformation. That will finish the circuit. We're just going to print out the first rotation to keep things easy-to-read, but feel free to play around with the notebook.
End of explanation
# Initialize a random initial state.
initial_state = openfermion.haar_random_vector(
2 ** n_qubits, random_seed).astype(numpy.complex64)
# Numerically compute the correct circuit output.
import scipy
hamiltonian_sparse = openfermion.get_sparse_operator(H)
exact_state = scipy.sparse.linalg.expm_multiply(
-1j * simulation_time * hamiltonian_sparse, initial_state)
# Use Cirq simulator to apply circuit.
simulator = cirq.Simulator()
result = simulator.simulate(circuit, qubit_order=qubits,
initial_state=initial_state)
simulated_state = result.final_state_vector
# Print final fidelity.
fidelity = abs(numpy.dot(simulated_state, numpy.conjugate(exact_state)))**2
print(fidelity)
Explanation: Finally, we can compare the result of applying our circuit to a random initial state with the exact result, and print out the fidelity between the two.
End of explanation
xmon_circuit = cirq_google.optimized_for_xmon(circuit)
print(xmon_circuit.to_qasm())
Explanation: Thus, we see that the circuit correctly effects the intended evolution. We can now use Cirq's compiler to output the circuit using gates native to near-term devices, and then optimize those circuits. We'll output in QASM 2.0 just to demonstrate that functionality.
End of explanation |
2,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Our task
Spam detection is one of the major applications of machine learning on the internet today. Almost all of the large email service providers have spam detection systems built in that automatically classify such mail as "junk mail".
In this project we will use the Naive Bayes algorithm to build a model that, based on the training we give it, classifies a dataset of messages as spam or not spam. It is important to have some intuition about what spam messages look like. They usually contain words like "free", "win", "winner", "cash" and "prize", because these are designed to catch your eye and tempt you to open the message. Spam also tends to use capitalized words and lots of exclamation marks. Recipients can usually spot spam easily, and our goal is to train a model to do it for us!
Identifying spam is a binary classification problem, since a message can only be "spam" or "not spam". It is also a supervised learning problem, because we feed a labelled dataset to the model so that it can learn patterns and make predictions later on.
Step 0: Introduction to the Naive Bayes theorem
Bayes' theorem is one of the earliest probabilistic inference algorithms, developed by Reverend Bayes (who used it to reason about the existence of God), and it is still useful in certain use cases.
The best way to understand the theorem is through an example. Say you are a member of the Secret Service and you have been assigned to protect a presidential candidate during a campaign speech. The speech is a public event open to everyone, so your job is not easy: you have to watch for threats at all times. One approach is to assign a threat factor to each person: based on a person's features (age, gender, whether they carry a bag, how nervous they look, and so on) you judge whether that person poses a threat.
If someone ticks all of these boxes and exceeds your internal threshold of doubt, you can take action and remove them from the venue. Bayes' theorem works in the same way: we compute the probability of an event (a person being a threat) from the probabilities of certain related events (the person's age, gender, whether they carry a bag, how nervous they are, and so on).
You also need to think about the independence of these features. For example, a child who looks nervous at the event is far less likely to be a threat than a nervous adult. To dig into this, consider two features: age and nervousness. Taken in isolation, we might design a model that treats every nervous person as a potential threat, which would produce many false positives because minors at the event are quite likely to be nervous. Considering age and nervousness together clearly gives a more accurate picture of who poses a threat.
This is what the word "naive" in the theorem's name refers to: it treats every feature as independent of the others, which is not always true in practice and can therefore affect the final conclusion.
In short, Bayes' theorem computes the probability of an event (here, a message being spam) from the joint probability distribution of certain other related events (here, the evidence that leads a message to be classified as spam). We will look at how the theorem works in detail later; first, let's get to know the data we will be working with.
Step 1.1: Understanding our dataset
We will be using a dataset from the UCI Machine Learning Repository, which hosts a large number of excellent datasets for experimental research. Here is the direct link to the data.
Here is a preview of the data:
<img src="images/dqnb.png" height="1242" width="1242">
The columns in the dataset are currently unnamed, and there are 2 of them.
The first column takes two values: "ham", meaning the message is not spam, and "spam", meaning the message is spam.
The second column is the text content of the message being classified.
Instructions:
* Import the dataset into a pandas dataframe using the read_table method. Because this is a tab-separated dataset, we will use '\t' as the value of the 'sep' argument, which specifies that format.
* Also, rename the columns by passing the list ['label', 'sms_message'] to the 'names' argument of read_table().
* Print the first five values of the dataframe with the new column names (a minimal sketch of this loading step follows below).
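A minimal sketch of the loading step described above; the file path is an assumption, so substitute the actual location of the UCI SMS collection:
import pandas as pd

# Load the tab-separated SMS collection and name the two columns
# (the path 'smsspamcollection/SMSSpamCollection' is an assumed location)
df = pd.read_table('smsspamcollection/SMSSpamCollection',
                   sep='\t',
                   names=['label', 'sms_message'])
print(df.head())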
Step1: Step 1.2: Data preprocessing
Now that we have a rough idea of what our dataset looks like, let's convert the labels to binary variables: 0 for "ham" (i.e. not spam) and 1 for "spam", which makes them easier to compute with.
You might wonder why this step is needed at all. The answer lies in how scikit-learn handles its inputs. Scikit-learn works only with numeric values, so if the labels were left as strings, scikit-learn would do the conversion itself (more precisely, the string labels would be cast to unknown float values).
The model could still make predictions with string labels, but we could run into problems later when computing performance metrics such as precision and recall scores. So, to avoid unexpected pitfalls down the line, it is good practice to convert categorical values to integers before passing them to the model.
Instructions:
* Convert the values in the 'label' column to numeric values using the map method, as follows:
{'ham': 0, 'spam': 1}
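A one-line sketch of the mapping just described, assuming the dataframe df from the previous step:
# Map the string labels to integers: 'ham' -> 0, 'spam' -> 1
df['label'] = df.label.map({'ham': 0, 'spam': 1})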
Step2: Step 2.1: Bag of Words
Our dataset contains a large amount of text data (5,572 rows). Most machine learning algorithms require numerical input, whereas emails and messages are usually text.
This is where the concept of Bag of Words (BoW) comes in; the term describes problems that have a "bag of words", in other words a lot of text data to work with. The basic idea of BoW is to take a piece of text and count the frequency of the words in it. Note: BoW treats every word equally; the order in which the words occur does not matter.
Using the process we will go through, we can convert a collection of documents into a matrix, with each document being a row, each word (token) being a column, and the corresponding (row, column) value being the frequency of that word or token in the document.
For example:
Say we have four documents as follows:
['Hello, how are you!',
'Win money, win from home.',
'Call me now',
'Hello, Call you tomorrow?']
Our objective is to convert this set of text into a frequency distribution matrix, as follows:
<img src="images/countvectorizer.png" height="542" width="542">
As you can see in the image, the documents are numbered in the rows, each word is a column name, and the corresponding value is the frequency of that word in the document.
Let's break this down and see how to do this conversion on a small set of documents.
To handle this step, we will be using sklearn's count vectorizer method, which does the following:
It tokenizes the string (separates the string into individual words) and assigns an integer ID to each token.
It counts the number of occurrences of each of those tokens.
Please note:
The CountVectorizer method automatically converts all tokenized words to lowercase so that words like "He" and "he" are not treated differently. It does this using the lowercase parameter, which defaults to True.
It also ignores all punctuation so that a word followed by a punctuation mark (for example "hello!") is not treated differently from the same word without punctuation (for example "hello"). It does this using the token_pattern parameter, whose default regular expression selects tokens of 2 or more alphanumeric characters.
The third parameter to take note of is stop_words. Stop words are the most commonly used words in a language, such as "am", "an", "and", "the", and so on. By setting this parameter to english, CountVectorizer will automatically ignore all words (in the input text) that appear in scikit-learn's built-in list of English stop words. This is very useful, because stop words can skew our conclusions when we are looking for words that are indicative of spam.
We will go deeper into the effect of each of these preprocessing techniques on the model in a later step; for now it is enough to know that these techniques are available when working with text data.
Step 2.2: Implementing Bag of Words from scratch
Before we dive into scikit-learn's Bag of Words (BoW) library, which does the heavy lifting for us, let's implement the steps ourselves to understand what is going on under the hood.
Step 1: Convert all strings to lowercase.
Say we have a document collection:
Step3: Instructions:
* Convert all the strings in the document collection to lowercase. Save them in a list called 'lower_case_documents'. You can convert a string to lowercase in Python with the lower() method.
Step4: Step 2: Remove all punctuation
Instructions:
Remove all punctuation from the strings in the document collection. Save them in a list called 'sans_punctuation_documents'.
Step5: Step 3: Tokenization
Tokenizing the sentences in a document collection means splitting each sentence into individual words using a delimiter. The delimiter specifies which character marks the beginning and end of a word (for example, we could use a single space as the word delimiter for our document collection).
Instructions:
Tokenize the strings stored in 'sans_punctuation_documents' using the split() method, and store the resulting document collection in a list called 'preprocessed_documents'.
Step6: Step 4: Count frequencies
Now that we have the document collection in the format we need, we can count how often each word occurs in each document of the collection. For this we will use the Counter method from Python's collections library.
Counter counts the occurrences of each item in a list and returns a dictionary whose keys are the items being counted and whose values are the counts of those items in the list.
Instructions:
Using the Counter() method with preprocessed_documents as input, create a dictionary whose keys are the words in each document and whose values are the frequencies of those words. Save each Counter dictionary as an item in a list called 'frequency_list' (a consolidated sketch of steps 1-4 follows below).
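A consolidated sketch of steps 1-4 above (lowercasing, punctuation removal, tokenization and counting), assuming a documents list containing the four example messages:
import string
from collections import Counter

documents = ['Hello, how are you!',
             'Win money, win from home.',
             'Call me now',
             'Hello, Call you tomorrow?']

# Step 1: lowercase every document
lower_case_documents = [d.lower() for d in documents]

# Step 2: strip punctuation
sans_punctuation_documents = [
    ''.join(ch for ch in d if ch not in string.punctuation)
    for d in lower_case_documents]

# Step 3: tokenize on whitespace
preprocessed_documents = [d.split(' ') for d in sans_punctuation_documents]

# Step 4: count word frequencies per document
frequency_list = [Counter(tokens) for tokens in preprocessed_documents]
print(frequency_list)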
Step7: Congratulations! You have implemented the Bag of Words process from scratch! As you can see in the previous output, we have a frequency distribution dictionary that gives a clear view of the text we are dealing with.
We should now have a solid understanding of what is happening behind the scenes in scikit-learn's sklearn.feature_extraction.text.CountVectorizer method.
We will implement sklearn.feature_extraction.text.CountVectorizer in the next step.
Step 2.3: Implementing Bag of Words in scikit-learn
Now that we have implemented the BoW concept from scratch, let's do it the concise way using scikit-learn. We will use the same document collection as in the previous step.
Step8: Instructions:
Import the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'.
Step9: Data preprocessing with CountVectorizer()
In Step 2.2 we implemented a version of CountVectorizer() that first cleaned the data. The cleaning involved converting all of the data to lowercase and removing all punctuation. CountVectorizer() has parameters that take care of these steps for us, including:
lowercase = True
Step10: token_pattern = (?u)\\b\\w\\w+\\b
Step11: stop_words
Step12: You can view all the parameter values of the count_vector object by printing it, as shown below:
Step13: Instructions:
Fit your document dataset to the CountVectorizer object using fit(), and get the list of words that have been categorized as features using the get_feature_names() method.
Step14: The get_feature_names() method returns the feature names for this dataset, i.e. the set of words that make up the vocabulary of 'documents'.
Instructions:
Create a matrix whose rows correspond to each of the 4 documents and whose columns are the words. The corresponding (row, column) value is the frequency of occurrence of that word (in the column) in a particular document (in the row). To do this, use the transform() method and pass in the document dataset as the argument. transform() returns a matrix of numpy integers; convert it to an array using toarray() and call it 'doc_array'.
Step15: We now have a clean representation of the documents in terms of the frequency of the words in them. To make it easier to read, the next step is to convert this array into a dataframe and name the columns appropriately.
Instructions:
Convert the array we obtained and loaded into 'doc_array' into a dataframe, setting the column names to the word names (which you computed earlier with get_feature_names()). Call the dataframe 'frequency_matrix' (a sketch of these CountVectorizer steps follows below).
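A sketch of the scikit-learn version of the same pipeline, assuming the documents list used above:
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

count_vector = CountVectorizer()

# Learn the vocabulary of the documents
count_vector.fit(documents)

# Frequency matrix: one row per document, one column per vocabulary word
doc_array = count_vector.transform(documents).toarray()
# Note: newer scikit-learn versions rename get_feature_names() to get_feature_names_out()
frequency_matrix = pd.DataFrame(doc_array,
                                columns=count_vector.get_feature_names())
print(frequency_matrix)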
Step16: Congratulations! You have successfully implemented the Bag of Words problem for the document dataset we created.
One potential issue with using this method directly is that if our text dataset is very large (say a big collection of news articles or email data), certain values will be more common than others simply because of the structure of the language itself. Words like "is", "the", "an", pronouns, grammatical constructs and so on can skew the matrix and affect the analysis.
There are a couple of ways to mitigate this. One way is to use the stop_words parameter and set its value to english. This automatically ignores all words (in the input text) that appear in scikit-learn's built-in list of English stop words.
Another way is to use the tfidf method. That method is beyond the scope of this course.
Step 3.1: Training and testing sets
Now that we know how to handle the Bag of Words problem, let's get back to our dataset and continue the analysis. The first step is to split the dataset into a training set and a testing set so we can test our model later.
Instructions:
Split the dataset into a training set and a testing set using the train_test_split method in sklearn, with the following variables:
* X_train is the training data for the 'sms_message' column.
* y_train is the training data for the 'label' column.
* X_test is the testing data for the 'sms_message' column.
* y_test is the testing data for the 'label' column.
Print out the number of rows in each of the training and testing sets (a sketch of the split follows below).
Step17: 第 3.2 步:对数据集应用 Bag of Words 流程。
我们已经拆分了数据,下个目标是按照第 2 步:Bag of words 中的步骤操作,并将数据转换为期望的矩阵格式。为此,我们将像之前一样使用 CountVectorizer()。我们需要完成两步:
首先,我们需要对 CountVectorizer()拟合训练数据 (X_train) 并返回矩阵。
其次,我们需要转换测试数据 (X_test) 以返回矩阵。
注意:X_train 是数据集中 'sms_message' 列的训练数据,我们将使用此数据训练模型。
X_test 是 'sms_message' 列的测试数据,我们将使用该数据(转换为矩阵后)进行预测。然后在后面的步骤中将这些预测与 y_test 进行比较。
我们暂时为你提供了进行矩阵转换的代码!
Step18: 第 4.1 步:从头实现贝叶斯定理
我们的数据集已经是我们希望的格式,现在可以进行任务的下一步了,即研究用来做出预测并将信息分类为垃圾信息或非垃圾信息的算法。记得在该项目的开头,我们简要介绍了贝叶斯定理,现在我们将深入讲解该定理。通俗地说,贝叶斯定理根据与相关事件有关的其他事件的概率计算该事件的发生概率。它由先验概率(我们知道的概率或提供给我们的概率)和后验概率(我们希望用先验部分计算的概率)组成。
我们用一个简单的示例从头实现贝叶斯定理。假设我们要根据某人接受糖尿病检测后获得阳性结果计算此人有糖尿病的概率。
在医学领域,此类概率非常重要,因为它们涉及的是生死情况。
我们假设:
P(D) 是某人患有糖尿病的概率。值为 0.01,换句话说,普通人群中有 1% 的人患有糖尿病(免责声明:这些值只是假设,并非任何医学研究的结论)。
P(Pos):是获得阳性测试结果的概率。
P(Neg):是获得阴性测试结果的概率。
P(Pos|D):是本身有糖尿病并且获得阳性测试结果的概率,值为 0.9,换句话说,该测试在 90% 的情况下是正确的。亦称为敏感性或真正例率。
P(Neg|~D):是本身没有糖尿病并且获得阴性测试结果的概率,值也为 0.9 ,因此在 90% 的情况下是正确的。亦称为特异性或真负例率。
贝叶斯公式如下所示:
<img src="images/bayes_formula.png" height="242" width="242">
P(A):A 独立发生的先验概率。在我们的示例中为 P(D),该值已经提供给我们了 。
P(B):B 独立发生的先验概率。在我们的示例中为 P(Pos)。
P(A|B):在给定 B 的情况下 A 发生的后验概率,在我们的示例中为 P(D|Pos),即某人的测试结果为阳性时患有糖尿病的概率。这是我们要计算的值。
P(B|A):在给定 A 的情况下 B 可能发生的概率。在我们的示例中为 P(Pos|D),该值已经提供给我们了 。
将这些值代入贝叶斯定理公式中:
P(D|Pos) = P(D) * P(Pos|D) / P(Pos)
获得阳性测试结果 P(Pos) 的概率可以使用敏感性和特异性来计算,如下所示:
P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1 - Specificity)]
Step19: 我们可以利用所有这些信息计算后验概率,如下所示:
某人测试结果为阳性时患有糖尿病的概率为:
P(D|Pos) = (P(D) * Sensitivity)) / P(Pos)
某人测试结果为阳性时没有糖尿病的概率为:
P(~D|Pos) = (P(~D) * (1-Specificity)) / P(Pos)
后验概率的和将始终为 1。
Step20: 恭喜!你从头实现了贝叶斯定理。你的分析表明即使某人的测试结果为阳性,他/她也有 8.3% 的概率实际上患有糖尿病,以及 91.67% 的概率没有糖尿病。当然前提是全球只有 1% 的人群患有糖尿病,这只是个假设。
“朴素贝叶斯”中的“朴素”一词是什么意思?
朴素贝叶斯中的“朴素”一词实际上是指,算法在进行预测时使用的特征相互之间是独立的,但实际上并非始终这样。在我们的糖尿病示例中,我们只考虑了一个特征,即测试结果。假设我们添加了另一个特征“锻炼”。假设此特征具有二元值 0 和 1,0 表示某人一周的锻炼时间不超过 2 天,1 表示某人一周的锻炼时间超过 2 天。如果我们要同时使用这两个特征(即测试结果和“锻炼”特征的值)计算最终概率,贝叶斯定理将不可行。朴素贝叶斯是贝叶斯定理的一种延伸,假设所有特征相互之间是独立的。
第 4.2 步:从头实现朴素贝叶斯
你已经知道贝叶斯定理的详细原理,现在我们将用它来考虑有多个特征的情况。
假设有两个政党的候选人,“Jill Stein”是绿党候选人,“Gary Johnson”是自由党的候选人,两位候选人在演讲中提到“自由”、“移民”和“环境”这些字眼的概率为:
Jill Stein 提到“自由”的概率:0.1 ---------> P(F|J)
Jill Stein 提到“移民”的概率:0.1 -----> P(I|J)
Jill Stein 提到“环境”的概率:0.8 -----> P(E|J)
Gary Johnson 提到“自由”的概率:0.7 -------> P(F|G)
Gary Johnson 提到“移民”的概率:0.2 ---> P(I|G)
Gary Johnson 提到“环境”的概率:0.1 ---> P(E|G)
假设 Jill Stein 发表演讲的概率 P(J) 是 0.5,Gary Johnson 也是 P(G) = 0.5。
了解这些信息后,如果我们要计算 Jill Stein 提到“自由”和“移民”的概率,该怎么做呢?这时候朴素贝叶斯定理就派上用场了,我们将考虑两个特征:“自由”和“移民”。
现在我们可以定义朴素贝叶斯定理的公式:
<img src="images/naivebayes.png" height="342" width="342">
在该公式中,y 是分类变量,即候选人的姓名,x1 到 xn 是特征向量,即单个单词。该定理假设每个特征向量或单词 (xi) 相互之间是独立的。
为了详细讲解该公式,我们需要计算以下后验概率:
P(J|F,I):Jill Stein 提到“自由”和“移民”的概率。
Step21: P(G|F,I):Gary Johnson 提到“自由”和“移民”的概率。
Step22: 现在可以计算 P(J|F,I) 的概率,即 Jill Stein 提到“自由”和“移民”的概率,以及 P(G|F,I),即 Gary Johnson 提到“自由”和“移民”的概率。
Step23: 可以看出,和贝叶斯定理一样,后验概率之和等于 1。恭喜!你从头实现了朴素贝叶斯定理。分析表明,绿党的 Jill Stein 在演讲中提到“自由”和“移民”的概率只有 6.6%,而自由党的 Gary Johnson 有 93.3% 的可能性会提到这两个词。
另一个比较常见的朴素贝叶斯定理应用示例是在搜索引擎中搜索“萨克拉门托国王”。为了使我们能够获得与萨克拉门托国王队 NBA 篮球队相关的结果,搜索引擎需要将这两个单词关联到一起,而不是单独处理它们,否则就会获得标有“萨克拉门托”的图片(例如风光图片)以及关于“国王”的图片(可能是历史上的国王),而实际上我们想要搜索的是关于篮球队的图片。这是一种搜索引擎将单词当做非独立个体(因此采用的是“朴素”方式)的经典示例。
将此方法应用到我们的垃圾信息分类问题上,朴素贝叶斯算法会查看每个单词,而不是将它们当做有任何联系的关联体。对于垃圾内容检测器来说,这么做通常都可行,因为有些禁用词几乎肯定会被分类为垃圾内容,例如包含“伟哥”的电子邮件通常都被归类为垃圾邮件。
第 5 步:使用 scikit-learn 实现朴素贝叶斯
幸运的是,sklearn 具有多个朴素贝叶斯实现,这样我们就不用从头进行计算。我们将使用 sklearns 的 sklearn.naive_bayes 方法对我们的数据集做出预测。
具体而言,我们将使用多项式朴素贝叶斯实现。这个分类器适合分类离散特征(例如我们的单词计数文本分类)。它会将整数单词计数作为输入。另一方面,高斯朴素贝叶斯更适合连续数据,因为它假设输入数据是高斯(正态)分布。
Step24: 我们已经对测试集进行预测,现在需要检查预测的准确率了。
第 6 步:评估模型
我们已经对测试集进行了预测,下一个目标是评估模型的效果。我们可以采用各种衡量指标,但首先快速总结下这些指标。
准确率 衡量的是分类器做出正确预测的概率,即正确预测的数量与预测总数(测试数据点的数量)之比。
精确率 指的是分类为垃圾信息的信息实际上是垃圾信息的概率,即真正例(分类为垃圾内容并且实际上是垃圾内容的单词)与所有正例(所有分类为垃圾内容的单词,无论是否分类正确)之比,换句话说,是以下公式的比值结果:
[True Positives/(True Positives + False Positives)]
召回率(敏感性)表示实际上为垃圾信息并且被分类为垃圾信息的信息所占比例,即真正例(分类为垃圾内容并且实际上是垃圾内容的单词)与所有为垃圾内容的单词之比,换句话说,是以下公式的比值结果:
[True Positives/(True Positives + False Negatives)]
对于偏态分类分布问题(我们的数据集就属于偏态分类),例如如果有 100 条信息,只有 2 条是垃圾信息,剩下的 98 条不是,则准确率本身并不是很好的指标。我们将 90 条信息分类为垃圾信息(包括 2 条垃圾信息,但是我们将其分类为非垃圾信息,因此它们属于假负例),并将 10 条信息分类为垃圾信息(所有 10 条都是假正例),依然会获得比较高的准确率分数。对于此类情形,精确率和召回率非常实用。可以通过这两个指标获得 F1 分数,即精确率和召回率分数的加权平均值。该分数的范围是 0 到 1,1 表示最佳潜在 F1 分数。
我们将使用所有四个指标确保我们的模型效果很好。这四个指标的值范围都在 0 到 1 之间,分数尽量接近 1 可以很好地表示模型的效果如何。 | Python Code:
'''
Solution
'''
import pandas as pd
# Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
df = pd.read_table("smsspamcollection/SMSSpamCollection", sep="\t",names = ['label', 'sms_message'] )
# Output printing out first 5 columns
df.head()
Explanation: 我们的任务
垃圾邮件检测是机器学习在现今互联网领域的主要应用之一。几乎所有大型电子邮箱服务提供商都内置了垃圾邮件检测系统,能够自动将此类邮件分类为“垃圾邮件”。
在此项目中,我们将使用朴素贝叶斯算法创建一个模型,该模型会通过我们对模型的训练将信息数据集分类为垃圾信息或非垃圾信息。对垃圾文本信息进行大致了解十分重要。通常它们都包含“免费”、“赢取”、“获奖者”、“现金”、“奖品”等字眼,因为这些它们专门用来吸引你的注意力,诱惑你打开信息。此外,垃圾信息的文字一般都使用大写形式和大量感叹号。收信人能轻易辨认垃圾信息,而我们的目标是训练模型帮助我们识别垃圾信息!
能够识别垃圾信息是一种二元分类问题,因为此处信息只有“垃圾信息”或“非垃圾信息”这两种分类。此外,这是一种监督式学习问题,因为我们会向模型中提供带标签数据集,模型能够从中学习规律并在日后做出预测。
第 0 步:朴素贝叶斯定理简介
贝叶斯定理是最早的概率推理算法之一,由 Reverend Bayes 提出(他用来推理上帝是否存在),该定理在某些用例中依然很有用。
理解该定理的最佳方式是通过一个例子来讲解。假设你是一名特勤人员,你接到任务,需要在共和党总统候选人的某次竞选演说中保护他/她的安全。这场竞选演说是所有人都可以参加的公开活动,你的任务并不简单,需要时刻注意危险是否存在。一种方式是对每个人都设定一个威胁因子,根据人的特征(例如年龄、性别,是否随身带包以及紧张程度等等),你可以判断此人是否存在威胁。
如果某人符合所有这些特征,已经超出了你内心中的疑虑阈值,你可以采取措施并将此人带离活动现场。贝叶斯定理的原理也是如此,我们将根据某些相关事件(某人的年龄、性别、是否带包了、紧张程度等)的发生概率计算某个事件(某人存在威胁)的概率。
你还需要考虑这些特征之间的独立性。例如,如果在活动现场,有个孩子看起来很紧张,那么与紧张的成人相比,孩子存在威胁的可能性会更低。为了深入讲解这一点,看看下面两个特征:年龄和紧张程度。假设我们单独研究这些特征,我们可以设计一个将所有紧张的人视作潜在威胁人士的模型。但是,很有可能会有很多假正例,因为现场的未成年人很有可能会紧张。因此同时考虑年龄和“紧张程度”特征肯定会更准确地反映哪些人存在威胁。
这就是该定理的“朴素”一词的含义,该定理会认为每个特征相互之间都保持独立,但实际上并非始终是这样,因此会影响到最终的结论。
简而言之,贝叶斯定理根据某些其他事件(在此例中是信息被分类为垃圾信息)的联合概率分布计算某个事件(在此例中是信息为垃圾信息)的发生概率。稍后我们将深入了解贝叶斯定理的原理,但首先了解下我们将处理的数据。
第 1.1 步:了解我们的数据集 ###
我们将使用来自 UCI 机器学习资源库中的数据集,该资源库有大量供实验性研究的精彩数据集。这是直接数据链接。
下面是该数据的预览:
<img src="images/dqnb.png" height="1242" width="1242">
数据集中的列目前没有命名,可以看出有 2 列。
第一列有两个值:“ham”,表示信息不是垃圾信息,以及“spam”,表示信息是垃圾信息。
第二列是被分类的信息的文本内容。
说明:
* 使用 read_table 方法可以将数据集导入 pandas 数据帧。因为这是一个用制表符分隔的数据集,因此我们将使用“\t”作为“sep”参数的值,表示这种分隔格式。
* 此外,通过为 read_table() 的“names”参数指定列表 ['label, 'sms_message'],重命名列。
* 用新的列名输出数据帧的前五个值。
End of explanation
'''
Solution
'''
df['label'] = df.label.map({"ham":0, "spam":1})
Explanation: 第 1.2 步:数据预处理
我们已经大概了解数据集的结构,现在将标签转换为二元变量,0 表示“ham”(即非垃圾信息),1表示“spam”,这样比较方便计算。
你可能会疑问,为何要执行这一步?答案在于 scikit-learn 处理输入的方式。Scikit-learn 只处理数字值,因此如果标签值保留为字符串,scikit-learn 会自己进行转换(更确切地说,字符串标签将转型为未知浮点值)。
如果标签保留为字符串,模型依然能够做出预测,但是稍后计算效果指标(例如计算精确率和召回率分数)时可能会遇到问题。因此,为了避免稍后出现意外的陷阱,最好将分类值转换为整数,再传入模型中。
说明:
* 使用映射方法将“标签”列中的值转换为数字值,如下所示:
{'ham':0, 'spam':1} 这样会将“ham”值映射为 0,将“spam”值映射为 1。
* 此外,为了知道我们正在处理的数据集有多大,使用“shape”输出行数和列数
End of explanation
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
Explanation: 第 2.1 步:Bag of words
我们的数据集中有大量文本数据(5,572 行数据)。大多数机器学习算法都要求传入的输入是数字数据,而电子邮件/信息通常都是文本。
现在我们要介绍 Bag of Words (BoW) 这个概念,它用来表示要处理的问题具有“大量单词”或很多文本数据。BoW 的基本概念是拿出一段文本,计算该文本中单词的出现频率。注意:BoW 平等地对待每个单词,单词的出现顺序并不重要。
利用我们将介绍的流程,我们可以将文档集合转换成矩阵,每个文档是一行,每个单词(令牌)是一列,对应的(行,列)值是每个单词或令牌在此文档中出现的频率。
例如:
假设有四个如下所示的文档:
['Hello, how are you!',
'Win money, win from home.',
'Call me now',
'Hello, Call you tomorrow?']
我们的目标是将这组文本转换为频率分布矩阵,如下所示:
<img src="images/countvectorizer.png" height="542" width="542">
从图中可以看出,文档在行中进行了编号,每个单词是一个列名称,相应的值是该单词在文档中出现的频率。
我们详细讲解下,看看如何使用一小组文档进行转换。
要处理这一步,我们将使用 sklearns
count vectorizer 方法,该方法的作用如下所示:
它会令牌化字符串(将字符串划分为单个单词)并为每个令牌设定一个整型 ID。
它会计算每个令牌的出现次数。
请注意:
CountVectorizer 方法会自动将所有令牌化单词转换为小写形式,避免区分“He”和“he”等单词。为此,它会使用参数 lowercase,该参数默认设为 True。
它还会忽略所有标点符号,避免区分后面有标点的单词(例如“hello!”)和前后没有标点的同一单词(例如“hello”)。为此,它会使用参数 token_pattern,该参数使用默认正则表达式选择具有 2 个或多个字母数字字符的令牌。
要注意的第三个参数是 stop_words。停用词是指某个语言中最常用的字词,包括“am”、“an”、“and”、“the”等。 通过将此参数值设为 english,CountVectorizer 将自动忽略(输入文本中)出现在 scikit-learn 中的内置英语停用词列表中的所有单词。这非常有用,因为当我们尝试查找表明是垃圾内容的某些单词时,停用词会使我们的结论出现偏差。
我们将在之后的步骤中深入讲解在模型中应用每种预处理技巧的效果,暂时先知道在处理文本数据时,有这些预处理技巧可采用。
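作为补充,下面给出一个把上述三个参数显式写出来的小示例(其中 token_pattern 的取值就是 CountVectorizer 的默认正则;stop_words='english' 仅作演示,如正文稍后所述,本项目的小数据集不会设置该参数):
from sklearn.feature_extraction.text import CountVectorizer
# 显式写出三个与预处理相关的参数,效果与正文描述一致
demo_vector = CountVectorizer(lowercase=True,
                              token_pattern=r'(?u)\b\w\w+\b',
                              stop_words='english')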
第 2.2 步:从头实现 Bag of Words
在深入了解帮助我们处理繁重工作的 scikit-learn 的 Bag of Words(BoW) 库之前,首先我们自己实现该步骤,以便了解该库的背后原理。
第 1 步:将所有字符串转换成小写形式。
假设有一个文档集合:
End of explanation
'''
Solution:
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
lower_case_documents = []
for i in documents:
low_doc = i.lower()
lower_case_documents.append(low_doc)
print(lower_case_documents)
Explanation: 说明:
* 将文档集合中的所有字符串转换成小写形式。将它们保存到叫做“lower_case_documents”的列表中。你可以使用 lower() 方法在 python 中将字符串转换成小写形式。
End of explanation
'''
Solution:
'''
sans_punctuation_documents = []
import re
for i in lower_case_documents:
    punc = r"[,.?!']"
    cleaned = re.sub(punc, '', i)  # 避免使用变量名 'string',以免遮蔽标准库模块
    sans_punctuation_documents.append(cleaned)
print(sans_punctuation_documents)
Explanation: 第 2 步:删除所有标点符号
说明:
删除文档集合中的字符串中的所有标点。将它们保存在叫做“sans_punctuation_documents”的列表中。
End of explanation
'''
Solution:
'''
preprocessed_documents = []
for i in sans_punctuation_documents:
preprocessed_documents.append(i.split(' '))
print(preprocessed_documents)
Explanation: 第 3 步:令牌化
令牌化文档集合中的句子是指使用分隔符将句子拆分成单个单词。分隔符指定了我们将使用哪个字符来表示单词的开始和结束位置(例如,我们可以使用一个空格作为我们的文档集合的单词分隔符。)
说明:
使用 split() 方法令牌化“sans_punctuation_documents”中存储的字符串,并将最终文档集合存储在叫做“preprocessed_documents”的列表中。
End of explanation
'''
Solution
'''
frequency_list = []
import pprint
from collections import Counter
for i in preprocessed_documents:
dic = Counter(i)
frequency_list.append(dic)
pprint.pprint(frequency_list)
Explanation: 第 4 步:计算频率
我们已经获得所需格式的文档集合,现在可以数出每个单词在文档集合的每个文档中出现的次数了。为此,我们将使用 Python collections 库中的 Counter 方法。
Counter 会数出列表中每项的出现次数,并返回一个字典,键是被数的项目,相应的值是该项目在列表中的计数。
说明:
使用 Counter() 方法和作为输入的 preprocessed_documents 创建一个字典,键是每个文档中的每个单词,相应的值是该单词的出现频率。将每个 Counter 字典当做项目另存到一个叫做“frequency_list”的列表中。
End of explanation
'''
Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the
document-term matrix generation happens. We have created a sample document set 'documents'.
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
Explanation: 恭喜!你从头实现了 Bag of Words 流程!正如在上一个输出中看到的,我们有一个频率分布字典,清晰地显示了我们正在处理的文本。
我们现在应该充分理解 scikit-learn 中的 sklearn.feature_extraction.text.CountVectorizer 方法的背后原理了。
我们将在下一步实现 sklearn.feature_extraction.text.CountVectorizer 方法。
第 2.3 步:在 scikit-learn 中实现 Bag of Words
我们已经从头实现了 BoW 概念,并使用 scikit-learn 以简洁的方式实现这一流程。我们将使用在上一步用到的相同文档集合。
End of explanation
'''
Solution
'''
from sklearn.feature_extraction.text import CountVectorizer
count_vector = CountVectorizer()
Explanation: 说明:
导入 sklearn.feature_extraction.text.CountVectorizer 方法并创建一个实例,命名为 'count_vector'。
End of explanation
`lowercase` 参数的默认值为 `True`,它会将所有文本都转换为小写形式。
Explanation: 使用 CountVectorizer() 预处理数据
在第 2.2 步,我们从头实现了可以首先清理数据的 CountVectorizer() 方法。清理过程包括将所有数据转换为小写形式,并删除所有标点符号。CountVectorizer() 具有某些可以帮助我们完成这些步骤的参数,这些参数包括:
lowercase = True
End of explanation
`token_pattern` 参数具有默认正则表达式值 `(?u)\\b\\w\\w+\\b`,它会忽略所有标点符号并将它们当做分隔符,并将长度大于等于 2 的字母数字字符串当做单个令牌或单词。
Explanation: token_pattern = (?u)\\b\\w\\w+\\b
End of explanation
`stop_words` 参数如果设为 `english`,将从文档集合中删除与 scikit-learn 中定义的英语停用词列表匹配的所有单词。考虑到我们的数据集规模不大,并且我们处理的是信息,并不是电子邮件这样的更庞大文本来源,因此我们将不设置此参数值。
Explanation: stop_words
End of explanation
'''
Practice node:
Print the 'count_vector' object which is an instance of 'CountVectorizer()'
'''
print(count_vector)
Explanation: 你可以通过如下所示输出 count_vector 对象,查看该对象的所有参数值:
End of explanation
'''
Solution:
'''
count_vector.fit(documents)
count_vector.get_feature_names()
Explanation: 说明:
使用 fit() 将你的文档数据集与 CountVectorizer 对象进行拟合,并使用 get_feature_names() 方法获得被归类为特征的单词列表。
End of explanation
'''
Solution
'''
doc_array = count_vector.transform(documents).toarray()
doc_array
Explanation: get_feature_names() 方法会返回此数据集的特征名称,即组成 'documents' 词汇表的单词集合。
说明:
创建一个矩阵,行是 4 个文档中每个文档的行,列是每个单词。对应的值(行,列)是该单词(在列中)在特定文档(在行中)中出现的频率。为此,你可以使用 transform() 方法并传入文档数据集作为参数。transform() 方法会返回一个 numpy 整数矩阵,你可以使用 toarray() 将其转换为数组,称之为 'doc_array'
End of explanation
'''
Solution
'''
frequency_matrix = pd.DataFrame(doc_array, columns = count_vector.get_feature_names())
frequency_matrix
Explanation: 现在,对于单词在文档中的出现频率,我们已经获得了整洁的文档表示形式。为了方便理解,下一步我们会将此数组转换为数据帧,并相应地为列命名。
说明:
将我们获得并加载到 'doc_array' 中的数组转换为数据帧,并将列名设为单词名称(你之前使用 get_feature_names() 计算了名称)。将该数据帧命名为 'frequency_matrix'。
End of explanation
'''
Solution
NOTE: sklearn.cross_validation will be deprecated soon to sklearn.model_selection
'''
# split into training and testing sets
# sklearn.cross_validation has since been removed; model_selection is the current home of train_test_split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
df['label'],
random_state=1)
print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0]))
Explanation: 恭喜!你为我们创建的文档数据集成功地实现了 Bag of Words 问题。
直接使用该方法的一个潜在问题是如果我们的文本数据集非常庞大(假设有一大批新闻文章或电子邮件数据),由于语言本身的原因,肯定有某些值比其他值更常见。例如“is”、“the”、“an”等单词、代词、语法结构等会使矩阵出现偏斜并影响到分析结果。
有几种方式可以减轻这种情况。一种方式是使用 stop_words 参数并将其值设为 english。这样会自动忽略 scikit-learn 中的内置英语停用词列表中出现的所有单词(来自输入文本)。
另一种方式是使用 tfidf 方法。该方法已经超出了这门课程的讲解范畴。
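下面给出 tfidf 思路的一个最小示例(仅供参考,本课程后续不会用到;TfidfVectorizer 的接口与 CountVectorizer 基本一致,只是返回 tf-idf 权重而不是原始词频):
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vector = TfidfVectorizer()
tfidf_matrix = tfidf_vector.fit_transform(documents)  # 行为文档、列为单词,值为 tf-idf 权重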
第 3.1 步:训练集和测试集
我们已经知道如何处理 Bag of Words 问题,现在回到我们的数据集并继续我们的分析工作。第一步是将数据集拆分为训练集和测试集,以便稍后测试我们的模型。
说明:
通过在 sklearn 中使用 train_test_split 方法,将数据集拆分为训练集和测试集。使用以下变量拆分数据:
* X_train 是 'sms_message' 列的训练数据。
* y_train 是 'label' 列的训练数据
* X_test 是 'sms_message' 列的测试数据。
* y_test 是 'label' 列的测试数据。
输出每个训练数据和测试数据的行数。
End of explanation
'''
[Practice Node]
The code for this segment is in 2 parts. Firstly, we are learning a vocabulary dictionary for the training data
and then transforming the data into a document-term matrix; secondly, for the testing data we are only
transforming the data into a document-term matrix.
This is similar to the process we followed in Step 2.3
We will provide the transformed data to students in the variables 'training_data' and 'testing_data'.
'''
'''
Solution
'''
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
Explanation: 第 3.2 步:对数据集应用 Bag of Words 流程。
我们已经拆分了数据,下个目标是按照第 2 步:Bag of words 中的步骤操作,并将数据转换为期望的矩阵格式。为此,我们将像之前一样使用 CountVectorizer()。我们需要完成两步:
首先,我们需要对 CountVectorizer()拟合训练数据 (X_train) 并返回矩阵。
其次,我们需要转换测试数据 (X_test) 以返回矩阵。
注意:X_train 是数据集中 'sms_message' 列的训练数据,我们将使用此数据训练模型。
X_test 是 'sms_message' 列的测试数据,我们将使用该数据(转换为矩阵后)进行预测。然后在后面的步骤中将这些预测与 y_test 进行比较。
我们暂时为你提供了进行矩阵转换的代码!
End of explanation
'''
Instructions:
Calculate probability of getting a positive test result, P(Pos)
'''
'''
Solution (skeleton code will be provided)
'''
# P(D)
p_diabetes = 0.01
# P(~D)
p_no_diabetes = 0.99
# Sensitivity or P(Pos|D)
p_pos_diabetes = 0.9
# Specificity or P(Neg|~D)
p_neg_no_diabetes = 0.9
# P(Pos)
p_pos = p_diabetes * p_pos_diabetes + (p_no_diabetes *(1- p_neg_no_diabetes))
print('The probability of getting a positive test result P(Pos) is: {}'.format(p_pos))
Explanation: 第 4.1 步:从头实现贝叶斯定理
我们的数据集已经是我们希望的格式,现在可以进行任务的下一步了,即研究用来做出预测并将信息分类为垃圾信息或非垃圾信息的算法。记得在该项目的开头,我们简要介绍了贝叶斯定理,现在我们将深入讲解该定理。通俗地说,贝叶斯定理根据与相关事件有关的其他事件的概率计算该事件的发生概率。它由先验概率(我们知道的概率或提供给我们的概率)和后验概率(我们希望用先验部分计算的概率)组成。
我们用一个简单的示例从头实现贝叶斯定理。假设我们要根据某人接受糖尿病检测后获得阳性结果计算此人有糖尿病的概率。
在医学领域,此类概率非常重要,因为它们涉及的是生死情况。
我们假设:
P(D) 是某人患有糖尿病的概率。值为 0.01,换句话说,普通人群中有 1% 的人患有糖尿病(免责声明:这些值只是假设,并非任何医学研究的结论)。
P(Pos):是获得阳性测试结果的概率。
P(Neg):是获得阴性测试结果的概率。
P(Pos|D):是本身有糖尿病并且获得阳性测试结果的概率,值为 0.9,换句话说,该测试在 90% 的情况下是正确的。亦称为敏感性或真正例率。
P(Neg|~D):是本身没有糖尿病并且获得阴性测试结果的概率,值也为 0.9 ,因此在 90% 的情况下是正确的。亦称为特异性或真负例率。
贝叶斯公式如下所示:
<img src="images/bayes_formula.png" height="242" width="242">
P(A):A 独立发生的先验概率。在我们的示例中为 P(D),该值已经提供给我们了 。
P(B):B 独立发生的先验概率。在我们的示例中为 P(Pos)。
P(A|B):在给定 B 的情况下 A 发生的后验概率,在我们的示例中为 P(D|Pos),即某人的测试结果为阳性时患有糖尿病的概率。这是我们要计算的值。
P(B|A):在给定 A 的情况下 B 可能发生的概率。在我们的示例中为 P(Pos|D),该值已经提供给我们了 。
将这些值代入贝叶斯定理公式中:
P(D|Pos) = P(D) * P(Pos|D) / P(Pos)
获得阳性测试结果 P(Pos) 的概率可以使用敏感性和特异性来计算,如下所示:
P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1 - Specificity)]
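代入本例的数值可以快速核对一下:P(Pos) = 0.01 × 0.9 + 0.99 × (1 - 0.9) = 0.009 + 0.099 = 0.108;进而 P(D|Pos) = 0.009 / 0.108 ≈ 0.083,P(~D|Pos) = 0.099 / 0.108 ≈ 0.917,两者之和为 1。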
End of explanation
'''
Instructions:
Compute the probability of an individual having diabetes, given that, that individual got a positive test result.
In other words, compute P(D|Pos).
The formula is: P(D|Pos) = (P(D) * P(Pos|D) / P(Pos)
'''
'''
Solution
'''
# P(D|Pos)
p_diabetes_pos = (p_diabetes * p_pos_diabetes)/p_pos
print('Probability of an individual having diabetes, given that that individual got a positive test result is:\
',format(p_diabetes_pos))
'''
Instructions:
Compute the probability of an individual not having diabetes, given that, that individual got a positive test result.
In other words, compute P(~D|Pos).
The formula is: P(~D|Pos) = P(~D) * P(Pos|~D) / P(Pos)
Note that P(Pos|~D) can be computed as 1 - P(Neg|~D).
Therefore:
P(Pos|~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1
'''
'''
Solution
'''
# P(Pos|~D)
p_pos_no_diabetes = 0.1
# P(~D|Pos)
p_no_diabetes_pos = (p_pos_no_diabetes * p_no_diabetes)/p_pos
print('Probability of an individual not having diabetes, given that that individual got a positive test result is:'\
,p_no_diabetes_pos)
Explanation: 我们可以利用所有这些信息计算后验概率,如下所示:
某人测试结果为阳性时患有糖尿病的概率为:
P(D|Pos) = (P(D) * Sensitivity)) / P(Pos)
某人测试结果为阳性时没有糖尿病的概率为:
P(~D|Pos) = (P(~D) * (1-Specificity)) / P(Pos)
后验概率的和将始终为 1。
End of explanation
根据上述公式和贝叶斯定理,我们可以进行以下计算:`P(J|F,I)` = `(P(J) * P(F|J) * P(I|J)) / P(F,I)`。在此等式中,`P(F,I)` 是在研究中提到“自由”和“移民”的概率。
Explanation: 恭喜!你从头实现了贝叶斯定理。你的分析表明即使某人的测试结果为阳性,他/她也有 8.3% 的概率实际上患有糖尿病,以及 91.67% 的概率没有糖尿病。当然前提是全球只有 1% 的人群患有糖尿病,这只是个假设。
“朴素贝叶斯”中的“朴素”一词是什么意思?
朴素贝叶斯中的“朴素”一词实际上是指,算法在进行预测时使用的特征相互之间是独立的,但实际上并非始终这样。在我们的糖尿病示例中,我们只考虑了一个特征,即测试结果。假设我们添加了另一个特征“锻炼”。假设此特征具有二元值 0 和 1,0 表示某人一周的锻炼时间不超过 2 天,1 表示某人一周的锻炼时间超过 2 天。如果我们要同时使用这两个特征(即测试结果和“锻炼”特征的值)计算最终概率,贝叶斯定理将不可行。朴素贝叶斯是贝叶斯定理的一种延伸,假设所有特征相互之间是独立的。
第 4.2 步:从头实现朴素贝叶斯
你已经知道贝叶斯定理的详细原理,现在我们将用它来考虑有多个特征的情况。
假设有两个政党的候选人,“Jill Stein”是绿党候选人,“Gary Johnson”是自由党的候选人,两位候选人在演讲中提到“自由”、“移民”和“环境”这些字眼的概率为:
Jill Stein 提到“自由”的概率:0.1 ---------> P(F|J)
Jill Stein 提到“移民”的概率:0.1 -----> P(I|J)
Jill Stein 提到“环境”的概率:0.8 -----> P(E|J)
Gary Johnson 提到“自由”的概率:0.7 -------> P(F|G)
Gary Johnson 提到“移民”的概率:0.2 ---> P(I|G)
Gary Johnson 提到“环境”的概率:0.1 ---> P(E|G)
假设 Jill Stein 发表演讲的概率 P(J) 是 0.5,Gary Johnson 也是 P(G) = 0.5。
了解这些信息后,如果我们要计算 Jill Stein 提到“自由”和“移民”的概率,该怎么做呢?这时候朴素贝叶斯定理就派上用场了,我们将考虑两个特征:“自由”和“移民”。
现在我们可以定义朴素贝叶斯定理的公式:
<img src="images/naivebayes.png" height="342" width="342">
在该公式中,y 是分类变量,即候选人的姓名,x1 到 xn 是特征向量,即单个单词。该定理假设每个特征向量或单词 (xi) 相互之间是独立的。
为了详细讲解该公式,我们需要计算以下后验概率:
P(J|F,I):Jill Stein 提到“自由”和“移民”的概率。
End of explanation
根据上述公式,我们可以进行以下计算:`P(G|F,I)` = `(P(G) * P(F|G) * P(I|G)) / P(F,I)`
'''
Instructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or
P(F,I).
The first step is multiplying the probabilities of Jill Stein giving a speech with her individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text
The second step is multiplying the probabilities of Gary Johnson giving a speech with his individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text
The third step is to add both of these probabilities and you will get P(F,I).
'''
'''
Solution: Step 1
'''
# P(J)
p_j = 0.5
# P(F/J)
p_j_f = 0.1
# P(I/J)
p_j_i = 0.1
p_j_text = p_j_f * p_j_i * p_j
print(p_j_text)
'''
Solution: Step 2
'''
# P(G)
p_g = 0.5
# P(F/G)
p_g_f = 0.7
# P(I/G)
p_g_i = 0.2
p_g_text = p_g_f * p_g_i * p_g;
print(p_g_text)
'''
Solution: Step 3: Compute P(F,I) and store in p_f_i
'''
p_f_i = p_j_text + p_g_text
print('Probability of words freedom and immigration being said are: ', format(p_f_i))
Explanation: P(G|F,I):Gary Johnson 提到“自由”和“移民”的概率。
End of explanation
'''
Instructions:
Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(F|J) * P(I|J)) / P(F,I) and store it in a variable p_j_fi
'''
'''
Solution
'''
p_j_fi = p_j_text / p_f_i
print('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))
'''
Instructions:
Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(F|G) * P(I|G)) / P(F,I) and store it in a variable p_g_fi
'''
'''
Solution
'''
p_g_fi = p_g_text/p_f_i
print('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi))
Explanation: 现在可以计算 P(J|F,I) 的概率,即 Jill Stein 提到“自由”和“移民”的概率,以及 P(G|F,I),即 Gary Johnson 提到“自由”和“移民”的概率。
End of explanation
'''
Instructions:
We have loaded the training data into the variable 'training_data' and the testing data into the
variable 'testing_data'.
Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier
'naive_bayes'. You will be training the classifier using 'training_data' and y_train' from our split earlier.
'''
'''
Solution
'''
from sklearn.naive_bayes import MultinomialNB
naive_bayes = MultinomialNB()
naive_bayes.fit(training_data,y_train)
'''
Instructions:
Now that our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.
'''
'''
Solution
'''
predictions = naive_bayes.predict(testing_data)
print(predictions)
Explanation: 可以看出,和贝叶斯定理一样,后验概率之和等于 1。恭喜!你从头实现了朴素贝叶斯定理。分析表明,绿党的 Jill Stein 在演讲中提到“自由”和“移民”的概率只有 6.6%,而自由党的 Gary Johnson 有 93.3% 的可能性会提到这两个词。
另一个比较常见的朴素贝叶斯定理应用示例是在搜索引擎中搜索“萨克拉门托国王”。为了使我们能够获得与萨克拉门托国王队 NBA 篮球队相关的结果,搜索引擎需要将这两个单词关联到一起,而不是单独处理它们,否则就会获得标有“萨克拉门托”的图片(例如风光图片)以及关于“国王”的图片(可能是历史上的国王),而实际上我们想要搜索的是关于篮球队的图片。这是一种搜索引擎将单词当做非独立个体(因此采用的是“朴素”方式)的经典示例。
将此方法应用到我们的垃圾信息分类问题上,朴素贝叶斯算法会查看每个单词,而不是将它们当做有任何联系的关联体。对于垃圾内容检测器来说,这么做通常都可行,因为有些禁用词几乎肯定会被分类为垃圾内容,例如包含“伟哥”的电子邮件通常都被归类为垃圾邮件。
第 5 步:使用 scikit-learn 实现朴素贝叶斯
幸运的是,sklearn 具有多个朴素贝叶斯实现,这样我们就不用从头进行计算。我们将使用 sklearns 的 sklearn.naive_bayes 方法对我们的数据集做出预测。
具体而言,我们将使用多项式朴素贝叶斯实现。这个分类器适合分类离散特征(例如我们的单词计数文本分类)。它会将整数单词计数作为输入。另一方面,高斯朴素贝叶斯更适合连续数据,因为它假设输入数据是高斯(正态)分布。
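补充一点(仅作参考):MultinomialNB 还提供 alpha 参数(默认值 1.0),即拉普拉斯平滑,用来避免测试集中出现训练时未见过的单词而导致概率为 0;本项目直接使用默认值即可:
from sklearn.naive_bayes import MultinomialNB
nb_demo = MultinomialNB(alpha=1.0)  # alpha 为平滑参数,1.0 即默认的拉普拉斯平滑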
End of explanation
'''
Instructions:
Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions
you made earlier stored in the 'predictions' variable.
'''
'''
Solution
'''
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy score: ', format(accuracy_score(y_test,predictions)))
print('Precision score: ', format(precision_score(y_test,predictions)))
print('Recall score: ', format(recall_score(y_test,predictions)))
print('F1 score: ', format(f1_score(y_test,predictions)))
Explanation: 我们已经对测试集进行预测,现在需要检查预测的准确率了。
第 6 步:评估模型
我们已经对测试集进行了预测,下一个目标是评估模型的效果。我们可以采用各种衡量指标,但首先快速总结下这些指标。
准确率 衡量的是分类器做出正确预测的概率,即正确预测的数量与预测总数(测试数据点的数量)之比。
精确率 指的是分类为垃圾信息的信息实际上是垃圾信息的概率,即真正例(分类为垃圾内容并且实际上是垃圾内容的单词)与所有正例(所有分类为垃圾内容的单词,无论是否分类正确)之比,换句话说,是以下公式的比值结果:
[True Positives/(True Positives + False Positives)]
召回率(敏感性)表示实际上为垃圾信息并且被分类为垃圾信息的信息所占比例,即真正例(分类为垃圾内容并且实际上是垃圾内容的单词)与所有为垃圾内容的单词之比,换句话说,是以下公式的比值结果:
[True Positives/(True Positives + False Negatives)]
对于偏态分类分布问题(我们的数据集就属于偏态分类),例如如果有 100 条信息,只有 2 条是垃圾信息,剩下的 98 条不是,则准确率本身并不是很好的指标。我们将 90 条信息分类为垃圾信息(包括 2 条垃圾信息,但是我们将其分类为非垃圾信息,因此它们属于假负例),并将 10 条信息分类为垃圾信息(所有 10 条都是假正例),依然会获得比较高的准确率分数。对于此类情形,精确率和召回率非常实用。可以通过这两个指标获得 F1 分数,即精确率和召回率分数的加权平均值。该分数的范围是 0 到 1,1 表示最佳潜在 F1 分数。
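作为补充,F1 分数通常按精确率和召回率的调和平均计算:F1 = 2 × Precision × Recall / (Precision + Recall)。只要精确率或召回率中有一个很低,F1 就会明显下降,因此在类别偏态的数据上比单看准确率更可靠。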
我们将使用所有四个指标确保我们的模型效果很好。这四个指标的值范围都在 0 到 1 之间,分数尽量接近 1 可以很好地表示模型的效果如何。
End of explanation |
2,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center><u><u>Bayesian Modeling for the Busy and the Confused - Part II</u></u></center>
<center><i>Markov Chain Monte-Carlo</i><center>
Currently, the capacity to gather data is far ahead of the ability to generate meaningful insight using conventional approaches. Hopes of alleviating this bottleneck has come through the application of machine learning tools. Among these tools one that is increasingly garnering traction is probabilistic programming, particularly Bayesian modeling. In this paradigm, variables that are used to define models carry a probabilistic distribution rather than a scalar value. "Fitting" a model to data can then , simplistically, be construed as finding the appropriate parameterization for these distributions, given the model structure and the data. This offers a number of advantages over other methods, not the least of which is the estimation of uncertainty around model results. This in turn can better inform subsequent processes, such as decision-making, and/or scientific discovery.
<br><br>
The present is the first of a two-notebook series, the subject of which is a brief, basic, but hands-on programmatic introduction to Bayesian modeling. This notebook contains an of a few key probability principles relevant to Bayesian inference. An illustration of how to put these in practice follows. In particular, I will explain one of the conmore intuitve approaches to Bayesian computation; Grid Approximation (GA). With this framework I will show how to create simple models that can be used to interpret and predict real world data. <br>
GA is computationally intensive and runs into problems quickly when the data set is large and/or the model increases in complexity. One of the more popular solutions to this problem is the use of the Markov Chain Monte-Carlo (MCMC) algorithm. The implementation of MCMC in Bayesian models will be the subject of the second notebook of this series.
<br>
As of this writing the most popular programming language in machine learning is Python. Python is an easy language to pickup
Step1: Under the hood
Step2: Timing MCMC
Step3: <img src='./resources/mcmc_1.svg?modified="1"'>
Step4: <img src='./figJar/Presentation/mcmc_2.svg?modified=2'>
Step5: What's going on?
Highly autocorrelated trace
Step6: <center>
<img src="./resources/graph_m1.svg"/>
</center>
Step7: Back to Contents
<a id='Reg'></a>
<u><font color='purple'>Tutorial Overview
Step8: Regression coefficients easier to interpret with centered predictor
Step9: $$ y = \alpha + \beta x_c$$<br>
$\rightarrow \alpha=y$ when $x=\bar{x}$<br>
$\rightarrow \beta=\Delta y$ when $x$ increases by one unit
Step10: Back to Contents
<a id='RegPyMC3'></a>
Regression
Step11: <center>
<img src="./resources/m_vague_graph.svg"/>
</center>
Back to Contents
<a id='PriorCheck'></a>
Regression
Step12: <center>
<img src='./figJar/Presentation/prior_checks_1.png?modified=3' width=65%>
</center
Step13: <table>
<tr>
<td>
<img src='./resources/prior_checks_1.png?modif=1' />
</td>
<td>
<img src='./resources/prior_checks_2.png?modif=2' />
</td>
</tr>
</table>
Back to Contents
<a id='Mining'></a>
Regression
Step14: <center>
<img src='./resources/reg_posteriors.svg'/>
</center>
Back to Contents
<a id='UNC'></a>
Regression
Step15: model uncertainty
Step16: <center>
<img src='./resources/mu_posterior.svg/'>
</center>
prediction uncertainty | Python Code:
import pickle
import warnings
import sys
from IPython.display import Image, HTML
import pandas as pd
import numpy as np
from scipy.stats import norm, uniform
gaussian = norm  # keep the 'gaussian' alias used elsewhere; the MCMC helpers below call norm.rvs / norm.logpdf directly
import pymc3 as pm
from theano import shared
import seaborn as sb
import matplotlib.pyplot as pl
from matplotlib import rcParams
from matplotlib import ticker as mtick
import arviz as ar
print('Versions:')
print('---------')
print(f'python: {sys.version.split("|")[0]}')
print(f'numpy: {np.__version__}')
print(f'pandas: {pd.__version__}')
print(f'seaborn: {sb.__version__}')
print(f'pymc3: {pm.__version__}')
print(f'arviz: {ar.__version__}')
%matplotlib inline
warnings.filterwarnings('ignore', category=FutureWarning)
Explanation: <center><u><u>Bayesian Modeling for the Busy and the Confused - Part II</u></u></center>
<center><i>Markov Chain Monte-Carlo</i><center>
Currently, the capacity to gather data is far ahead of the ability to generate meaningful insight using conventional approaches. Hopes of alleviating this bottleneck has come through the application of machine learning tools. Among these tools one that is increasingly garnering traction is probabilistic programming, particularly Bayesian modeling. In this paradigm, variables that are used to define models carry a probabilistic distribution rather than a scalar value. "Fitting" a model to data can then , simplistically, be construed as finding the appropriate parameterization for these distributions, given the model structure and the data. This offers a number of advantages over other methods, not the least of which is the estimation of uncertainty around model results. This in turn can better inform subsequent processes, such as decision-making, and/or scientific discovery.
<br><br>
The present is the first of a two-notebook series, the subject of which is a brief, basic, but hands-on programmatic introduction to Bayesian modeling. This notebook contains an overview of a few key probability principles relevant to Bayesian inference. An illustration of how to put these into practice follows. In particular, I will explain one of the more intuitive approaches to Bayesian computation: Grid Approximation (GA). With this framework I will show how to create simple models that can be used to interpret and predict real-world data. <br>
GA is computationally intensive and runs into problems quickly when the data set is large and/or the model increases in complexity. One of the more popular solutions to this problem is the use of the Markov Chain Monte-Carlo (MCMC) algorithm. The implementation of MCMC in Bayesian models will be the subject of the second notebook of this series.
<br>
As of this writing the most popular programming language in machine learning is Python. Python is an easy language to pickup: pedagogical resources abound. Python is free, open source, and a large number of very useful libraries have been written over the years that have propelled it to its current place of prominence in a number of fields, in addition to machine learning.
<br><br>
I use Python (3.6+) code to illustrate the mechanics of Bayesian inference in lieu of lengthy explanations. I also use a number of dedicated Python libraries that shortens the code considerably. A solid understanding of Bayesian modeling cannot be spoon-fed and can only come from getting one's hands dirty.. Emphasis is therefore on readable reproducible code. This should ease the work the interested has to do to get some practice re-running the notebook and experimenting with some of the coding and Bayesian modeling patterns presented. Some know-how is required regarding installing and running a Python distribution, the required libraries, and jupyter notebooks; this is easily gleaned from the internet. A popular option in the machine learning community is Anaconda.
<a id='TOP'></a>
Notebook Contents
Basics: Joint probability, Inverse probability and Bayes' Theorem
Example: Inferring the Statistical Distribution of Chlorophyll from Data
Grid Approximation
Impact of priors
Impact of data set size
MCMC
PyMC3
Regression
Data Preparation
Regression in PyMC3
Checking Priors
Model Fitting
Flavors of Uncertainty
[Final Comments](#Conclusion)
End of explanation
def mcmc(data, μ_0=0.5, n_samples=1000,):
print(f'{data.size} data points')
data = data.reshape(1, -1)
# set priors
σ=0.75 # keep σ fixed for simplicity
trace_μ = np.nan * np.ones(n_samples) # trace: where the sampler has been
trace_μ[0] = μ_0 # start with a first guess
for i in range(1, n_samples):
proposed_μ = norm.rvs(loc=trace_μ[i-1], scale=0.1, size=1)
prop_par_dict = dict(μ=proposed_μ, σ=σ)
curr_par_dict = dict(μ=trace_μ[i-1], σ=σ)
log_prob_prop = get_log_lik(data, prop_par_dict
) + get_log_prior(prop_par_dict)
log_prob_curr = get_log_lik(data, curr_par_dict
) + get_log_prior(curr_par_dict)
ratio = np.exp(log_prob_prop - log_prob_curr)
if ratio > 1:
# accept proposal
trace_μ[i] = proposed_μ
else:
# evaluate low proba proposal
if uniform.rvs(size=1, loc=0, scale=1) > ratio:
# reject proposal
trace_μ[i] = trace_μ[i-1]
else:
# accept proposal
trace_μ[i] = proposed_μ
return trace_μ
def get_log_lik(data, param_dict):
return np.sum(norm.logpdf(data, loc=param_dict['μ'],
scale=param_dict['σ']
),
axis=1)
def get_log_prior(par_dict, loc=1, scale=1):
return norm.logpdf(par_dict['μ'], loc=loc, scale=scale)
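As a quick sanity check of the sampler defined above (purely illustrative: the 'true' mean of 0.8 is an arbitrary choice, and σ is held fixed at 0.75 inside mcmc):
# draw synthetic data with a known mean and confirm the trace settles near it
fake_data = norm.rvs(loc=0.8, scale=0.75, size=200, random_state=42)
demo_trace = mcmc(fake_data, μ_0=0.0, n_samples=500)
print(demo_trace[250:].mean())  # should land close to 0.8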
Explanation: Under the hood: Inferring chlorophyll distribution
~~Grid approximation: computing probability everywhere~~
<font color='red'>Magical MCMC: Dealing with computational complexity</font>
Probabilistic Programming with PyMC3: Industrial grade MCMC
Back to Contents
<a id="MCMC"></a>
Magical MCMC: Dealing with computational complexity
Grid approximation:
useful for understanding mechanics of Bayesian computation
computationally intensive
impractical and often intractable for large data sets or high-dimension models
MCMC allows sampling <u>where it probabilistically matters</u>:
compute current probability given location in parameter space
propose jump to new location in parameter space
compute new probability at proposed location
jump to new location if $\frac{new\ probability}{current\ probability}>1$
jump to new location if $\frac{new\ probability}{current\ probability}>\gamma$, where $\gamma$ is a fresh random draw from $Uniform(0, 1)$ (i.e., accept with probability equal to the ratio)
otherwise stay in current location
End of explanation
%%time
mcmc_n_samples = 2000
trace1 = mcmc(data=df_data_s.chl_l.values, n_samples=mcmc_n_samples)
f, ax = pl.subplots(nrows=2, figsize=(8, 8))
ax[0].plot(np.arange(mcmc_n_samples), trace1, marker='.',
ls=':', color='k')
ax[0].set_title('trace of μ, 500 data points')
ax[1].set_title('μ marginal posterior')
pm.plots.kdeplot(trace1, ax=ax[1], label='mcmc',
color='orange', lw=2, zorder=1)
ax[1].legend(loc='upper left')
ax[1].set_ylim(bottom=0)
df_μ = df_grid_3.groupby(['μ']).sum().drop('σ',
axis=1)[['post_prob']
].reset_index()
ax2 = ax[1].twinx()
df_μ.plot(x='μ', y='post_prob', ax=ax2, color='k',
label='grid',)
ax2.set_ylim(bottom=0);
ax2.legend(loc='upper right')
f.tight_layout()
f.savefig('./figJar/Presentation/mcmc_1.svg')
Explanation: Timing MCMC
End of explanation
%%time
samples = 2000
trace2 = mcmc(data=df_data.chl_l.values, n_samples=samples)
f, ax = pl.subplots(nrows=2, figsize=(8, 8))
ax[0].plot(np.arange(samples), trace2, marker='.',
ls=':', color='k')
ax[0].set_title(f'trace of μ, {df_data.chl_l.size} data points')
ax[1].set_title('μ marginal posterior')
pm.plots.kdeplot(trace2, ax=ax[1], label='mcmc',
color='orange', lw=2, zorder=1)
ax[1].legend(loc='upper left')
ax[1].set_ylim(bottom=0)
f.tight_layout()
f.savefig('./figJar/Presentation/mcmc_2.svg')
Explanation: <img src='./resources/mcmc_1.svg?modified="1"'>
End of explanation
f, ax = pl.subplots(ncols=2, figsize=(12, 5))
ax[0].stem(pm.autocorr(trace1[1500:]))
ax[1].stem(pm.autocorr(trace2[1500:]))
ax[0].set_title(f'{df_data_s.chl_l.size} data points')
ax[1].set_title(f'{df_data.chl_l.size} data points')
f.suptitle('trace autocorrelation', fontsize=19)
f.savefig('./figJar/Presentation/grid8.svg')
f, ax = pl.subplots(nrows=2, figsize=(8, 8))
thinned_trace = np.random.choice(trace2[100:], size=200, replace=False)
ax[0].plot(np.arange(200), thinned_trace, marker='.',
ls=':', color='k')
ax[0].set_title('thinned trace of μ')
ax[1].set_title('μ marginal posterior')
pm.plots.kdeplot(thinned_trace, ax=ax[1], label='mcmc',
color='orange', lw=2, zorder=1)
ax[1].legend(loc='upper left')
ax[1].set_ylim(bottom=0)
f.tight_layout()
f.savefig('./figJar/Presentation/grid9.svg')
f, ax = pl.subplots()
ax.stem(pm.autocorr(thinned_trace[:20]));
f.savefig('./figJar/Presentation/stem2.svg', dpi=300, format='svg');
Explanation: <img src='./figJar/Presentation/mcmc_2.svg?modified=2'>
End of explanation
with pm.Model() as m1:
μ_ = pm.Normal('μ', mu=1, sd=1)
σ = pm.Uniform('σ', lower=0, upper=2)
lkl = pm.Normal('likelihood', mu=μ_, sd=σ,
observed=df_data.chl_l.dropna())
graph_m1 = pm.model_to_graphviz(m1)
graph_m1.format = 'svg'
graph_m1.render('./figJar/Presentation/graph_m1');
Explanation: What's going on?
Highly autocorrelated trace: <br>
$\rightarrow$ inadequate parameter space exploration<br>
$\rightarrow$ poor convergence...
Metropolis MCMC<br>
$\rightarrow$ easy to implement + memory efficient<br>
$\rightarrow$ inefficient parameter space exploration<br>
$\rightarrow$ better MCMC sampler?
Hamiltonian Monte Carlo (HMC)
Greatly improved convergence
Well mixed traces are a signature and an easy diagnostic
HMC does require a lot of tuning,
Not practical for the inexperienced applied statistician or scientist
No-U-Turn Sampler (NUTS), HMC that automates most tuning steps
NUTS scales well to complex problems with many parameters (1000's)
Implemented in popular libraries
Probabilistic modeling for the beginner
<font color='red'>Under the hood: Inferring chlorophyll distribution</font>
~~Grid approximation: computing probability everywhere~~
~~MCMC: how it works~~
<font color='red'>Probabilistic Programming with PyMC3: Industrial grade MCMC </font>
Back to Contents
<a id='PyMC3'></a>
<u>Probabilistic Programming with PyMC3</u>
relatively simple syntax
easily used in conjunction with mainstream python scientific data structures<br>
$\rightarrow$numpy arrays <br>
$\rightarrow$pandas dataframes
models of reasonable complexity span ~10-20 lines.
End of explanation
with m1:
trace_m1 = pm.sample(2000, tune=1000, chains=4)
pm.traceplot(trace_m1);
ar.plot_posterior(trace_m1, kind='hist', round_to=2);
Explanation: <center>
<img src="./resources/graph_m1.svg"/>
</center>
End of explanation
df_data.head().T
df_data['Gr-MxBl'] = -1 * df_data['MxBl-Gr']
Explanation: Back to Contents
<a id='Reg'></a>
<u><font color='purple'>Tutorial Overview:</font></u>
Probabilistic modeling for the beginner<br>
$\rightarrow$~~The basics~~<br>
$\rightarrow$~~Starting easy: inferring chlorophyll~~<br>
<font color='red'>$\rightarrow$Regression: adding a predictor to estimate chlorophyll</font>
Back to Contents
<a id='DataPrep'></a>
Regression: Adding a predictor to estimate chlorophyll
<font color=red>Data preparation</font>
Writing a regression model in PyMC3
Are my priors making sense?
Model fitting
Flavors of uncertainty
Linear regression takes the form
$$ y = \alpha + \beta x $$
where
$$\ \ \ \ \ y = log_{10}(chl)$$ and $$x = log_{10}\left(\frac{Gr}{MxBl}\right)$$
End of explanation
df_data['Gr-MxBl_c'] = df_data['Gr-MxBl'] - df_data['Gr-MxBl'].mean()
df_data[['Gr-MxBl_c', 'chl_l']].info()
x_c = df_data.dropna()['Gr-MxBl_c'].values
y = df_data.dropna().chl_l.values
Explanation: Regression coefficients easier to interpret with centered predictor:<br><br>
$$x_c = x - \bar{x}$$
End of explanation
g3 = sb.PairGrid(df_data.loc[:, ['Gr-MxBl_c', 'chl_l']], height=3,
diag_sharey=False,)
g3.map_diag(sb.kdeplot, color='k')
g3.map_offdiag(sb.scatterplot, color='k');
make_lower_triangle(g3)
f = pl.gcf()
axs = f.get_axes()
xlabel = r'$log_{10}\left(\frac{Rrs_{green}}{max(Rrs_{blue})}\right), centered$'
ylabel = r'$log_{10}(chl)$'
axs[0].set_xlabel(xlabel)
axs[2].set_xlabel(xlabel)
axs[2].set_ylabel(ylabel)
axs[3].set_xlabel(ylabel)
f.tight_layout()
f.savefig('./figJar/Presentation/pairwise_1.png')
Explanation: $$ y = \alpha + \beta x_c$$<br>
$\rightarrow \alpha=y$ when $x=\bar{x}$<br>
$\rightarrow \beta=\Delta y$ when $x$ increases by one unit
End of explanation
with pm.Model() as m_vague_prior:
# priors
σ = pm.Uniform('σ', lower=0, upper=2)
α = pm.Normal('α', mu=0, sd=1)
β = pm.Normal('β', mu=0, sd=1)
# deterministic model
μ = α + β * x_c
# likelihood
chl_i = pm.Normal('chl_i', mu=μ, sd=σ, observed=y)
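Written out, the model defined in the cell above is
$$chl_i \sim Normal(\mu_i, \sigma), \qquad \mu_i = \alpha + \beta\, x_{c,i}$$
$$\alpha \sim Normal(0, 1), \qquad \beta \sim Normal(0, 1), \qquad \sigma \sim Uniform(0, 2)$$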
Explanation: Back to Contents
<a id='RegPyMC3'></a>
Regression: Adding a predictor to estimate chlorophyll
~~Data preparation~~
<font color=red>Writing a regression model in PyMC3</font>
Are my priors making sense?
Model fitting
Flavors of uncertainty
End of explanation
vague_priors = pm.sample_prior_predictive(samples=500, model=m_vague_prior, vars=['α', 'β',])
x_dummy = np.linspace(-1.5, 1.5, num=50).reshape(-1, 1)
α_prior_vague = vague_priors['α'].reshape(1, -1)
β_prior_vague = vague_priors['β'].reshape(1, -1)
chl_l_prior_μ_vague = α_prior_vague + β_prior_vague * x_dummy
f, ax = pl.subplots( figsize=(6, 5))
ax.plot(x_dummy, chl_l_prior_μ_vague, color='k', alpha=0.1,);
ax.set_xlabel(r'$log_{10}\left(\frac{green}{max(blue)}\right)$, centered')
ax.set_ylabel('$log_{10}(chl)$')
ax.set_title('Vague priors')
ax.set_ylim(-3.5, 3.5)
f.tight_layout(pad=1)
f.savefig('./figJar/Presentation/prior_checks_1.png')
Explanation: <center>
<img src="./resources/m_vague_graph.svg"/>
</center>
Back to Contents
<a id='PriorCheck'></a>
Regression: Adding a predictor to estimate chlorophyll
~~Data preparation~~
~~Writing a regression model in PyMC3~~
<font color=red>Are my priors making sense?</font>
Model fitting
Flavors of uncertainty
End of explanation
with pm.Model() as m_informative_prior:
α = pm.Normal('α', mu=0, sd=0.2)
β = pm.Normal('β', mu=0, sd=0.5)
σ = pm.Uniform('σ', lower=0, upper=2)
μ = α + β * x_c
chl_i = pm.Normal('chl_i', mu=μ, sd=σ, observed=y)
prior_info = pm.sample_prior_predictive(model=m_informative_prior, vars=['α', 'β'])
α_prior_info = prior_info['α'].reshape(1, -1)
β_prior_info = prior_info['β'].reshape(1, -1)
chl_l_prior_info = α_prior_info + β_prior_info * x_dummy
f, ax = pl.subplots( figsize=(6, 5))
ax.plot(x_dummy, chl_l_prior_info, color='k', alpha=0.1,);
ax.set_xlabel(r'$log_{10}\left(\frac{green}{max(blue)}\right)$, centered')
ax.set_ylabel('$log_{10}(chl)$')
ax.set_title('Weakly informative priors')
ax.set_ylim(-3.5, 3.5)
f.tight_layout(pad=1)
f.savefig('./figJar/Presentation/prior_checks_2.png')
Explanation: <center>
<img src='./figJar/Presentation/prior_checks_1.png?modified=3' width=65%>
</center>
End of explanation
with m_vague_prior:
trace_vague = pm.sample(2000, tune=1000, chains=4)
with m_informative_prior:
trace_inf = pm.sample(2000, tune=1000, chains=4)
f, axs = pl.subplots(ncols=2, nrows=2, figsize=(12, 7))
ar.plot_posterior(trace_vague, var_names=['α', 'β'], round_to=2, ax=axs[0,:], kind='hist');
ar.plot_posterior(trace_inf, var_names=['α', 'β'], round_to=2, ax=axs[1, :], kind='hist',
color='brown');
axs[0,0].tick_params(rotation=20)
axs[0,0].text(-0.137, 430, 'vague priors',
fontdict={'fontsize': 15})
axs[1,0].tick_params(rotation=20)
axs[1,0].text(-0.137, 430, 'informative priors',
fontdict={'fontsize': 15})
f.tight_layout()
f.savefig('./figJar/Presentation/reg_posteriors.svg')
Explanation: <table>
<tr>
<td>
<img src='./resources/prior_checks_1.png?modif=1' />
</td>
<td>
<img src='./resources/prior_checks_2.png?modif=2' />
</td>
</tr>
</table>
Back to Contents
<a id='Mining'></a>
Regression: Adding a predictor to estimate chlorophyll
~~Data preparation~~
~~Writing a regression model in PyMC3~~
~~Are my priors making sense?~~
<font color=red>Model fitting</font>
Flavors of uncertainty
End of explanation
α_posterior = trace_inf.get_values('α').reshape(1, -1)
β_posterior = trace_inf.get_values('β').reshape(1, -1)
σ_posterior = trace_inf.get_values('σ').reshape(1, -1)
Explanation: <center>
<img src='./resources/reg_posteriors.svg'/>
</center>
Back to Contents
<a id='UNC'></a>
Regression: Adding a predictor to estimate chlorophyll
~~Data preparation~~
~~Writing a regression model in PyMC3~~
~~Are my priors making sense?~~
~~Data review and model fitting~~
<font color=red>Flavors of uncertainty</font>
Two types of uncertainties:
1. model uncertainty
2. prediction uncertainty
End of explanation
μ_posterior = α_posterior + β_posterior * x_dummy
pl.plot(x_dummy, μ_posterior[:, ::16], color='k', alpha=0.1);
pl.plot(x_dummy, μ_posterior[:, 1], color='k', label='model mean')
pl.scatter(x_c, y, color='orange', edgecolor='k', alpha=0.5, label='obs'); pl.legend();
pl.ylim(-2.5, 2.5); pl.xlim(-1, 1);
pl.xlabel(r'$log_{10}\left(\frac{Gr}{max(Blue)}\right)$')
pl.ylabel(r'$log_{10}(chlorophyll)$')
f = pl.gcf()
f.savefig('./figJar/Presentation/mu_posterior.svg')
Explanation: model uncertainty: uncertainty around the model mean
End of explanation
ppc = norm.rvs(loc=μ_posterior, scale=σ_posterior);
ci_94_perc = pm.hpd(ppc.T, alpha=0.06);
pl.scatter(x_c, y, color='orange', edgecolor='k', alpha=0.5, label='obs'); pl.legend();
pl.plot(x_dummy, ppc.mean(axis=1), color='k', label='mean prediction');
pl.fill_between(x_dummy.flatten(), ci_94_perc[:, 0], ci_94_perc[:, 1], alpha=0.5, color='k',
label='94% credibility interval:\n94% chance that prediction\nwill be in here!');
pl.xlim(-1, 1); pl.ylim(-2.5, 2.5)
pl.legend(fontsize=12, loc='upper left')
f = pl.gcf()
f.savefig('./figJar/Presentation/ppc.svg')
Explanation: <center>
<img src='./resources/mu_posterior.svg/'>
</center>
prediction uncertainty: posterior predictive checks
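As an aside, recent PyMC3 versions also provide a built-in helper for this step; the manual norm.rvs draw above is equivalent in spirit, but for predictions at the observed predictor values one could write (a sketch, assuming PyMC3 >= 3.8):
with m_informative_prior:
    ppc_at_obs = pm.sample_posterior_predictive(trace_inf, samples=1000)  # draws of chl_i at the observed x_c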
End of explanation |
2,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
Step1: Mesh generation by Transfinite Interpolation applied to the sea dike problem
We have implemented and tested our mesh generation approach using Transfinite Interpolation (TFI) in the previous lesson. Now, let's apply it to the problem of the sea dike with strong topography.
Revisiting the sea dike problem
To generate a deformed quad mesh incorporating the strong topography of the sea dike, we only have to describe the topography by a parametrized curve. We can roughly describe it by the following equations
Step2: Unfortunately, the TFI is defined on the unit square, so we have to normalize the sea dike topography, before applying the TFI.
Step3: OK, now we have the normalized dike topography on a unit square, so we can define the parametric curve for the topography.
Step4: No error so far. Before plotting the generated mesh, we have to unnormalize the spatial coordinates. | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Here, I introduce a new library, which is useful
# to define the fonts and size of a figure in a notebook
from pylab import rcParams
# Get rid of a Matplotlib deprecation warning
import warnings
warnings.filterwarnings("ignore")
# Define number of grid points in x-direction and spatial vectors
NXtopo = 100
x_dike = np.linspace(0.0, 61.465, num=NXtopo)
z_dike = np.zeros(NXtopo)
# calculate dike topograpy
def dike_topo(x_dike, z_dike, NX1):
for i in range(NX1):
if(x_dike[i]<4.0):
z_dike[i] = 0.0
if(x_dike[i]>=4.0 and x_dike[i]<18.5):
z_dike[i] = (x_dike[i]-4) * 6.76/14.5
if(x_dike[i]>=18.5 and x_dike[i]<22.5):
z_dike[i] = 6.76
if(x_dike[i]>=22.5 and x_dike[i]<x_dike[-1]):
z_dike[i] = -(x_dike[i]-22.5) * 3.82/21.67 + 6.76
return x_dike, z_dike
# Define figure size
rcParams['figure.figsize'] = 10, 7
# Plot sea dike topography
dike_topo(x_dike,z_dike,NXtopo)
plt.plot(x_dike,z_dike)
plt.title("Sea dike topography" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
Explanation: Mesh generation by Transfinite Interpolation applied to the sea dike problem
We have implemented and tested our mesh generation approach using Transfinite Interpolation (TFI) in the previous lesson. Now, let's apply it to the problem of the sea dike with strong topography.
Revisiting the sea dike problem
To generate a deformed quad mesh incorporating the strong topography of the sea dike, we only have to describe the topography by a parametrized curve. We can roughly describe it by the following equations:
$x = 0\; m - 4\; m\; \rightarrow\; z(x) = 0\; m$
$x = 4\; m - 18.5\; m\; \rightarrow\; z(x) = \frac{6.76}{14.5}(x-4)\; m$
$x = 18.5\; m - 22.5\; m\; \rightarrow\; z(x) = 6.76\; m$
$x = 22.5\; m - 61.465\; m\; \rightarrow\; z(x) = \left(6.76 - \frac{3.82}{21.67}(x-22.5)\right)\; m$
This might be a somewhat rough approximation, because photos of the data acquisition show a smooth transition between the tilted and horizontal surfaces of the dike. Nevertheless, let's try to generate a mesh for this topography model.
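If a vectorized version of the same profile is preferred, numpy's piecewise can express the dike_topo loop above compactly (equivalent output, shown only for illustration):
# same piecewise topography, evaluated with np.piecewise instead of an explicit loop
def dike_topo_vec(x):
    return np.piecewise(x,
                        [x < 4.0,
                         (x >= 4.0) & (x < 18.5),
                         (x >= 18.5) & (x < 22.5),
                         x >= 22.5],
                        [0.0,
                         lambda x: (x - 4.0) * 6.76 / 14.5,
                         6.76,
                         lambda x: 6.76 - (x - 22.5) * 3.82 / 21.67])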
End of explanation
# Normalize sea dike topography
xmax_dike = np.max(x_dike)
zmax_dike = np.max(z_dike)
x_dike_norm = x_dike / xmax_dike
z_dike_norm = z_dike / zmax_dike + 1
# Plot normalized sea dike topography
plt.plot(x_dike_norm,z_dike_norm)
plt.title("Normalized sea dike topography" )
plt.xlabel("x []")
plt.ylabel("z []")
plt.axes().set_aspect('equal')
Explanation: Unfortunately, the TFI is defined on the unit square, so we have to normalize the sea dike topography, before applying the TFI.
End of explanation
# Define parameters for deformed Cartesian mesh
NX = 80
NZ = 20
# Define parametric curves at model boundaries ...
# ... bottom boundary
def Xb(s):
x = s
z = 0.0
xzb = [x,z]
return xzb
# ... top boundary
def Xt(s):
x = s
# normalized x-coordinate s -> unnormalized x-coordinate x_d
x_d = xmax_dike * s
z_d = 0.0
if(x_d<4.0):
z_d = 0.0
if(x_d>=4.0 and x_d<18.5):
z_d = (x_d-4) * 6.76/14.5
if(x_d>=18.5 and x_d<22.5):
z_d = 6.76
if(x_d>=22.5 and x_d<xmax_dike):
z_d = -(x_d-22.5) * 3.82/21.67 + 6.76
# unnormalized z-coordinate z_d -> normalized z-coordinate z
z = z_d / zmax_dike + 1
xzt = [x,z]
return xzt
# ... left boundary
def Xl(s):
x = 0.0
z = s
xzl = [x,z]
return xzl
# ... right boundary
def Xr(s):
x = 1
z = s
xzr = [x,z]
return xzr
# Transfinite interpolation
# Discretize along xi and eta axis
xi = np.linspace(0.0, 1.0, num=NX)
eta = np.linspace(0.0, 1.0, num=NZ)
xi1, eta1 = np.meshgrid(xi, eta)
# Intialize matrices for x and z axis
X = np.zeros((NX,NZ))
Z = np.zeros((NX,NZ))
# loop over cells
for i in range(NX):
Xi = xi[i]
for j in range(NZ):
Eta = eta[j]
xb = Xb(Xi)
xb0 = Xb(0)
xb1 = Xb(1)
xt = Xt(Xi)
xt0 = Xt(0)
xt1 = Xt(1)
xl = Xl(Eta)
xr = Xr(Eta)
# Transfinite Interpolation (Gordon-Hall algorithm)
X[i,j] = (1-Eta) * xb[0] + Eta * xt[0] + (1-Xi) * xl[0] + Xi * xr[0] \
- (Xi * Eta * xt1[0] + Xi * (1-Eta) * xb1[0] + Eta * (1-Xi) * xt0[0] \
+ (1-Xi) * (1-Eta) * xb0[0])
Z[i,j] = (1-Eta) * xb[1] + Eta * xt[1] + (1-Xi) * xl[1] + Xi * xr[1] \
- (Xi * Eta * xt1[1] + Xi * (1-Eta) * xb1[1] + Eta * (1-Xi) * xt0[1] \
+ (1-Xi) * (1-Eta) * xb0[1])
Explanation: OK, now we have the normalized dike topography on a unit square, so we can define the parametric curve for the topography.
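For reference, the transfinite interpolation (Gordon-Hall) blend implemented in the loop above combines the four boundary curves as
$$\mathbf{X}(\xi,\eta) = (1-\eta)\,\mathbf{X}_b(\xi) + \eta\,\mathbf{X}_t(\xi) + (1-\xi)\,\mathbf{X}_l(\eta) + \xi\,\mathbf{X}_r(\eta) - \left[\xi\eta\,\mathbf{X}_t(1) + \xi(1-\eta)\,\mathbf{X}_b(1) + \eta(1-\xi)\,\mathbf{X}_t(0) + (1-\xi)(1-\eta)\,\mathbf{X}_b(0)\right],$$
which reproduces each boundary curve exactly while removing the doubly counted corner contributions.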
End of explanation
# Unnormalize the mesh
X = X * xmax_dike
Z = Z * zmax_dike
# Plot TFI mesh (physical domain)
plt.plot(X, Z, 'k')
plt.plot(X.T, Z.T, 'k')
plt.title("Sea dike TFI grid (physical domain)" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.savefig('sea_dike_TFI.pdf', bbox_inches='tight', format='pdf')
plt.show()
Explanation: No error so far. Before plotting the generated mesh, we have to unnormalize the spatial coordinates.
End of explanation |
2,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Does age correlate with motion?
This has been bothering me for so many of our slack chats that I felt I really needed to start here.
What do we know about motion in our sample!?
Step1: Get the data
Step2: Motion measures
There are two measures we care about that can be used to index motion
Step3: We can see from the plot above that we have a data set of people who do not move all that much and that these two measures correlate well for low motion scans but start to diverge for the scans that have higher motion.
So, back to our original question, does motion correlate with age?
Step4: Yes! It does, and you can see that this correlation is stronger for func_perc_fd. I don't think this is really particularly important and I suspect it is driven by the kurtosis of the distribution. The func_mean_fd distribution is more non-normal (less normal?) than the func_perc_fd and I wonder if this causing the correlation to look messier. To be honest, I don't know and I don't really care. If this is what makes a big difference to our results I'll start to care more ;)
But hang on, we all know that it's important to look at the data so lets make a plot
Step5: Well. That's underinspiring. Does that really count as a significant correlation? Gun to your head would you put that line there?
How does this correlation change when we look at different subsets of data? Specifically different age ranges, motion thresholds and sample sizes.
How does sample size affect the relationship between age and motion?
The following plots show how sample size affects the relationship between age and motion (pearson's r).
I've kept the whole age range (6-18) and I'll show the same plot for 3 different motion thresholds (5%, 15%, 50% bad frames) and for a range of different sample sizes (25, 50, 75, 100, 125 and 150 participants each).
Step6: What I take from this plot is that there is a negative correlation between age and head motion (the older you are the less you move) and that the more participants we have in a sample the more consistent the measure (the narrower the box)
As John has said multiple times
Step7: What I take from this plot is that the correlation with age is less strong when you are more stringent in your exclusion criteria. Which makes sense
Step8: Woah - that's interesting. In this sample we seem to only be able to detect a movement relationship for a 5 year age range (remember that the upper and lower limits are inclusive) when the participants are either 10-14 or 12-16 years old!
Is this pattern related to the threshold? What if we change that?
Step9: So, this to me is the crazy bit that I need to get my head around | Python Code:
import matplotlib.pylab as plt
%matplotlib inline
import numpy as np
import os
import pandas as pd
import seaborn as sns
sns.set_style('white')
sns.set_context('notebook')
from scipy.stats import kurtosis
import sys
%load_ext autoreload
%autoreload 2
sys.path.append('../SCRIPTS/')
import kidsmotion_stats as kms
import kidsmotion_datamanagement as kmdm
import kidsmotion_plotting as kmp
Explanation: Does age correlate with motion?
This has been bothering me for so many of our slack chats that I felt I really needed to start here.
What do we know about motion in our sample!?
End of explanation
behav_data_f = '../Phenotypic_V1_0b_preprocessed1.csv'
behav_df = kmdm.read_in_behavdata(behav_data_f)
Explanation: Get the data
End of explanation
fig, ax_list = kmp.histogram_motion(behav_df)
# Note that there is a warning here but don't worry about it :P
Explanation: Motion measures
There are two measures we care about that can be used to index motion (a sketch of how framewise displacement is typically computed follows the definitions below):
func_mean_fd: mean framewise displacement, measured in mm
func_perc_fd: percentage of frames that were more than 0.2mm displaced from the previous frame.
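For reference, framewise displacement is typically computed from the six rigid-body realignment parameters (Power-style FD); a rough sketch of that calculation, not the ABIDE preprocessing code itself, looks like this:
def framewise_displacement(motion_params, radius=50.0):
    # motion_params: (n_timepoints, 6) array of 3 translations (mm) and 3 rotations (radians)
    rotations_mm = motion_params[:, 3:] * radius           # rotations -> mm on a sphere of the given radius
    combined = np.hstack([motion_params[:, :3], rotations_mm])
    fd = np.abs(np.diff(combined, axis=0)).sum(axis=1)     # sum of absolute frame-to-frame changes
    return np.concatenate([[0.0], fd])                     # the first frame has no preceding frame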
End of explanation
for var in ['func_mean_fd', 'func_perc_fd']:
print(var)
print(' kurtosis = {:2.1f}'.format(kurtosis(behav_df[var])))
print(' corr with age:')
kms.report_correlation(behav_df, 'AGE_AT_SCAN', var, covar_name=None, r_dp=2)
Explanation: We can see from the plot above that we have a data set of people who do not move all that much and that these two measures correlate well for low motion scans but start to diverge for the scans that have higher motion.
So, back to our original question, does motion correlate with age?
End of explanation
fig, ax_list = kmp.corr_motion_age(behav_df, fit_reg=False)
fig, ax_list = kmp.corr_motion_age(behav_df)
Explanation: Yes! It does, and you can see that this correlation is stronger for func_perc_fd. I don't think this is really particularly important and I suspect it is driven by the kurtosis of the distribution. The func_mean_fd distribution is more non-normal (less normal?) than the func_perc_fd and I wonder if this causing the correlation to look messier. To be honest, I don't know and I don't really care. If this is what makes a big difference to our results I'll start to care more ;)
But hang on, we all know that it's important to look at the data so lets make a plot:
End of explanation
age_l = 6
age_u = 18
motion_measure='func_perc_fd'
n_perms = 100
motion_thresh = 50
corr_age_df = pd.DataFrame()
for n in [ 25, 50, 75, 100, 125, 150 ]:
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['N{:2.0f}'.format(n)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='Thr: {:1.0f}%'.format(motion_thresh))
Explanation: Well. That's underinspiring. Does that really count as a significant correlation? Gun to your head would you put that line there?
How does this correlation change when we look at different subsets of data? Specifically different age ranges, motion thresholds and sample sizes.
How does sample size affect the relationship between age and motion?
The following plots show how sample size affects the relationship between age and motion (pearson's r).
I've kept the whole age range (6-18) and I'll show the same plot for 3 different motion thresholds (5%, 15%, 50% bad frames) and for a range of different sample sizes (25, 50, 75, 100, 125 and 150 participants each).
End of explanation
age_l = 6
age_u = 18
motion_measure='func_perc_fd'
n = 100
n_perms = 100
corr_age_df = pd.DataFrame()
for motion_thresh in [ 5, 10, 25, 50 ]:
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['Thr{:1.0f}'.format(motion_thresh)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}'.format(n))
Explanation: What I take from this plot is that there is a negative correlation between age and head motion (the older you are the less you move) and that the more participants we have in a sample the more consistent the measure (the narrower the box)
As John has said multiple times: the fact that more people gives you a better estimate of the population is kinda known already :P
So now we move to look at how the different thresholds affect this correlation...
How does the motion cut off affect the relationship between age and motion?
End of explanation
motion_measure='func_perc_fd'
n = 100
n_perms = 100
motion_thresh = 25
corr_age_df = pd.DataFrame()
for age_l in [ 6, 8, 10, 12, 14 ]:
age_u = age_l + 4
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['{:1.0f} to {:1.0f}'.format(age_l, age_u)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}; Thr: {:1.0f}%'.format(n, motion_thresh))
Explanation: What I take from this plot is that the correlation with age is less strong when you are more stringent in your exclusion criteria. Which makes sense: we're more likely to remove younger people and therefore reduce the correlation with age.
Next on the list is age range, do we see the same pattern across different ages?
How does the age range of our cohort affect the relationship between age and motion?
End of explanation
motion_measure='func_perc_fd'
n = 100
n_perms = 100
motion_thresh = 25
for motion_thresh in [ 5, 10, 25, 50 ]:
corr_age_df = pd.DataFrame()
for age_l in [ 6, 8, 10, 12, 14 ]:
age_u = age_l + 4
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['{:1.0f} to {:1.0f}'.format(age_l, age_u)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}; Thr: {:1.0f}%'.format(n, motion_thresh))
Explanation: Woah - that's interesting. In this sample we seem to only be able to detect a movement relationship for a 5 year age range (remember that the upper and lower limits are inclusive) when the participants are either 10-14 or 12-16 years old!
Is this pattern related to the threshold? What if we change that?
End of explanation
motion_measure='func_perc_fd'
n = 30
n_perms = 100
motion_thresh = 25
for motion_thresh in [ 5, 10, 25, 50 ]:
corr_age_df = pd.DataFrame()
for age_l in [ 6, 8, 10, 12, 14 ]:
age_u = age_l + 4
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['{:1.0f} to {:1.0f}'.format(age_l, age_u)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}; Thr: {:1.0f}%'.format(n, motion_thresh))
Explanation: So, this to me is the crazy bit that I need to get my head around: there are different relationships with age for different thresholds, which means, I think, that any of our results will change according to the thresholds we apply.
Now, I also want to see if we get the same pattern with a smaller number of participants in our cohort (you can see that we have fewer than 100 people in the very youngest group).
End of explanation |
2,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What about writing SVG inside a cell in IPython or Jupyter
Step1: let's create a very simple SVG file
Step2: Now let's create a Svg Scene based
inspired from Isendrak Skatasmid code at | Python Code:
%config InlineBackend.figure_format = 'svg'
url_svg = 'http://clipartist.net/social/clipartist.net/B/base_tux_g_v_linux.svg'
from IPython.display import SVG, display, HTML
# testing svg inside jupyter next one does not support width parameter at the time of writing
#display(SVG(url=url_svg))
display(HTML('<img src="' + url_svg + '" width=150 height=50/>'))
Explanation: What about writing SVG inside a cell in IPython or Jupyter
End of explanation
%%writefile basic_circle.svg
<svg xmlns="http://www.w3.org/2000/svg">
<circle id="greencircle" cx="30" cy="30" r="30" fill="blue" />
</svg>
url_svg = 'basic_circle.svg'
HTML('<img src="' + url_svg + '" width=70 height=70/>')
Explanation: let's create a very simple SVG file
End of explanation
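# Aside (not in the original notebook): IPython.display.SVG can render the file
# directly as well; the HTML <img> approach above is used because it lets you set
# width/height explicitly.
display(SVG(filename='basic_circle.svg'))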
class SvgScene:
def __init__(self,name="svg",height=500,width=500):
self.name = name
self.items = []
self.height = height
self.width = width
return
def add(self,item): self.items.append(item)
def strarray(self):
var = ["<?xml version=\"1.0\"?>\n",
"<svg height=\"%d\" width=\"%d\" xmlns=\"http://www.w3.org/2000/svg\" >\n" % (self.height,self.width),
" <g style=\"fill-opacity:1.0; stroke:black;\n",
" stroke-width:1;\">\n"]
for item in self.items: var += item.strarray()
var += [" </g>\n</svg>\n"]
return var
def write_svg(self,filename=None):
if filename:
self.svgname = filename
else:
self.svgname = self.name + ".svg"
file = open(self.svgname,'w')
file.writelines(self.strarray())
file.close()
return
def display(self):
url_svg = self.svgname
display(HTML('<img src="' + url_svg + '" width=' + str(self.width) + ' height=' + str(self.height) + '/>'))
return
class Line:
def __init__(self,start,end,color,width):
self.start = start
self.end = end
self.color = color
self.width = width
return
def strarray(self):
return [" <line x1=\"%d\" y1=\"%d\" x2=\"%d\" y2=\"%d\" style=\"stroke:%s;stroke-width:%d\"/>\n" %\
(self.start[0],self.start[1],self.end[0],self.end[1],colorstr(self.color),self.width)]
class Circle:
def __init__(self,center,radius,fill_color,line_color,line_width):
self.center = center
self.radius = radius
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
return
def strarray(self):
return [" <circle cx=\"%d\" cy=\"%d\" r=\"%d\"\n" %\
(self.center[0],self.center[1],self.radius),
" style=\"fill:%s;stroke:%s;stroke-width:%d\" />\n" % (colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Ellipse:
def __init__(self,center,radius_x,radius_y,fill_color,line_color,line_width):
self.center = center
        self.radius_x = radius_x
        self.radius_y = radius_y
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
def strarray(self):
return [" <ellipse cx=\"%d\" cy=\"%d\" rx=\"%d\" ry=\"%d\"\n" %\
(self.center[0],self.center[1],self.radius_x,self.radius_y),
" style=\"fill:%s;stroke:%s;stroke-width:%d\"/>\n" % (colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Polygon:
def __init__(self,points,fill_color,line_color,line_width):
self.points = points
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
def strarray(self):
polygon="<polygon points=\""
for point in self.points:
polygon+=" %d,%d" % (point[0],point[1])
return [polygon,\
"\" \nstyle=\"fill:%s;stroke:%s;stroke-width:%d\"/>\n" %\
(colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Rectangle:
def __init__(self,origin,height,width,fill_color,line_color,line_width):
self.origin = origin
self.height = height
self.width = width
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
return
def strarray(self):
return [" <rect x=\"%d\" y=\"%d\" height=\"%d\"\n" %\
(self.origin[0],self.origin[1],self.height),
" width=\"%d\" style=\"fill:%s;stroke:%s;stroke-width:%d\" />\n" %\
(self.width,colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Text:
def __init__(self,origin,text,size,color):
self.origin = origin
self.text = text
self.size = size
self.color = color
return
def strarray(self):
return [" <text x=\"%d\" y=\"%d\" font-size=\"%d\" fill=\"%s\">\n" %\
(self.origin[0],self.origin[1],self.size,colorstr(self.color)),
" %s\n" % self.text,
" </text>\n"]
def colorstr(rgb): return "#%x%x%x" % (rgb[0]//16,rgb[1]//16,rgb[2]//16)  # integer division so %x receives ints (Python 3)
scene = SvgScene("test",300,300)
scene.add(Rectangle((100,100),200,200,(0,255,255),(0,0,0),1))
scene.add(Line((200,200),(200,300),(0,0,0),1))
scene.add(Line((200,200),(300,200),(0,0,0),1))
scene.add(Line((200,200),(100,200),(0,0,0),1))
scene.add(Line((200,200),(200,100),(0,0,0),1))
scene.add(Circle((200,200),30,(0,0,255),(0,0,0),1))
scene.add(Circle((200,300),30,(0,255,0),(0,0,0),1))
scene.add(Circle((300,200),30,(255,0,0),(0,0,0),1))
scene.add(Circle((100,200),30,(255,255,0),(0,0,0),1))
scene.add(Circle((200,100),30,(255,0,255),(0,0,0),1))
scene.add(Text((50,50),"Testing SVG 1",24,(0,0,0)))
scene.write_svg()
scene.display()
Explanation: Now let's create an SvgScene class,
inspired by Isendrak Skatasmid's code at:
http://code.activestate.com/recipes/578123-draw-svg-images-in-python-python-recipe-enhanced-v/
End of explanation |
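# Aside (not part of the original recipe): a quick check that the remaining shape
# classes defined above (Ellipse and Polygon) render too.
scene2 = SvgScene("test2", 200, 200)
scene2.add(Ellipse((100, 100), 80, 40, (200, 200, 255), (0, 0, 0), 1))
scene2.add(Polygon([(100, 20), (180, 180), (20, 180)], (255, 255, 200), (0, 0, 0), 1))
scene2.add(Text((40, 30), "Testing SVG 2", 18, (0, 0, 0)))
scene2.write_svg()
scene2.display()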
2,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Setup up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step11: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard
Step12: Tutorial
Now you are ready to start creating your own AutoML tabular binary classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step13: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config"
Step14: Quick peek at your data
You will use a version of the Bank Marketing dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
You also need for training to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
Step15: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step16: Now save the unique dataset identifier for the Dataset resource instance you created.
Step17: Train the model
Now train an AutoML tabular binary classification model using your Vertex Dataset resource. To train the model, do the following steps
Step18: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step19: Now save the unique identifier of the training pipeline you created.
Step20: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter
Step21: Deployment
Training the above model may take upwards of 30 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step22: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step23: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step24: Now get the unique identifier for the Endpoint resource you created.
Step25: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step26: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step27: Make a online prediction request
Now do a online prediction to your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
Step28: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters
Step29: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters
Step30: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML tabular binary classification model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular binary classification models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the Bank Marketing. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML tabular binary classification model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
End of explanation
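# Aside: to deploy on CPU only (no accelerator), the convention described above would be:
# DEPLOY_GPU, DEPLOY_NGPU = (None, None)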
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular binary classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv"
Explanation: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}
The format for a Cloud Storage path is:
gs://[bucket_name]/[folder(s)/[file]
BigQuery
metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}
The format for a BigQuery path is:
bq://[collection].[dataset].[table]
Note that the uri field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files.
Data preparation
The Vertex Dataset resource for tabular has a couple of requirements for your tabular data.
Must be in a CSV file or a BigQuery query.
CSV
For tabular binary classification, the CSV file has a few requirements:
The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.
All but one column are features.
One column is the label, which you will specify when you subsequently create the training pipeline.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
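# Aside: the create_dataset helper defined below also accepts a BigQuery table via the
# bq:// format described above. The path here is purely illustrative (hypothetical table):
# IMPORT_FILE = "bq://[your-project].[your-dataset].[your-table]"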
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
Explanation: Quick peek at your data
You will use a version of the Bank Marketing dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
You also need for training to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("bank-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates a Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
metadata: The Cloud Storage or BigQuery location of the tabular data.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
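# Aside (illustrative only): rather than blocking on operation.result() right away, the
# operation methods listed in the table above could be used to poll, e.g.:
# while not operation.done():
#     time.sleep(10)
# result = operation.result(timeout=TIMEOUT)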
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML tabular binary classification model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and ran as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
TRANSFORMATIONS = [
{"auto": {"column_name": "Age"}},
{"auto": {"column_name": "Job"}},
{"auto": {"column_name": "MaritalStatus"}},
{"auto": {"column_name": "Education"}},
{"auto": {"column_name": "Default"}},
{"auto": {"column_name": "Balance"}},
{"auto": {"column_name": "Housing"}},
{"auto": {"column_name": "Loan"}},
{"auto": {"column_name": "Contact"}},
{"auto": {"column_name": "Day"}},
{"auto": {"column_name": "Month"}},
{"auto": {"column_name": "Duration"}},
{"auto": {"column_name": "Campaign"}},
{"auto": {"column_name": "PDays"}},
{"auto": {"column_name": "POutcome"}},
]
PIPE_NAME = "bank_pipe-" + TIMESTAMP
MODEL_NAME = "bank_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
prediction_type: Whether we are doing "classification" or "regression".
target_column: The CSV heading column name for the column we want to predict (i.e., the label).
train_budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
transformations: Specifies the feature engineering for each feature column.
For transformations, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to "auto" to tell AutoML to automatically determine it.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (logLoss and auPrc) you will print the result.
End of explanation
ENDPOINT_NAME = "bank_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
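# Aside (hypothetical values, not used below): an auto-scaling setup would keep the
# minimum small and allow more replicas under load, e.g.:
# MIN_NODES = 1
# MAX_NODES = 3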
DEPLOYED_NAME = "bank_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
dedicated_resources: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling the response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
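# Aside (hypothetical, not executed): splitting traffic between an existing deployed
# model and this new one, as in the v1/v2 example above. "1234567890" is a made-up
# stand-in for the deployed model id of the existing model.
# canary_split = {"0": 10, "1234567890": 90}
# deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id, traffic_split=canary_split)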
INSTANCE = {
"Age": "58",
"Job": "managment",
"MaritalStatus": "married",
"Education": "teritary",
"Default": "no",
"Balance": "2143",
"Housing": "yes",
"Loan": "no",
"Contact": "unknown",
"Day": "5",
"Month": "may",
"Duration": "261",
"Campaign": "1",
"PDays": "-1",
"Previous": 0,
"POutcome": "unknown",
}
Explanation: Make a online prediction request
Now do a online prediction to your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [data]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(INSTANCE, endpoint_id, None)
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
filename: The Cloud Storage path to the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, tabular models do not support additional parameters.
Request
The format of each instance is, where values must be specified as a string:
{ 'feature_1': 'value_1', 'feature_2': 'value_2', ... }
Since the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.
Response
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction -- in this case there is just one:
confidences: Confidence level in the prediction.
displayNames: The predicted label.
End of explanation
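# Aside (a sketch, assuming the response fields described above; the exact key names
# may differ for tabular models): pick the highest-confidence class from one prediction.
def top_class(prediction):
    pairs = zip(prediction["displayNames"], prediction["confidences"])
    return max(pairs, key=lambda p: p[1])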
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
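If an Endpoint resource still had several deployed models, the remaining traffic would instead be described as a mapping from deployed model IDs to integer percentages summing to 100. The snippet below is only an illustrative sketch with placeholder IDs, not values produced by this tutorial:
# Hypothetical traffic split across two models that remain deployed.
# The keys are placeholder deployed_model_id values, not real identifiers.
remaining_traffic_split = {
    "1234567890": 80,
    "0987654321": 20,
}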
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
2,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Functions
Helper functions which will be used later
Step2: Dataset
Now we will create the dataset. We sample the theta_true random variable (the probability of heads) from the prior distribution, which is Beta in this case. Then we sample n_samples coin tosses from the likelihood distribution, which is Bernoulli in this case.
Step3: Prior, Likelihood, and True Posterior
For the coin toss problem, since we know the closed-form solution of the posterior, we compare the distributions of the Prior, Likelihood, and True Posterior below.
Step4: Optimizing the ELBO
In order to minimize KL divergence between true posterior and variational distribution, we need to minimize the negative ELBO, as we describe below.
We start with the ELBO, which is given by
Step5: We now apply stochastic gradient descent to minimize negative ELBO and optimize the variational parameters (loc and scale)
Step6: We now plot the ELBO
Step7: We can see that after about 200 iterations the ELBO has stabilized and is no longer changing much.
Samples using Optimized parameters
Now, we take 1000 samples from the variational distribution (Normal) and transform them into the true posterior distribution (Beta) by applying transform_fn (sigmoid) to the samples. Then we compare the density of the samples with the exact posterior.
Step8: We can see that the learned q(x) is a reasonably good approximation to the true posterior. It seems to have support over negative theta but this is an artefact of KDE.
Step9: Comparison with pymc.ADVI()
Now, we compare our implementation with pymc's ADVI implementation.
Note
Step10: True posterior, JAX q(x), and pymc q(x)
Step11: Plot of loc and scale for variational distribution | Python Code:
try:
import jax
except ModuleNotFoundError:
%pip install -qqq jax jaxlib
import jax
import jax.numpy as jnp
from jax import lax
try:
from tensorflow_probability.substrates import jax as tfp
except ModuleNotFoundError:
%pip install -qqq tensorflow_probability
from tensorflow_probability.substrates import jax as tfp
try:
import optax
except ModuleNotFoundError:
%pip install -qqq optax
import optax
try:
from rich import print
except ModuleNotFoundError:
%pip install -qqq rich
from rich import print
try:
from tqdm import trange
except:
%pip install -qqq tqdm
from tqdm import trange
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
dist = tfp.distributions
plt.rc("font", size=10) # controls default text sizes
plt.rc("axes", labelsize=12) # fontsize of the x and y labels
plt.rc("legend", fontsize=12) # legend fontsize
plt.rc("figure", titlesize=15) # fontsize of the figure title
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/advi_beta_binom_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
ADVI from scratch in JAX
Authors: karm-patel@, murphyk@
In this notebook we apply ADVI (automatic differentiation variational inference) to the beta-binomial model, using a Normal Distribution as Variational Posterior. This involves a change of variable from the unconstrained z in R space to the constrained theta in [0,1] space.
End of explanation
def prior_dist():
return dist.Beta(concentration1=1.0, concentration0=1.0)
def likelihood_dist(theta):
return dist.Bernoulli(probs=theta)
def transform_fn(x):
return 1 / (1 + jnp.exp(-x)) # sigmoid
def positivity_fn(x):
return jnp.log(1 + jnp.exp(x)) # softplus
def variational_distribution_q(params):
loc = params["loc"]
scale = positivity_fn(params["scale"]) # apply softplus
return dist.Normal(loc, scale)
jacobian_fn = jax.jacfwd(transform_fn)  # define the function used to compute the Jacobian of transform_fn
Explanation: Functions
Helper functions which will be used later
End of explanation
# preparing dataset
# key = jax.random.PRNGKey(128)
# n_samples = 12
# theta_true = prior_dist().sample((5,),key)[0]
# dataset = likelihood_dist(theta_true).sample(n_samples,key)
# print(f"Dataset: {dataset}")
# n_heads = dataset.sum()
# n_tails = n_samples - n_heads
# Use same data as https://github.com/probml/probml-notebooks/blob/main/notebooks/beta_binom_approx_post_pymc.ipynb
key = jax.random.PRNGKey(128)
dataset = np.repeat([0, 1], (10, 1))
n_samples = len(dataset)
print(f"Dataset: {dataset}")
n_heads = dataset.sum()
n_tails = n_samples - n_heads
Explanation: Dataset
Now we will create the dataset. We sample the theta_true random variable (the probability of heads) from the prior distribution, which is Beta in this case. Then we sample n_samples coin tosses from the likelihood distribution, which is Bernoulli in this case.
End of explanation
# closed form of beta posterior
a = prior_dist().concentration1
b = prior_dist().concentration0
exact_posterior = dist.Beta(concentration1=a + n_heads, concentration0=b + n_tails)
theta_range = jnp.linspace(0.01, 0.99, 100)
ax = plt.gca()
ax2 = ax.twinx()
(plt2,) = ax2.plot(theta_range, exact_posterior.prob(theta_range), "g--", label="True Posterior")
(plt3,) = ax2.plot(theta_range, prior_dist().prob(theta_range), label="Prior")
likelihood = jax.vmap(lambda x: jnp.prod(likelihood_dist(x).prob(dataset)))(theta_range)
(plt1,) = ax.plot(theta_range, likelihood, "r-.", label="Likelihood")
ax.set_xlabel("theta")
ax.set_ylabel("Likelihood")
ax2.set_ylabel("Prior & Posterior")
ax2.legend(handles=[plt1, plt2, plt3], bbox_to_anchor=(1.6, 1));
Explanation: Prior, Likelihood, and True Posterior
For the coin toss problem, since we know the closed-form solution of the posterior, we compare the distributions of the Prior, Likelihood, and True Posterior below.
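As a quick reminder of the conjugacy result used here: with a $\text{Beta}(a, b)$ prior on $\theta$ and $N$ Bernoulli observations containing $h$ heads, the posterior is again a Beta distribution,
$$p(\theta \mid \mathcal{D}) = \text{Beta}(a + h,\; b + N - h),$$
which is exactly what exact_posterior computes above with $h$ equal to n_heads and $N - h$ equal to n_tails.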
End of explanation
def log_prior_likelihood_jacobian(normal_sample, dataset):
theta = transform_fn(normal_sample) # transform normal sample to beta sample
likelihood_log_prob = likelihood_dist(theta).log_prob(dataset).sum() # log probability of likelihood
prior_log_prob = prior_dist().log_prob(theta) # log probability of prior
log_det_jacob = jnp.log(
jnp.abs(jnp.linalg.det(jacobian_fn(normal_sample).reshape(1, 1)))
) # log of determinant of jacobian
return likelihood_log_prob + prior_log_prob + log_det_jacob
# reference: https://code-first-ml.github.io/book2/notebooks/introduction/variational.html
def negative_elbo(params, dataset, n_samples=10, key=jax.random.PRNGKey(1)):
q = variational_distribution_q(params) # Normal distribution.
q_loc, q_scale = q.loc, q.scale
std_normal = dist.Normal(0, 1)
sample_set = std_normal.sample(
seed=key,
sample_shape=[
n_samples,
],
)
sample_set = q_loc + q_scale * sample_set # reparameterization trick
# calculate log joint for each sample of z
p_log_prob = jax.vmap(log_prior_likelihood_jacobian, in_axes=(0, None))(sample_set, dataset)
return jnp.mean(q.log_prob(sample_set) - p_log_prob)
Explanation: Optimizing the ELBO
In order to minimize KL divergence between true posterior and variational distribution, we need to minimize the negative ELBO, as we describe below.
We start with the ELBO, which is given by:
\begin{align}
ELBO(\psi) &= E_{z \sim q(z|\psi)} \left[
\log p(\mathcal{D}|z) + \log p(z) - \log q(z|\psi) \right]
\end{align}
where
$\psi = (\mu, \sigma)$ are the variational parameters,
$p(\mathcal{D}|z) = p(\mathcal{D}|\theta=\sigma(z))$
is the likelihood,
and the prior is given by the change of variables formula:
\begin{align}
p(z) &= p(\theta) | \frac{\partial \theta}{\partial z} |
= p(\theta) | J |
\end{align}
where $J$ is the Jacobian of the $z \rightarrow \theta$ mapping.
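For the scalar sigmoid transform used in this notebook, $\theta = \sigma(z)$, the Jacobian has a simple closed form, so the log-determinant term that the code evaluates numerically with jacobian_fn could equivalently be written as
$$\log |J| = \log \frac{d\theta}{dz} = \log \sigma(z) + \log\left(1 - \sigma(z)\right).$$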
We will use a Monte Carlo approximation of the expectation over $z$.
We also apply the reparameterization trick
to replace $z \sim q(z|\psi)$ with
\begin{align}
\epsilon &\sim \mathcal{N}(0,1 ) \
z &= \mu + \sigma \epsilon
\end{align}
Putting it all together, our estimate of the negative ELBO (for a single sample of $\epsilon$) is
\begin{align}
-L(\psi; z) &= -( \log p(\mathcal{D}|\theta )
+\log p( \theta) + \log|J_\boldsymbol{\sigma}(z)|)
+ \log q(z|\psi)
\end{align}
End of explanation
loss_and_grad_fn = jax.value_and_grad(negative_elbo, argnums=(0))
loss_and_grad_fn = jax.jit(loss_and_grad_fn) # jit the loss_and_grad function
params = {"loc": 0.0, "scale": 0.5}
elbo, grads = loss_and_grad_fn(params, dataset)
print(f"loss: {elbo}")
print(f"grads:\n loc: {grads['loc']}\n scale: {grads['scale']} ")
optimizer = optax.adam(learning_rate=0.01)
opt_state = optimizer.init(params)
# jax scannable function for training
def train_step(carry, data_output):
# take carry data
key = carry["key"]
elbo = carry["elbo"]
grads = carry["grads"]
params = carry["params"]
opt_state = carry["opt_state"]
updates = carry["updates"]
# training
key, subkey = jax.random.split(key)
elbo, grads = loss_and_grad_fn(params, dataset, key=subkey)
updates, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(params, updates)
# forward carry to next iteration by storing it
carry = {"key": subkey, "elbo": elbo, "grads": grads, "params": params, "opt_state": opt_state, "updates": updates}
output = {"elbo": elbo, "params": params}
return carry, output
%%time
# dummy iteration to pass carry to jax scannale function train()
key, subkey = jax.random.split(key)
elbo, grads = loss_and_grad_fn(params, dataset, key=subkey)
updates, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(params, updates)
carry = {"key": key, "elbo": elbo, "grads": grads, "params": params, "opt_state": opt_state, "updates": updates}
num_iter = 1000
elbos = np.empty(num_iter)
# apply scan() to optimize training loop
last_carry, output = lax.scan(train_step, carry, elbos)
elbo = output["elbo"]
params = output["params"]
optimized_params = last_carry["params"]
print(params["loc"].shape)
print(params["scale"].shape)
Explanation: We now apply stochastic gradient descent to minimize negative ELBO and optimize the variational parameters (loc and scale)
End of explanation
plt.plot(elbo)
plt.xlabel("Iterations")
plt.ylabel("Negative ELBO")
sns.despine()
plt.savefig("advi_beta_binom_jax_loss.pdf")
Explanation: We now plot the ELBO
End of explanation
q_learned = variational_distribution_q(optimized_params)
key = jax.random.PRNGKey(128)
q_learned_samples = q_learned.sample(1000, seed=key) # q(z|D)
transformed_samples = transform_fn(q_learned_samples) # transform Normal samples into Beta samples
theta_range = jnp.linspace(0.01, 0.99, 100)
plt.plot(theta_range, exact_posterior.prob(theta_range), "r", label="$p(x)$: true posterior")
sns.kdeplot(transformed_samples, color="blue", label="$q(x)$: learned", bw_adjust=1.5, clip=(0.0, 1.0), linestyle="--")
plt.xlabel("theta")
plt.legend() # bbox_to_anchor=(1.5, 1));
sns.despine()
plt.savefig("advi_beta_binom_jax_posterior.pdf")
Explanation: We can see that after about 200 iterations the ELBO has stabilized and is no longer changing much.
Samples using Optimized parameters
Now, we take 1000 samples from the variational distribution (Normal) and transform them into the true posterior distribution (Beta) by applying transform_fn (sigmoid) to the samples. Then we compare the density of the samples with the exact posterior.
End of explanation
# print(transformed_samples)
print(len(transformed_samples))
print(jnp.sum(transformed_samples < 0)) # all samples of thetas should be in [0,1]
print(jnp.sum(transformed_samples > 1)) # all samples of thetas should be in [0,1]
print(q_learned)
print(q_learned.mean())
print(jnp.sqrt(q_learned.variance()))
locs, scales = params["loc"], params["scale"]
sigmas = positivity_fn(jnp.array(scales))
plt.plot(locs, label="mu")
plt.xlabel("Iterations")
plt.ylabel("$E_q[z]$")
plt.legend()
sns.despine()
plt.savefig("advi_beta_binom_jax_post_mu_vs_time.pdf")
plt.show()
plt.plot(sigmas, label="sigma")
plt.xlabel("Iterations")
# plt.ylabel(r'$\sqrt{\text{var}(z)}')
plt.ylabel("$std_{q}[z]$")
plt.legend()
sns.despine()
plt.savefig("advi_beta_binom_jax_post_sigma_vs_time.pdf")
plt.show()
Explanation: We can see that the learned q(x) is a reasonably good approximation to the true posterior. It seems to have support over negative theta but this is an artefact of KDE.
End of explanation
try:
import pymc3 as pm
except ModuleNotFoundError:
%pip install -qq pymc3
import pymc3 as pm
try:
import scipy.stats as stats
except ModuleNotFoundError:
%pip install -qq scipy
import scipy.stats as stats
import scipy.special as sp
try:
import arviz as az
except ModuleNotFoundError:
%pip install -qq arviz
import arviz as az
import math
a = prior_dist().concentration1
b = prior_dist().concentration0
with pm.Model() as mf_model:
theta = pm.Beta("theta", a, b)
y = pm.Binomial("y", n=1, p=theta, observed=dataset) # Bernoulli
advi = pm.ADVI()
tracker = pm.callbacks.Tracker(
mean=advi.approx.mean.eval, # callable that returns mean
std=advi.approx.std.eval, # callable that returns std
)
approx = advi.fit(callbacks=[tracker], n=20000)
trace_approx = approx.sample(1000)
thetas = trace_approx["theta"]
plt.plot(advi.hist, label="ELBO")
plt.xlabel("Iterations")
plt.ylabel("ELBO")
plt.legend()
sns.despine()
plt.savefig("advi_beta_binom_pymc_loss.pdf")
plt.show()
print(f"ELBO comparison for last 1% iterations:\nJAX ELBO: {elbo[-10:].mean()}\nPymc ELBO: {advi.hist[-100:].mean()}")
Explanation: Comparison with pymc.ADVI()
Now, we compare our implementation with pymc's ADVI implementation.
Note: for the pymc implementation, the code is taken from this notebook: https://github.com/probml/probml-notebooks/blob/main/notebooks/beta_binom_approx_post_pymc.ipynb
End of explanation
plt.plot(theta_range, exact_posterior.prob(theta_range), "b--", label="$p(x)$: True Posterior")
sns.kdeplot(transformed_samples, color="red", label="$q(x)$: learnt - jax", clip=(0.0, 1.0), bw_adjust=1.5)
sns.kdeplot(thetas, label="$q(x)$: learnt - pymc", clip=(0.0, 1.0), bw_adjust=1.5)
plt.xlabel("theta")
plt.legend(bbox_to_anchor=(1.3, 1))
sns.despine()
Explanation: True posterior, JAX q(x), and pymc q(x)
End of explanation
fig1, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
locs, scales = params["loc"], params["scale"]
# plot loc
# JAX
ax1.plot(locs, label="JAX: loc")
ax1.set_ylabel("loc")
ax1.legend()
# pymc
ax2.plot(tracker["mean"], label="Pymc: loc")
ax2.legend()
sns.despine()
# plot scale
fig2, (ax3, ax4) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
# JAX
ax3.plot(positivity_fn(jnp.array(scales)), label="JAX: scale")
# apply softplus on scale
ax3.set_xlabel("Iterations")
ax3.set_ylabel("scale")
ax3.legend()
# pymc
ax4.plot(tracker["std"], label="Pymc: scale")
ax4.set_xlabel("Iterations")
ax4.legend()
sns.despine();
Explanation: Plot of loc and scale for variational distribution
End of explanation |
2,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Survival Analysis with scikit-survival
scikit-survival is a Python module for survival analysis built on top of scikit-learn. It allows doing survival analysis while utilizing the power of scikit-learn, e.g., for pre-processing or doing cross-validation.
Table of Contents
What is Survival Analysis?
The Veterans' Administration Lung Cancer Trial
Survival Data
The Survival Function
Considering other variables by stratification
Multivariate Survival Models
Measuring the Performance of Survival Models
Feature Selection
Step1: We can easily see that only a few survival times are right-censored (Status is False), i.e., most veterans died during the study period (Status is True).
The Survival Function
A key quantity in survival analysis is the so-called survival function, which relates time to the probability of surviving beyond a given time point.
Let $T$ denote a continuous non-negative random variable corresponding to a patient’s survival time. The survival function $S(t)$ returns the probability of survival beyond time $t$ and is defined as
$$ S(t) = P (T > t). $$
If we observed the exact survival time of all subjects, i.e., everyone died before the study ended, the survival function at time $t$ can simply be estimated by the ratio of patients surviving beyond time $t$ and the total number of patients
Step2: Using the formula from above, we can compute $\hat{S}(t=11) = \frac{3}{5}$, but not $\hat{S}(t=30)$, because we don't know whether the 4th patient is still alive at $t = 30$, all we know is that when we last checked at $t = 25$, the patient was still alive.
An estimator, similar to the one above, that is valid if survival times are right-censored is the Kaplan-Meier estimator.
Step3: The estimated curve is a step function, with steps occurring at time points where one or more patients died. From the plot we can see that most patients died in the first 200 days, as indicated by the steep slope of the estimated survival function in the first 200 days.
Considering other variables by stratification
Survival functions by treatment
Patients enrolled in the Veterans' Administration Lung Cancer Trial were randomized to one of two treatments
Step4: Roughly half the patients received the alternative treatment.
The obvious question to ask is
Step5: Unfortunately, the results are inconclusive, because the difference between the two estimated survival functions is too small to confidently argue that the drug affects survival or not.
Sidenote
Step6: In this case, we observe a pronounced difference between two groups. Patients with squamous or large cells seem to have a better prognosis compared to patients with small or adeno cells.
Multivariate Survival Models
In the Kaplan-Meier approach used above, we estimated multiple survival curves by dividing the dataset into smaller sub-groups according to a variable. If we want to consider more than 1 or 2 variables, this approach quickly becomes infeasible, because subgroups will get very small. Instead, we can use a linear model, Cox's proportional hazard's model, to estimate the impact each variable has on survival.
First however, we need to convert the categorical variables in the data set into numeric values.
Step7: Survival models in scikit-survival follow the same rules as estimators in scikit-learn, i.e., they have a fit method, which expects a data matrix and a structured array of survival times and binary event indicators.
Step8: The result is a vector of coefficients, one for each variable, where each value corresponds to the log hazard ratio.
Step9: Using the fitted model, we can predict a patient-specific survival function, by passing an appropriate data matrix to the estimator's predict_survival_function method.
First, let's create a set of four synthetic patients.
Step10: Similar to kaplan_meier_estimator, the predict_survival_function method returns a sequence of step functions, which we can plot.
Step11: Measuring the Performance of Survival Models
Once we fit a survival model, we usually want to assess how well a model can actually predict survival. Our test data is usually subject to censoring too, therefore metrics like root mean squared error or correlation are unsuitable. Instead, we use a generalization of the area under the receiver operating characteristic (ROC) curve called Harrell's concordance index or c-index.
The interpretation is identical to the traditional area under the ROC curve metric for binary classification
Step12: or alternatively
Step13: Our model's c-index indicates that the model clearly performs better than random, but is also far from perfect.
Feature Selection
Step14: Karnofsky_score is the best variable, whereas Months_from_Diagnosis and Prior_therapy='yes' have almost no predictive power on their own.
Next, we want to build a parsimonious model by excluding irrelevant features. We could use the ranking from above, but would need to determine what the optimal cut-off should be. Luckily, scikit-learn has built-in support for performing grid search.
First, we create a pipeline that puts all the parts together.
Step15: Next, we need to define the range of parameters we want to explore during grid search. Here, we want to optimize the parameter k of the SelectKBest class and allow k to vary from 1 feature to all 8 features.
Step16: The results show that it is sufficient to select the 3 most predictive features. | Python Code:
from sksurv.datasets import load_veterans_lung_cancer
data_x, data_y = load_veterans_lung_cancer()
data_y
Explanation: Introduction to Survival Analysis with scikit-survival
scikit-survival is a Python module for survival analysis built on top of scikit-learn. It allows doing survival analysis while utilizing the power of scikit-learn, e.g., for pre-processing or doing cross-validation.
Table of Contents
What is Survival Analysis?
The Veterans' Administration Lung Cancer Trial
Survival Data
The Survival Function
Considering other variables by stratification
Multivariate Survival Models
Measuring the Performance of Survival Models
Feature Selection: Which Variable is Most Predictive?
What's next?
What is Survival Analysis?
The objective in survival analysis — also referred to as reliability analysis in engineering — is to establish a connection between covariates and the time of an event. The name survival analysis originates from clinical research, where predicting the time to death, i.e., survival, is often the main objective. Survival analysis is a type of regression problem (one wants to predict a continuous value), but with a twist. It differs from traditional regression by the fact that parts of the training data can only be partially observed – they are censored.
As an example, consider a clinical study, which investigates coronary heart disease and has been carried out over a 1 year period as in the figure below.
Patient A was lost to follow-up after three months with no recorded cardiovascular event, patient B experienced an event four and a half months after enrollment, patient D withdrew from the study two months after enrollment, and patient E did not experience any event before the study ended. Consequently, the exact time of a cardiovascular event could only be recorded for patients B and C; their records are uncensored. For the remaining patients it is unknown whether they did or did not experience an event after termination of the study. The only valid information that is available for patients A, D, and E is that they were event-free up to their last follow-up. Therefore, their records are censored.
Formally, each patient record consists of a set of covariates $x \in \mathbb{R}^d$, and the time $t>0$ when an event occurred or the time $c>0$ of censoring. Since censoring and experiencing an event are mutually exclusive, it is common to define an event indicator $\delta \in \{0;1\}$ and the observable survival time $y>0$. The observable time $y$ of a right-censored sample is defined as
$$
y = \min(t, c) =
\begin{cases}
t & \text{if } \delta = 1 , \
c & \text{if } \delta = 0 .
\end{cases}
$$
Consequently, survival analysis demands models that take this unique characteristic of such a dataset into account, some of which are showcased below.
The Veterans' Administration Lung Cancer Trial
The Veterans' Administration Lung Cancer Trial is a randomized trial of two treatment regimens for lung cancer. The data set (Kalbfleisch J. and Prentice R, (1980) The Statistical Analysis of Failure Time Data. New York: Wiley) consists of 137 patients and 8 variables, which are described below:
Treatment: denotes the type of lung cancer treatment; standard and test drug.
Celltype: denotes the type of cell involved; squamous, small cell, adeno, large.
Karnofsky_score: is the Karnofsky score.
Diag: is the time since diagnosis in months.
Age: is the age in years.
Prior_Therapy: denotes any prior therapy; none or yes.
Status: denotes the status of the patient as dead or alive; dead or alive.
Survival_in_days: is the survival time in days since the treatment.
Our primary interest is studying whether there are subgroups that differ in survival and whether we can predict survival times.
Survival Data
As described in the section What is Survival Analysis? above, survival times are subject to right-censoring, therefore we need to consider an individual's status in addition to survival time. To be fully compatible with scikit-learn, Status and Survival_in_days need to be stored as a structured array with the first field indicating whether the actual survival time was observed or if it was censored, and the second field denoting the observed survival time, which corresponds to the time of death (if Status == 'dead', $\delta = 1$) or the last time that person was contacted (if Status == 'alive', $\delta = 0$).
End of explanation
import pandas as pd
pd.DataFrame.from_records(data_y[[11, 5, 32, 13, 23]], index=range(1, 6))
Explanation: We can easily see that only a few survival times are right-censored (Status is False), i.e., most veterans died during the study period (Status is True).
The Survival Function
A key quantity in survival analysis is the so-called survival function, which relates time to the probability of surviving beyond a given time point.
Let $T$ denote a continuous non-negative random variable corresponding to a patient’s survival time. The survival function $S(t)$ returns the probability of survival beyond time $t$ and is defined as
$$ S(t) = P (T > t). $$
If we observed the exact survival time of all subjects, i.e., everyone died before the study ended, the survival function at time $t$ can simply be estimated by the ratio of patients surviving beyond time $t$ and the total number of patients:
$$
\hat{S}(t) = \frac{ \text{number of patients surviving beyond $t$} }{ \text{total number of patients} }
$$
In the presence of censoring, this estimator cannot be used, because the numerator is not always defined. For instance, consider the following set of patients:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from sksurv.nonparametric import kaplan_meier_estimator
time, survival_prob = kaplan_meier_estimator(data_y["Status"], data_y["Survival_in_days"])
plt.step(time, survival_prob, where="post")
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
Explanation: Using the formula from above, we can compute $\hat{S}(t=11) = \frac{3}{5}$, but not $\hat{S}(t=30)$, because we don't know whether the 4th patient is still alive at $t = 30$, all we know is that when we last checked at $t = 25$, the patient was still alive.
An estimator, similar to the one above, that is valid if survival times are right-censored is the Kaplan-Meier estimator.
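Concretely, if $t_1 < t_2 < \dots$ denote the distinct event times, $d_i$ the number of deaths at $t_i$, and $n_i$ the number of individuals still at risk just before $t_i$, the Kaplan-Meier estimate is the product
$$\hat{S}(t) = \prod_{i:\, t_i \le t} \left( 1 - \frac{d_i}{n_i} \right).$$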
End of explanation
data_x["Treatment"].value_counts()
Explanation: The estimated curve is a step function, with steps occurring at time points where one or more patients died. From the plot we can see that most patients died in the first 200 days, as indicated by the steep slope of the estimated survival function in the first 200 days.
Considering other variables by stratification
Survival functions by treatment
Patients enrolled in the Veterans' Administration Lung Cancer Trial were randomized to one of two treatments: standard and a new test drug. Next, let's have a look at how many patients underwent the standard treatment and how many received the new drug.
End of explanation
for treatment_type in ("standard", "test"):
mask_treat = data_x["Treatment"] == treatment_type
time_treatment, survival_prob_treatment = kaplan_meier_estimator(
data_y["Status"][mask_treat],
data_y["Survival_in_days"][mask_treat])
plt.step(time_treatment, survival_prob_treatment, where="post",
label="Treatment = %s" % treatment_type)
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
Explanation: Roughly half the patients received the alternative treatment.
The obvious question to ask is:
Is there any difference in survival between the two treatment groups?
As a first attempt, we can estimate the survival function in both treatment groups separately.
End of explanation
for value in data_x["Celltype"].unique():
mask = data_x["Celltype"] == value
time_cell, survival_prob_cell = kaplan_meier_estimator(data_y["Status"][mask],
data_y["Survival_in_days"][mask])
plt.step(time_cell, survival_prob_cell, where="post",
label="%s (n = %d)" % (value, mask.sum()))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
Explanation: Unfortunately, the results are inconclusive, because the difference between the two estimated survival functions is too small to confidently argue that the drug affects survival or not.
Sidenote: Visually comparing estimated survival curves in order to assess whether there is a difference in survival between groups is usually not recommended, because it is highly subjective. Statistical tests such as the log-rank test are usually more appropriate.
Survival functions by cell type
Next, let's have a look at the cell type, which has been recorded as well, and repeat the analysis from above.
End of explanation
from sksurv.preprocessing import OneHotEncoder
data_x_numeric = OneHotEncoder().fit_transform(data_x)
data_x_numeric.head()
Explanation: In this case, we observe a pronounced difference between two groups. Patients with squamous or large cells seem to have a better prognosis compared to patients with small or adeno cells.
Multivariate Survival Models
In the Kaplan-Meier approach used above, we estimated multiple survival curves by dividing the dataset into smaller sub-groups according to a variable. If we want to consider more than 1 or 2 variables, this approach quickly becomes infeasible, because subgroups will get very small. Instead, we can use a linear model, Cox's proportional hazard's model, to estimate the impact each variable has on survival.
First however, we need to convert the categorical variables in the data set into numeric values.
End of explanation
from sksurv.linear_model import CoxPHSurvivalAnalysis
estimator = CoxPHSurvivalAnalysis()
estimator.fit(data_x_numeric, data_y)
Explanation: Survival models in scikit-survival follow the same rules as estimators in scikit-learn, i.e., they have a fit method, which expects a data matrix and a structured array of survival times and binary event indicators.
End of explanation
pd.Series(estimator.coef_, index=data_x_numeric.columns)
Explanation: The result is a vector of coefficients, one for each variable, where each value corresponds to the log hazard ratio.
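Since these are log hazard ratios, exponentiating them yields the more interpretable hazard ratios. A minimal sketch reusing the fitted estimator from above:
import numpy as np
# exp(coef) turns log hazard ratios into hazard ratios:
# values above 1 indicate an increased hazard, values below 1 a decreased hazard.
hazard_ratios = pd.Series(np.exp(estimator.coef_), index=data_x_numeric.columns)
hazard_ratios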
End of explanation
x_new = pd.DataFrame.from_dict({
1: [65, 0, 0, 1, 60, 1, 0, 1],
2: [65, 0, 0, 1, 60, 1, 0, 0],
3: [65, 0, 1, 0, 60, 1, 0, 0],
4: [65, 0, 1, 0, 60, 1, 0, 1]},
columns=data_x_numeric.columns, orient='index')
x_new
Explanation: Using the fitted model, we can predict a patient-specific survival function, by passing an appropriate data matrix to the estimator's predict_survival_function method.
First, let's create a set of four synthetic patients.
End of explanation
import numpy as np
pred_surv = estimator.predict_survival_function(x_new)
time_points = np.arange(1, 1000)
for i, surv_func in enumerate(pred_surv):
plt.step(time_points, surv_func(time_points), where="post",
label="Sample %d" % (i + 1))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
Explanation: Similar to kaplan_meier_estimator, the predict_survival_function method returns a sequence of step functions, which we can plot.
End of explanation
from sksurv.metrics import concordance_index_censored
prediction = estimator.predict(data_x_numeric)
result = concordance_index_censored(data_y["Status"], data_y["Survival_in_days"], prediction)
result[0]
Explanation: Measuring the Performance of Survival Models
Once we fit a survival model, we usually want to assess how well a model can actually predict survival. Our test data is usually subject to censoring too, therefore metrics like root mean squared error or correlation are unsuitable. Instead, we use a generalization of the area under the receiver operating characteristic (ROC) curve called Harrell's concordance index or c-index.
The interpretation is identical to the traditional area under the ROC curve metric for binary classification:
- a value of 0.5 denotes a random model,
- a value of 1.0 denotes a perfect model,
- a value of 0.0 denotes a perfectly wrong model.
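More concretely, the c-index is estimated from all comparable pairs of samples, i.e. pairs in which the sample with the shorter observed time actually experienced an event. It is the fraction of such pairs whose predicted risk scores are ordered consistently with the observed survival times, with tied risk scores typically counted as one half:
$$c = \frac{\text{concordant pairs} + \tfrac{1}{2}\,\text{tied pairs}}{\text{comparable pairs}}.$$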
End of explanation
estimator.score(data_x_numeric, data_y)
Explanation: or alternatively
End of explanation
import numpy as np
def fit_and_score_features(X, y):
n_features = X.shape[1]
scores = np.empty(n_features)
m = CoxPHSurvivalAnalysis()
for j in range(n_features):
Xj = X[:, j:j+1]
m.fit(Xj, y)
scores[j] = m.score(Xj, y)
return scores
scores = fit_and_score_features(data_x_numeric.values, data_y)
pd.Series(scores, index=data_x_numeric.columns).sort_values(ascending=False)
Explanation: Our model's c-index indicates that the model clearly performs better than random, but is also far from perfect.
Feature Selection: Which Variable is Most Predictive?
The model above considered all available variables for prediction. Next, we want to investigate which single variable is the best risk predictor. Therefore, we fit a Cox model to each variable individually and record the c-index on the training set.
End of explanation
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
pipe = Pipeline([('encode', OneHotEncoder()),
('select', SelectKBest(fit_and_score_features, k=3)),
('model', CoxPHSurvivalAnalysis())])
Explanation: Karnofsky_score is the best variable, whereas Months_from_Diagnosis and Prior_therapy='yes' have almost no predictive power on their own.
Next, we want to build a parsimonious model by excluding irrelevant features. We could use the ranking from above, but would need to determine what the optimal cut-off should be. Luckily, scikit-learn has built-in support for performing grid search.
First, we create a pipeline that puts all the parts together.
End of explanation
from sklearn.model_selection import GridSearchCV, KFold
param_grid = {'select__k': np.arange(1, data_x_numeric.shape[1] + 1)}
cv = KFold(n_splits=3, random_state=1, shuffle=True)
gcv = GridSearchCV(pipe, param_grid, return_train_score=True, cv=cv)
gcv.fit(data_x, data_y)
results = pd.DataFrame(gcv.cv_results_).sort_values(by='mean_test_score', ascending=False)
results.loc[:, ~results.columns.str.endswith("_time")]
Explanation: Next, we need to define the range of parameters we want to explore during grid search. Here, we want to optimize the parameter k of the SelectKBest class and allow k to vary from 1 feature to all 8 features.
End of explanation
pipe.set_params(**gcv.best_params_)
pipe.fit(data_x, data_y)
encoder, transformer, final_estimator = [s[1] for s in pipe.steps]
pd.Series(final_estimator.coef_, index=encoder.encoded_columns_[transformer.get_support()])
Explanation: The results show that it is sufficient to select the 3 most predictive features.
End of explanation |
2,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 2</font>
Download
Step1: Variables and Operators
Step2: Multiple Assignment
Step3: Variable names can use letters, numbers and underscores (but cannot start with a number)
Step4: Reserved words cannot be used as variable names
False
class
finally
is
return
None
continue
for
lambda
try
True
def
from
nonlocal
while
and
del
global
not
with
as
elif
if
or
yield
assert
else
import
pass
break
except
in
raise
Step5: Variables assigned to other variables and operator precedence
Step6: Operations with variables
Step7: Variable concatenation | Python Code:
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 2</font>
Download: http://github.com/dsacademybr
End of explanation
# Assign the value 1 to the variable var_teste
var_teste = 1
# Print the value of the variable
var_teste
# Print the value of the variable with print()
print(var_teste)
# We cannot use a variable that has not been defined. See the error message.
my_var
var_teste = 2
var_teste
type(var_teste)
var_teste = 9.5
type(var_teste)
x = 1
x
Explanation: Variables and Operators
End of explanation
pessoa1, pessoa2, pessoa3 = "Maria", "José", "Tobias"
pessoa1
pessoa2
pessoa3
fruta1 = fruta2 = fruta3 = "Laranja"
fruta1
fruta2
# Watch out! Python is case-sensitive. We created the variable fruta2, but not the variable Fruta2.
# Uppercase and lowercase letters make a difference in variable names.
Fruta2
Explanation: Multiple Assignment
End of explanation
x1 = 50
x1
# Error message: Python does not allow variable names that start with a number
1x = 50
Explanation: Variable names can use letters, numbers and underscores (but cannot start with a number)
End of explanation
# We cannot use reserved words as variable names
break = 1
Explanation: Reserved words cannot be used as variable names. The full list of reserved words is shown below, followed by a small snippet for retrieving it programmatically.
False
class
finally
is
return
None
continue
for
lambda
try
True
def
from
nonlocal
while
and
del
global
not
with
as
elif
if
or
yield
assert
else
import
pass
break
except
in
raise
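If you ever need this list programmatically, the standard library's keyword module provides it. A small sketch, not part of the original course material:
import keyword

# Full list of reserved words for the running Python version
print(keyword.kwlist)

# Check whether a given name is a reserved word
print(keyword.iskeyword("break"))  # True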
End of explanation
largura = 2
altura = 4
area = largura * altura
area
perimetro = 2 * largura + 2 * altura
perimetro
# Operator precedence is the same as in mathematics
perimetro = 2 * (largura + 2) * altura
perimetro
Explanation: Variables assigned to other variables and operator precedence
End of explanation
idade1 = 25
idade2 = 35
idade1 + idade2
idade2 - idade1
idade2 * idade1
idade2 / idade1
idade2 % idade1
Explanation: Operations with variables
End of explanation
nome = "Steve"
sobrenome = "Jobs"
fullName = nome + " " + sobrenome
fullName
Explanation: Variable concatenation
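Besides the + operator, Python 3.6+ also supports f-strings, which are often more readable. A small sketch reusing the variables defined above:
# f-string alternative to concatenating with +
fullName_f = f"{nome} {sobrenome}"
fullName_f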
End of explanation |
2,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Elliptic PDEs with Finite Differences
Recall that a partial differential equation (PDE) is an equation involving functions of two or more variables and their partial derivatives. Here we study equations of the form
$$Au_{xx} + Bu_{xy} + Cu_{yy} + f(u_x, u_y, u, x, y) = 0$$
where $A$, $B$ and $C$ are scalars and $x,y$ are the independent variables.
The discriminant of these equations, $B^2 -4AC$, tells us whether an equation is parabolic, elliptic or hyperbolic. The equations of interest here are elliptic, so they satisfy $B^2-4AC < 0$.
Elliptic equations, unlike the other types, have boundary conditions $\partial R$ for both independent variables.
Definitions
Step1: To simplify the system to be solved, we replace the double indices with linear indices via the conversion
$$v_{i+(j-1)m} = w_{ij}$$
This can also be thought of as a stacking operation in which the rows of the grid are placed one after another in order.
Step2: Next we must build a matrix $A$ and a vector $b$ under this new numbering such that the system $Av=b$ is solvable and its solution can be translated back to the $w_{ij}$ system. This matrix will naturally be of size $mn \times mn$ and each grid point will have its own equation, as one might expect.
The entry $A_{pq}$ corresponds to the $q$-th linear coefficient of the $p$-th equation of the system $Av =b$. For example, the equation
$$\frac{w_{i-1,j} -2w_{ij} + w_{i+1,j}}{h^2} + \frac{w_{i,j-1}-2w_{ij}+ w_{i,j+1}}{k^2} = f(x_i, y_j)$$
corresponds to the equation for grid point $(i,j)$ and will be equation $p = i + (j-1)m$. Then, given the conversion defined above, the entry $A_{pq}$ is simply | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
m, n = 10, 4
xl, xr = (0.0, 1.0)
yb, yt = (0.0, 1.0)
h = (xr - xl) / (m - 1.0)
k = (yt - yb) / (n - 1.0)
xx = [xl + (i - 1)*h for i in range(1, m+1)]
yy = [yb + (i - 1)*k for i in range(1, n+1)]
plt.figure(figsize=(10, 5))
for y in yy:
plt.plot(xx, [y for _x in xx], 'co')
plt.xlim(xl-0.1, xr+0.1)
plt.ylim(yb-0.1, yt+0.1)
plt.xticks([xl, xr], ['$x_l$', '$x_r$'], fontsize=20)
plt.yticks([yb, yt], ['$y_b$', '$y_t$'], fontsize=20)
plt.text(xl, yb, "$w_{11}$", fontsize=20)
plt.text(xl+h, yb, "$w_{21}$", fontsize=20)
plt.text(xl, yb+k, "$w_{12}$", fontsize=20)
plt.text(xl, yt, "$w_{1n}$", fontsize=20)
plt.text(xr, yt, "$w_{mn}$", fontsize=20)
plt.text(xr, yb, "$w_{m1}$", fontsize=20)
plt.title("Mesh para coordenadas en dos dimensiones")
plt.show()
Explanation: Elliptic PDEs with Finite Differences
Recall that a partial differential equation (PDE) is an equation involving functions of two or more variables and their partial derivatives. Here we study equations of the form
$$Au_{xx} + Bu_{xy} + Cu_{yy} + f(u_x, u_y, u, x, y) = 0$$
where $A$, $B$ and $C$ are scalars and $x,y$ are the independent variables.
The discriminant of these equations, $B^2 -4AC$, tells us whether an equation is parabolic, elliptic or hyperbolic. The equations of interest here are elliptic, so they satisfy $B^2-4AC < 0$.
Elliptic equations, unlike the other types, have boundary conditions $\partial R$ for both independent variables.
Definitions:
Never forget the formula for the Laplacian of a function! It is simply the sum of the second derivatives with respect to each variable.
$$\mathcal{L}(u) = \Delta u = u_{xx} + u_{yy}$$
The equation $\Delta u = f(x,y)$ is known as the Poisson equation. In particular, when $f(x,y) = 0$ it is known as the Laplace equation. (Note how the definition of these equations is consistent with them being elliptic.)
There are two types of boundary conditions. One can impose a boundary condition either on $u$ itself (Dirichlet conditions) or on some directional derivative $\partial u / \partial n$ (Neumann conditions).
Applying the Method
As always, we state the problem: we will solve the equation $\Delta u = f$ on a given rectangle $[x_l, x_r] \times [y_b, y_t]$. We also consider Dirichlet boundary conditions that define some function $g$ on each edge, say:
\begin{align}
u(x,y_b) = g_1(x)\
u(x,y_t) = g_2(x)\
u(x_l,y) = g_3(y)\
u(x_r,y) = g_4(y)
\end{align}
Now we must discretize the two-dimensional domain. For $m$ points on the horizontal axis and $n$ on the vertical axis, i.e. with $M = m-1$ and $N = n-1$ steps of size $h = (x_r - x_l)/M$ and $k = (y_t - y_b)/N$, substituting the centered finite differences studied earlier into the Poisson equation gives:
$$\frac{u(x-h,y) -2u(x,y) + u(x+h,y)}{h^2} + \mathcal{O}(h^2) + \frac{u(x,y-k) -2u(x,y) + u(x,y+k)}{k^2} + \mathcal{O}(k^2) = f(x,y)$$
Translating this to the approximate solutions $w$ we obtain
$$\frac{w_{i-1,j} -2w_{ij} + w_{i+1,j}}{h^2} + \frac{w_{i,j-1}-2w_{ij}+ w_{i,j+1}}{k^2} = f(x_i, y_j)$$
where $x_i = x_l + (i - 1)h$ and $y_j = y_b + (j - 1)k$.
The unknowns, being laid out in two dimensions, are awkward to handle directly, so we simply index the approximations $w_{ij}$ linearly.
End of explanation
plt.figure(figsize=(10,5))
plt.title("Mesh para coordenadas lineales")
for y in yy:
plt.plot(xx, [y for _x in xx], 'co')
plt.xlim(xl-0.1, xr+0.1)
plt.ylim(yb-0.1, yt+0.1)
plt.xticks([xl, xr], ['$x_l$', '$x_r$'], fontsize=20)
plt.yticks([yb, yt], ['$y_b$', '$y_t$'], fontsize=20)
plt.text(xl, yb, "$v_{1}$", fontsize=20)
plt.text(xl+h, yb, "$v_{2}$", fontsize=20)
plt.text(xl, yb+k, "$v_{m+1}$", fontsize=20)
plt.text(xr, yb+k, "$v_{2m}$", fontsize=20)
plt.text(xl, yt, "$v_{(n-1)m+1}$", fontsize=20)
plt.text(xr, yt, "$v_{mn}$", fontsize=20)
plt.text(xr, yb, "$v_{m}$", fontsize=20)
plt.title("Mesh para coordenadas en dos dimensiones")
plt.show()
plt.show()
Explanation: To simplify the system to be solved, we replace the double indices with linear indices via the conversion
$$v_{i+(j-1)m} = w_{ij}$$
This can also be thought of as a stacking operation in which the rows of the grid are placed one after another in order.
End of explanation
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from mpl_toolkits.mplot3d import Axes3D
def f(x,y):
return 0.0
# Boundary conditions
def g1(x):
return np.log(x**2 + 1)
def g2(x):
return np.log(x**2 + 4)
def g3(y):
return 2*np.log(y)
def g4(y):
return np.log(y**2 + 1)
# Grid points
m, n = 30, 30
# Precompute m*n
mn = m * n
# Number of steps
M = m - 1
N = n - 1
# Domain limits: x_left, x_right, y_bottom, y_top
xl, xr = (0.0, 1.0)
yb, yt = (1.0, 2.0)
# Step size per dimension
h = (xr - xl) / float(M)
k = (yt - yb) / float(N)
# Precompute h**2 and k**2
h2 = h**2.0
k2 = k**2.0
# Generate the coordinate arrays for each dimension
x = [xl + (i - 1)*h for i in range(1, m+1)]
y = [yb + (i - 1)*k for i in range(1, n+1)]
A = np.zeros((mn, mn))
b = np.zeros((mn))
for i in range(1, m-1):
for j in range(1, n-1):
A[i+(j-1)*m, i-1+(j-1)*m] = 1.0/h2
A[i+(j-1)*m, i+1+(j-1)*m] = 1.0/h2
A[i+(j-1)*m, i+(j-1)*m] = -2.0/h2 -2.0/k2
A[i+(j-1)*m, i+(j-2)*m] = 1.0/k2
A[i+(j-1)*m, i+j*m] = 1.0/k2
b[i+(j-1)*m] = f(x[i], y[j])
for i in range(0,m):
j = 0
A[i+(j-1)*m, i+(j-1)*m] = 1.0
b[i+(j-1)*m] = g1(x[i])
j = n-1
A[i+(j-1)*m, i+(j-1)*m] = 1.0
b[i+(j-1)*m] = g2(x[i])
for j in range(1, n-1):
i = 0
A[i+(j-1)*m, i+(j-1)*m] = 1.0
b[i+(j-1)*m] = g3(y[j])
i = m-1
A[i+(j-1)*m, i+(j-1)*m] = 1.0
b[i+(j-1)*m] = g4(y[j])
v = linalg.solve(A, b)
w = np.reshape(v, (m,n))
fig = plt.figure(figsize=(10,7))
ax = fig.add_subplot(111, projection='3d')
xv, yv = np.meshgrid(x, y)
ax.plot_surface(xv, yv, w, rstride=1, cstride=1)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
fig = plt.figure(figsize=(10,7))
ax = fig.add_subplot(111, projection='3d')
xv, yv = np.meshgrid(x, y)
zv = np.log(xv**2 + yv**2)
ax.plot_surface(xv, yv, zv, rstride=1, cstride=1)
plt.show()
Explanation: Next we must build a matrix $A$ and a vector $b$ under this new numbering such that the system $Av=b$ is solvable and its solution can be translated back to the $w_{ij}$ system. This matrix will naturally be of size $mn \times mn$ and each grid point will have its own equation, as one might expect.
The entry $A_{pq}$ corresponds to the $q$-th linear coefficient of the $p$-th equation of the system $Av =b$. For example, the equation
$$\frac{w_{i-1,j} -2w_{ij} + w_{i+1,j}}{h^2} + \frac{w_{i,j-1}-2w_{ij}+ w_{i,j+1}}{k^2} = f(x_i, y_j)$$
corresponds to the equation for grid point $(i,j)$ and will be equation $p = i + (j-1)m$. Then, given the conversion defined above, the entry $A_{pq}$ is simply:
\begin{align}
A_{i + (j-1)m, i + (j-1)m} &= -\frac{2}{h^2}- \frac{2}{k^2}\
A_{i + (j-1)m, i+1 + (j-1)m} &= \frac{1}{h^2}\
A_{i + (j-1)m, i-1 + (j-1)m} &= \frac{1}{h^2}\
A_{i + (j-1)m, i + jm} &= \frac{1}{k^2}\
A_{i + (j-1)m, i + (j-2)m} &= \frac{1}{k^2}\
\end{align}
Analogously, the right-hand side entries $b$ are simply the given function evaluated at the point $(x_i, y_j)$
$$b_{i+(j-1)m} = f(x_i, y_j)$$
Since the boundary conditions are known, these equations exclude those points; the indices $i,j$ range over $1 < i < m$ and $1< j<n$. The equations corresponding to the boundary also consist of evaluating the given functions at the points $(x_i, y_j)$: they introduce $1$'s into the matrix $A$, while for $b$ this amounts to placing $g_s(z)$ where appropriate, using the same linear convention (there is no need to go over these equations in detail).
At the end of the day this system is still linear, and we can solve it without trouble using basic methods!
Example
1) Apply finite differences on a 25-point grid (5 per side) to approximate the solution of the Laplace equation on the rectangle $[0,1]\times[1,2]$ given the following boundary conditions:
\begin{align}
u(x,1) &= \ln(x^2+1)\
u(x,2) &= \ln(x^2+4)\
u(0,y) &= 2\ln(y)\
u(1,y) &= \ln(y^2+1)
\end{align}
2) Verify that the analytic solution is $u(x,y) = \ln(x^2+y^2)$.
3) Propose a method to compute the errors at each grid point, excluding the boundary points (a sketch of one approach follows below).
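One possible answer to item 3: since the analytic solution is known, the pointwise errors on the interior nodes can be computed by evaluating it on the same grid used for plotting and discarding the boundary rows and columns. This sketch assumes w is oriented consistently with the xv, yv meshgrid used in the surface plot above:
# Pointwise error against the analytic solution u(x, y) = log(x^2 + y^2),
# excluding the boundary nodes (first/last rows and columns).
exact = np.log(xv**2 + yv**2)
abs_err = np.abs(w - exact)
interior_err = abs_err[1:-1, 1:-1]
print("max interior error:", interior_err.max())
print("mean interior error:", interior_err.mean())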
End of explanation |
2,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Missing Data
pandas uses np.nan to represent missing data. By default, it is not included in computations.
documentation
Step1: reindex() creates a copy (not a view)
Step2: drop rows that have missing data
documentation
Step3: fill-in missing data
documentation
Step4: get boolean mask where values are nan
Step5: NaN propagates during arithmetic operations | Python Code:
browser_index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
browser_df = pd.DataFrame({
'http_status': [200,200,404,404,301],
'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
index=browser_index)
browser_df
Explanation: Missing Data
pandas uses np.nan to represent missing data. By default, it is not included in computations.
documentation: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#missing-data
reindex()
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
End of explanation
new_index= ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10', 'Chrome']
browser_df_2 = browser_df.reindex(new_index)
browser_df_2
Explanation: reindex() creates a copy (not a view)
End of explanation
browser_df_3 = browser_df_2.dropna(how='any')
browser_df_3
Explanation: drop rows that have missing data
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html
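dropna also accepts how='all' (drop only rows where every value is missing) and thresh (keep rows with at least that many non-missing values). A small sketch using the frame above:
# Keep rows with at least 2 non-missing values
browser_df_2.dropna(thresh=2)

# Drop only rows in which every value is missing
browser_df_2.dropna(how='all')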
End of explanation
browser_df_2.fillna(value=-0.05555)
Explanation: fill-in missing data
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html
End of explanation
pd.isnull(browser_df_2)
Explanation: get boolean mask where values are nan
End of explanation
browser_df_2 * 17
Explanation: NaN propagates during arithmetic operations
End of explanation |
2,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Slightly more advanced notebook that fits the restaurant revenue data set using a Random Forest Regressor (RFR), with a grid search over the model parameters
Import libraries and prepare the data
Step1: Grid search the parameters space and fit a Random Forest
Step2: Grid search the parameters space and fit a Gradient boosting | Python Code:
## Similar to Regressors_simple...
import pandas as pd
import numpy as np
import csv as csv
from datetime import datetime
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import LabelEncoder
import scipy as sp
import re
import sklearn
from sklearn.cross_validation import train_test_split,cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
import matplotlib
from matplotlib import pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier #GBM algorithm
from sklearn.ensemble import GradientBoostingRegressor #GBM algorithm
from sklearn import cross_validation, metrics #Additional scklearn functions
from sklearn.grid_search import GridSearchCV #Perforing grid search
from sklearn.svm import SVR
%matplotlib inline
trainData = pd.read_csv('data/train.csv', header=0, parse_dates = [1])
testData = pd.read_csv('data/test.csv', header=0, parse_dates = [1])
# Replace 'Open Date' by a feature representing the age of the restaurant in years
# Replace 'Type', 'City' and 'City Group' by integer indicators
trainData['Open Date'] = (datetime.now() - trainData['Open Date']).astype('timedelta64[D]') / 365
trainData['Type'] = LabelEncoder().fit_transform(trainData['Type'])
trainData['City Group'] = LabelEncoder().fit_transform(trainData['City Group'])
trainData['City'] = LabelEncoder().fit_transform(trainData['City'])
# Separate the Y array
Y_train = trainData['revenue']
# Drop the Id and Y variable to create the final X array to be fitted
X_train = trainData.drop(['Id','revenue'], axis=1)
# Same for Test data
testData['Open Date'] = (datetime.now() - testData['Open Date']).astype('timedelta64[D]') / 365
testData['Type'] = LabelEncoder().fit_transform(testData['Type'])
testData['City Group'] = LabelEncoder().fit_transform(testData['City Group'])
testData['City'] = LabelEncoder().fit_transform(testData['City'])
ids = testData['Id'].values
testData = testData.drop(['Id'], axis=1)
Explanation: Slightly more advanced notebook that fits the restaurant revenue data set using a Random Forest Regressor (RFR), with a grid search over the model parameters
Import libraries and prepare the data
End of explanation
# Define the parameters grid to search
param_grid = {'n_estimators':[100,1000],
'max_depth': [1,2,4],
'min_samples_leaf': [1, 3, 5],
'max_features': [1.0, 0.3, 0.1]}
est = RandomForestRegressor()
gs_cv = GridSearchCV(est, param_grid,n_jobs=-1, cv=10).fit(X_train, Y_train)
# print best fit parameters
gs_cv.best_params_
# Creating a RFR with the best fit parameters (entered manually)
forest=RandomForestRegressor(max_depth= 4, max_features= 0.1, min_samples_leaf= 3, n_estimators= 100)
# Fit the training data
forest=forest.fit(X_train,Y_train )
# Predict the testing data
output = forest.predict(testData)
# Write into submission file
predictions_file = open("interRF.csv", "w")
open_file_object = csv.writer(predictions_file)
open_file_object.writerow(["Id","Prediction"])
open_file_object.writerows(zip(ids, output))
predictions_file.close()
Explanation: Grid search the parameters space and fit a Random Forest
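By default GridSearchCV ranks parameter settings with the estimator's built-in score (R^2 for regressors). Since this competition is evaluated on RMSE, one could instead pass an explicit scorer. A hedged sketch of that variant (depending on the scikit-learn version, the scoring string is 'neg_mean_squared_error' in newer releases or 'mean_squared_error' in older ones):
# Optional variant: rank grid points by (negative) mean squared error instead of R^2
gs_cv_rmse = GridSearchCV(RandomForestRegressor(), param_grid,
                          scoring='neg_mean_squared_error',
                          n_jobs=-1, cv=10).fit(X_train, Y_train)
gs_cv_rmse.best_params_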
End of explanation
# Define the parameters grid to search, notice the learning_rate parameter
param_grid2 = {'n_estimators':[100,1000],
'max_depth': [1,2,4],
'learning_rate': [0.1,0.01],
'min_samples_leaf': [1, 3, 5],
'max_features': [1.0, 0.3, 0.1]}
est2 = GradientBoostingRegressor()
gs_cv2 = GridSearchCV(est2, param_grid2,n_jobs=-1, cv=10).fit(X_train, Y_train)
# print best fit parameters
gs_cv2.best_params_
# Creating a GBR with the best fit parameters (entered manually)
gbr=GradientBoostingRegressor(max_depth= 4, max_features= 0.1, min_samples_leaf= 1, n_estimators= 100,learning_rate=0.01)
# Fit the training data
gbr=gbr.fit(X_train,Y_train )
# Predict the testing data
output = gbr.predict(testData)
# Write into submission file
predictions_file = open("interGB.csv", "w")
open_file_object = csv.writer(predictions_file)
open_file_object.writerow(["Id","Prediction"])
open_file_object.writerows(zip(ids, output))
predictions_file.close()
Explanation: Grid search the parameters space and fit a Gradient boosting
End of explanation |
2,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Construct data and experiments directories from environment variables
Step1: Specify main run parameters
Step2: Load data and normalise inputs
Step3: Specify prior parameters (data dependent so do after data load)
Step4: Assemble run parameters into dictionary for recording with results
Step5: Create necessary run objects
Step6: Run chains, starting from random sample from prior in each and saving results to experiments directory | Python Code:
data_dir = os.path.join(os.environ['DATA_DIR'], 'uci')
exp_dir = os.path.join(os.environ['EXP_DIR'], 'apm_mcmc')
Explanation: Construct data and experiments directories from environment variables
End of explanation
data_set = 'pima'
method = 'apm(ess+rdss)'
n_chain = 10
chain_offset = 0
seeds = np.random.random_integers(10000, size=n_chain)
n_imp_sample = 1
n_sample = 10000 + 500 # 500 'warm-up' updates
epsilon = 1e-8
w = 1.
max_steps_out = 0
Explanation: Specify main run parameters
End of explanation
X = np.genfromtxt(os.path.join(data_dir, data_set + '_X.txt'))
y = np.genfromtxt(os.path.join(data_dir, data_set + '_y.txt'))
X, X_mn, X_sd = utils.normalise_inputs(X)
Explanation: Load data and normalise inputs
End of explanation
prior = dict(
a_tau = 1.,
b_tau = 1. / X.shape[1]**0.5,
a_sigma = 1.1,
b_sigma = 0.1
)
Explanation: Specify prior parameters (data dependent so do after data load)
End of explanation
run_params = dict(
data_set = data_set,
n_data = X.shape[0],
n_feature = X.shape[1],
method = method,
n_imp_sample = n_imp_sample,
epsilon = epsilon,
prior = prior,
w = w,
max_steps_out = max_steps_out,
n_sample = n_sample
)
Explanation: Assemble run parameters into dictionary for recording with results
End of explanation
def dir_and_w_sampler(w):
d = prng.normal(size=2)
d /= d.dot(d)**0.5
return d, w
prng = np.random.RandomState()
kernel_func = lambda K, X, theta: (
krn.isotropic_squared_exponential_kernel(K, X, theta, epsilon=epsilon)
)
ml_estimator = est.LogMarginalLikelihoodApproxPosteriorISEstimator(
X, y, kernel_func, lpa.laplace_approximation)
def log_f_estimator(u, theta=None, cached_res=None):
log_marg_lik_est, new_cached_res = ml_estimator(u, theta, cached_res)
log_prior = (
utils.log_gamma_log_pdf(theta[0], prior['a_sigma'], prior['b_sigma']) +
utils.log_gamma_log_pdf(theta[1], prior['a_tau'], prior['b_tau'])
)
return log_marg_lik_est + log_prior, new_cached_res
sampler = smp.APMEllSSPlusRandDirSliceSampler(
log_f_estimator, lambda: prng.normal(size=(y.shape[0], n_imp_sample)), prng,
lambda: dir_and_w_sampler(w), max_steps_out)
Explanation: Create necessary run objects
End of explanation
for c in range(n_chain):
try:
print('Starting chain {0}...'.format(c + 1))
prng.seed(seeds[c])
theta_init = np.array([
np.log(prng.gamma(prior['a_sigma'], 1. / prior['b_sigma'])),
np.log(prng.gamma(prior['a_tau'], 1. / prior['b_tau'])),
])
ml_estimator.reset_cubic_op_count()
start_time = time.clock()
thetas = sampler.get_samples(theta_init, n_sample)
comp_time = time.clock() - start_time
n_cubic_ops = ml_estimator.n_cubic_ops
tag = '{0}_{1}_chain_{2}'.format(data_set, method, c + 1 + chain_offset)
print('Completed: time {0}s, # cubic ops {1}'
.format(comp_time, n_cubic_ops))
utils.save_run(exp_dir, tag, thetas, 0, n_cubic_ops, comp_time, run_params)
utils.plot_trace(thetas)
plt.show()
except Exception as e:
print('Exception encountered')
print(e.message)
print(traceback.format_exc())
print('Skipping to next chain')
continue
Explanation: Run chains, starting from random sample from prior in each and saving results to experiments directory
End of explanation |
2,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power spectral density (PSD) in a label
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
Step1: Set parameters
Step2: View PSD of sources in label | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd
print(__doc__)
Explanation: Compute source power spectral density (PSD) in a label
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
End of explanation
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
fname_label = meg_path / 'labels' / 'Aud-lh.label'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = 0, 120 # use the first 120s of data
fmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
label = mne.read_label(fname_label)
stc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method="dSPM",
tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
pick_ori="normal", n_fft=n_fft, label=label,
dB=True)
stc.save('psd_dSPM', overwrite=True)
Explanation: Set parameters
End of explanation
plt.plot(stc.times, stc.data.T)
plt.xlabel('Frequency (Hz)')
plt.ylabel('PSD (dB)')
plt.title('Source Power Spectrum (PSD)')
plt.show()
Explanation: View PSD of sources in label
End of explanation |
2,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Save and load a model with a distribution strategy
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Prepare the data and model using tf.distribute.Strategy:
Step3: Train the model:
Step4: Save and load the model
Now that you have a simple model to work with, let's take a look at the saving/loading APIs. There are two sets of APIs available:
High-level Keras model.save and tf.keras.models.load_model
Low-level tf.saved_model.save and tf.saved_model.load
Keras API
Here is an example of saving and loading a model with the Keras API:
Step5: Restore the model without tf.distribute.Strategy:
Step6: After restoring the model, you can continue training it, without even needing to call compile() again, since it was already compiled before saving. The model is saved in TensorFlow's standard SavedModel proto format. For more information, please refer to the saved_model format guide.
Now, load the model and train it using a tf.distribute.Strategy:
Step7: As you can see, loading works as expected with tf.distribute.Strategy. The strategy used here does not have to be the same as the one used before saving.
tf.saved_model API
Now, let's take a look at the lower-level API. Saving the model is similar to the Keras API:
Step8: It can be loaded with tf.saved_model.load(). However, since this API is lower level (and therefore has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contains functions that can be used to do inference. For example:
Step9: The loaded object may contain multiple functions, each associated with a key. "serving_default" is the default key for the inference function of a saved Keras model. To do inference with this function, run the following code:
Step10: You can also load and do inference in a distributed manner:
Step11: Calling the restored function is just a forward pass (prediction) on the saved model. What if you want to continue training the loaded function, or embed the loaded function into a bigger model? A common practice is to wrap this loaded object into a Keras layer to achieve this. Luckily, TF Hub has hub.KerasLayer for this purpose, shown here:
Step12: As you can see, hub.KerasLayer wraps the result loaded back from tf.saved_model.load() into a Keras layer that can be used to build other models. This is very useful for transfer learning.
Which API should I use?
For saving, if you are working with a Keras model, it is almost always recommended to use the Keras model.save() API. If what you are saving is not a Keras model, then the lower-level API is your only choice.
For loading, which API to use depends on what you want to get from the loading API. If you cannot (or do not want to) get a Keras model, then use tf.saved_model.load(). Otherwise, use tf.keras.models.load_model(). Note that you can only get a Keras model back if you saved a Keras model.
It is possible to mix and match the APIs. You can save a Keras model with model.save, and load a non-Keras model with the low-level API tf.saved_model.load.
Step13: Saving/loading from a local device
When saving to and loading from a local I/O device while running remotely (for example, on a Cloud TPU), you must use the option experimental_io_device to set the I/O device to localhost.
Step14: Caveats
A special case is when your Keras model does not have well-defined inputs. For example, a Sequential model can be created without any input shape (Sequential([Dense(3), ...])). Subclassed models also do not have well-defined inputs after initialization. In this case, you should stick with the lower-level APIs for both saving and loading, otherwise you will get an error.
To check whether your model has well-defined inputs, just check whether model.inputs is None. If it is not None, you are all good. Input shapes are defined automatically when the model is used in .fit, .evaluate, or .predict, or when calling the model (model(inputs)).
Here is an example: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow_datasets as tfds
import tensorflow as tf
Explanation: Save and load a model with a distribution strategy
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/tutorials/distribute/save_and_load"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 Github 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/distribute/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
</table>
Overview
Saving and loading a model is generally needed during training. There are two sets of APIs for saving and loading Keras models: a high-level API and a low-level API. This tutorial demonstrates how the SavedModel APIs can be used together with tf.distribute.Strategy. For an overview of SavedModel and serialization in general, please refer to the saved model guide and the Keras model serialization guide. Let's start with a simple example:
Import dependencies:
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
def get_data():
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
return train_dataset, eval_dataset
def get_model():
with mirrored_strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
return model
Explanation: Prepare the data and model using tf.distribute.Strategy:
End of explanation
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
Explanation: Train the model:
End of explanation
keras_model_path = "/tmp/keras_save"
model.save(keras_model_path)
Explanation: Save and load the model
Now that you have a simple model to work with, let's take a look at the saving/loading APIs. There are two sets of APIs available:
High-level Keras model.save and tf.keras.models.load_model
Low-level tf.saved_model.save and tf.saved_model.load
Keras API
Here is an example of saving and loading a model with the Keras API:
End of explanation
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
Explanation: Restore the model without tf.distribute.Strategy:
End of explanation
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
restored_keras_model_ds.fit(train_dataset, epochs=2)
Explanation: After restoring the model, you can continue training it, without even needing to call compile() again, since it was already compiled before saving. The model is saved in TensorFlow's standard SavedModel proto format. For more information, please refer to the saved_model format guide.
Now, load the model and train it using a tf.distribute.Strategy:
End of explanation
model = get_model() # get a fresh model
saved_model_path = "/tmp/tf_save"
tf.saved_model.save(model, saved_model_path)
Explanation: As you can see, loading works as expected with tf.distribute.Strategy. The strategy used here does not have to be the same as the one used before saving.
tf.saved_model API
Now, let's take a look at the lower-level API. Saving the model is similar to the Keras API:
End of explanation
DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
Explanation: It can be loaded with tf.saved_model.load(). However, since this API is lower level (and therefore has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contains functions that can be used to do inference. For example:
End of explanation
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
Explanation: The loaded object may contain multiple functions, each associated with a key. "serving_default" is the default key for the inference function of a saved Keras model. To do inference with this function, run the following code:
End of explanation
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
another_strategy.run(inference_func,args=(batch,))
Explanation: You can also load and do inference in a distributed manner:
End of explanation
import tensorflow_hub as hub
def build_model(loaded):
x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
# Wrap what's loaded to a KerasLayer
keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
model = tf.keras.Model(x, keras_layer)
return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
model = build_model(loaded)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
model.fit(train_dataset, epochs=2)
Explanation: Calling the restored function is just a forward pass (prediction) on the saved model. What if you want to continue training the loaded function, or embed the loaded function into a bigger model? A common practice is to wrap this loaded object into a Keras layer to achieve this. Luckily, TF Hub has hub.KerasLayer for this purpose, shown here:
End of explanation
model = get_model()
# Saving the model using Keras's save() API
model.save(keras_model_path)
another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using lower level API
with another_strategy.scope():
loaded = tf.saved_model.load(keras_model_path)
Explanation: As you can see, hub.KerasLayer wraps the result loaded back from tf.saved_model.load() into a Keras layer that can be used to build other models. This is very useful for transfer learning.
Which API should I use?
For saving, if you are working with a Keras model, it is almost always recommended to use the Keras model.save() API. If what you are saving is not a Keras model, then the lower-level API is your only choice.
For loading, which API to use depends on what you want to get from the loading API. If you cannot (or do not want to) get a Keras model, then use tf.saved_model.load(). Otherwise, use tf.keras.models.load_model(). Note that you can only get a Keras model back if you saved a Keras model.
It is possible to mix and match the APIs. You can save a Keras model with model.save, and load a non-Keras model with the low-level API tf.saved_model.load.
End of explanation
model = get_model()
# Saving the model to a path on localhost.
saved_model_path = "/tmp/tf_save"
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save(saved_model_path, options=save_options)
# Loading the model from a path on localhost.
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
loaded = tf.keras.models.load_model(saved_model_path, options=load_options)
Explanation: Saving/loading from a local device
When saving to and loading from a local I/O device while running remotely (for example, on a Cloud TPU), you must use the option experimental_io_device to set the I/O device to localhost.
End of explanation
class SubclassedModel(tf.keras.Model):
output_name = 'output_layer'
def __init__(self):
super(SubclassedModel, self).__init__()
self._dense_layer = tf.keras.layers.Dense(
5, dtype=tf.dtypes.float32, name=self.output_name)
def call(self, inputs):
return self._dense_layer(inputs)
my_model = SubclassedModel()
# my_model.save(keras_model_path) # ERROR!
tf.saved_model.save(my_model, saved_model_path)
Explanation: Caveats
A special case is when your Keras model does not have well-defined inputs. For example, a Sequential model can be created without any input shape (Sequential([Dense(3), ...])). Subclassed models also do not have well-defined inputs after initialization. In this case, you should stick with the lower-level APIs for both saving and loading, otherwise you will get an error.
To check whether your model has well-defined inputs, just check whether model.inputs is None. If it is not None, you are all good. Input shapes are defined automatically when the model is used in .fit, .evaluate, or .predict, or when calling the model (model(inputs)).
Here is an example:
End of explanation |
2,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
Step6: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
Step8: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
Step9: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
Step10: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
Step11: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment
Step12: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you an extra bonus point for every 1% of accuracy above 52%. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
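# Illustrative sketch (added; not the graded solution) of the forward pass described above:
# affine -> ReLU -> affine, using the toy network's parameters.
W1, b1 = net.params['W1'], net.params['b1']
W2, b2 = net.params['W2'], net.params['b2']
hidden = np.maximum(0, X.dot(W1) + b1)  # (5, 10) hidden activations
toy_scores = hidden.dot(W2) + b2        # (5, 3) class scores
print toy_scores.shape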
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularization loss.
End of explanation
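# Illustrative sketch (added) of a softmax data loss plus L2 regularization, the two pieces
# the assignment asks for. It reuses toy_scores, W1 and W2 from the sketch above and is not
# the reference implementation.
shifted = toy_scores - np.max(toy_scores, axis=1, keepdims=True)
probs = np.exp(shifted) / np.sum(np.exp(shifted), axis=1, keepdims=True)
data_loss = -np.mean(np.log(probs[np.arange(len(y)), y]))
reg_loss = 0.5 * 0.1 * (np.sum(W1 * W1) + np.sum(W2 * W2))  # reg=0.1 as in the check above
print data_loss + reg_loss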
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
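# Illustrative (added): the centered-difference formula that eval_numerical_gradient applies
# entry by entry -- df/dx ~ (f(x + h) - f(x - h)) / (2 * h) -- checked here on a scalar function.
h = 1e-5
f_scalar = lambda x: x ** 3
x0 = 2.0
numeric = (f_scalar(x0 + h) - f_scalar(x0 - h)) / (2 * h)
analytic = 3 * x0 ** 2
print rel_error(np.array([numeric]), np.array([analytic]))  # should be tiny, around 1e-10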
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
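# Illustrative (added) numpy sketch of the update rule described above -- SGD with momentum
# plus a per-epoch learning-rate decay. This is generic and not copied from TwoLayerNet.train.
w_demo, v_demo = np.zeros(5), np.zeros(5)
grad_demo, lr_demo, mu_demo, decay_demo = np.ones(5), 1e-3, 0.9, 0.95
v_demo = mu_demo * v_demo - lr_demo * grad_demo  # velocity accumulates past gradients
w_demo += v_demo                                 # parameter step
lr_demo *= decay_demo                            # applied once per epoch
print w_demo[0], lr_demo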
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
best_net = None # store the best model into this
learning = [1e-5, 1e-3]
regularization = [0, 1]
decay = [0.9, 1]
results = {}
best_val = -1
for num_hidden in np.arange(50, 300, 50):
for _ in np.arange(0, 50):
i = np.random.uniform(low=learning[0], high=learning[1])
j = np.random.uniform(low=regularization[0], high=regularization[1])
k = np.random.uniform(low=decay[0], high=decay[1])
# Train the network
net = TwoLayerNet(input_size, num_hidden, num_classes)
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=500, batch_size=200,
learning_rate=i, learning_rate_decay=k,
reg=j, verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
results[(num_hidden, i, j, k)] = val_acc
if val_acc > best_val:
best_val = val_acc
# Print the obtained accuracies
for nh, lr, reg, dec in sorted(results):
print 'Hidden: %d, learning rate: %f, regularisation: %f, decay: %f -> %f' % ( \
nh, lr, reg, dec, results[nh, lr, reg, dec])
# Find the best learning rate and regularization strength
best_hidden = 25
best_lr = 0.000958
best_reg = 0.952745
best_decay = 0.935156
best_val = -1
for nh, lr, reg, dec in sorted(results):
if results[(nh, lr, reg, dec)] > best_val:
best_val = results[(nh, lr, reg, dec)]
best_hidden = nh
best_lr = lr
best_reg = reg
best_decay = dec
# Train the best_svm with more iterations
best_net = TwoLayerNet(input_size, best_hidden, num_classes)
stats = best_net.train(X_train, y_train, X_val, y_val,
num_iters=2000, batch_size=200,
learning_rate=best_lr, learning_rate_decay=best_decay,
reg=best_reg, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Best validation accuracy now: %f' % val_acc
# visualize the weights of the best network
show_net_weights(best_net)
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
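# Hedged sketch (added) of one suggested extension -- PCA to reduce the input dimensionality
# before training. Illustrative only; it was not used for the accuracies reported above and
# assumes scikit-learn is available in this environment.
from sklearn.decomposition import PCA
pca = PCA(n_components=200).fit(X_train)
X_train_pca = pca.transform(X_train)
X_val_pca = pca.transform(X_val)
print X_train_pca.shape, X_val_pca.shape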
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you an extra bonus point for every 1% of accuracy above 52%.
End of explanation |
2,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook uses mvncall to phase two multiallelic SNPs within VGSC and to add back in the insecticide resistance linked N1570Y SNP filtered out of the PASS callset.
Step1: install mvncall
mvncall depends on a couple of boost libraries, I installed these first using "sudo apt-get install libboost-dev"
Step2: prepare input files
Step3: list of SNPs to phase
Step4: haplotype scaffold
Step5: convert to numpy arrays
so we can interleave these variants back into the genotype array easily
Step6: check parsing... | Python Code:
%run setup.ipynb
Explanation: This notebook uses mvncall to phase two multiallelic SNPs within VGSC and to add back in the insecticide resistance linked N1570Y SNP filtered out of the PASS callset.
End of explanation
%%bash --err install_err --out install_out
# This script downloads and installs mvncall. We won't include this in
# the standard install.sh script as this is not something we want to do
# as part of continuous integration, it is only needed for this data
# generation task.
set -xeo pipefail
cd ../dependencies
if [ ! -f mvncall.installed ]; then
echo installing mvncall
# clean up
rm -rvf mvncall*
# download and unpack
wget https://mathgen.stats.ox.ac.uk/genetics_software/mvncall/mvncall_v1.0_x86_64_dynamic.tgz
tar zxvf mvncall_v1.0_x86_64_dynamic.tgz
# trick mvncall into finding boost libraries - their names aren't what mvncall expects
locate libboost_iostreams | xargs -I '{}' ln -v -f -s '{}' libboost_iostreams.so.5
locate libboost_program_options | xargs -I '{}' ln -v -f -s '{}' libboost_program_options.so.5
# try running mvncall
export LD_LIBRARY_PATH=.
./mvncall_v1.0_x86_64_dynamic/mvncall
# mark success
touch mvncall.installed
else
echo mvncall already installed
fi
#check install
print(install_out)
# check we can run mvncall
mvncall = 'LD_LIBRARY_PATH=../dependencies ../dependencies/mvncall_v1.0_x86_64_dynamic/mvncall'
!{mvncall}
Explanation: install mvncall
mvncall depends on a couple of boost libraries, I installed these first using "sudo apt-get install libboost-dev"
End of explanation
# these are the source data files for the phasing
sample_file = '../ngs.sanger.ac.uk/production/ag1000g/phase1/AR3.1/haplotypes/main/shapeit/ag1000g.phase1.ar3.1.haplotypes.2L.sample.gz'
vcf_file = '../ngs.sanger.ac.uk/production/ag1000g/phase1/AR3/variation/main/vcf/ag1000g.phase1.ar3.2L.vcf.gz'
scaffold_file = '../ngs.sanger.ac.uk/production/ag1000g/phase1/AR3.1/haplotypes/main/shapeit/ag1000g.phase1.ar3.1.haplotypes.2L.haps.gz'
Explanation: prepare input files
End of explanation
# this file will contain the list of SNPs to be phased
list_file = '../data/phasing_extra_phase1.list'
%%file {list_file}
2391228
2400071
2429745
# for mvncall we need a simple manifest of sample IDs
# N.B., we will exclude the cross parents
!gunzip -v {sample_file} -c | head -n 767 | tail -n 765 | cut -d' ' -f1 > /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.sample
!head /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.sample
!tail /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.sample
!wc -l /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.sample
Explanation: list of SNPs to phase
End of explanation
# mvncall needs the haps unzipped. Also we will exclude the cross parents
!if [ ! -f /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.haps ]; then gunzip -v {scaffold_file} -c | cut -d' ' -f1-1535 > /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.haps; fi
# check cut has worked
!head -n1 /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.haps
# check cut has worked
!head -n1 /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.haps | wc
# mvncall needs an unzipped VCF, we'll extract only the region we need
region_vgsc = SeqFeature('2L', 2358158, 2431617)
region_vgsc.region_str
# extract the VCF
!bcftools view -r {region_vgsc.region_str} --output-file /tmp/vgsc.vcf --output-type v {vcf_file}
%%bash
for numsnps in 50 100 200; do
echo $numsnps
done
%%bash
# run mvncall, only if output file doesn't exist (it's slow)
mvncall="../dependencies/mvncall_v1.0_x86_64_dynamic/mvncall"
export LD_LIBRARY_PATH=../dependencies
for numsnps in 50 100 200; do
output_file=../data/phasing_extra_phase1.mvncall.${numsnps}.vcf
if [ ! -f $output_file ]; then
echo running mvncall $numsnps
$mvncall \
--sample-file /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.sample \
--glfs /tmp/vgsc.vcf \
--scaffold-file /tmp/ag1000g.phase1.ar3.1.haplotypes.2L.haps \
--list ../data/phasing_extra_phase1.list \
--numsnps $numsnps \
--o $output_file > /tmp/mvncall.${numsnps}.log
else
echo skipping mvncall $numsnps
fi
done
!tail /tmp/mvncall.100.log
!cat ../data/phasing_extra_phase1.mvncall.50.vcf
!ls -lh ../data/*.mvncall*
Explanation: haplotype scaffold
End of explanation
def vcf_to_numpy(numsnps):
# input VCF filename
vcf_fn = '../data/phasing_extra_phase1.mvncall.{}.vcf'.format(numsnps)
# extract variants
variants = vcfnp.variants(vcf_fn, cache=False,
dtypes={'REF': 'S1', 'ALT': 'S1'},
flatten_filter=True)
# fix the chromosome
variants['CHROM'] = (b'2L',) * len(variants)
# extract calldata
calldata = vcfnp.calldata_2d(vcf_fn, cache=False,
fields=['genotype', 'GT', 'is_phased'])
# N.B., there is a trailing tab character somewhere in the input VCFs (samples line?)
# which means an extra sample gets added when parsing. Hence we will trim off the last
# field.
calldata = calldata[:, :-1]
# save output
output_fn = vcf_fn[:-3] + 'npz'
np.savez_compressed(output_fn, variants=variants, calldata=calldata)
for numsnps in 50, 100, 200:
vcf_to_numpy(numsnps)
Explanation: convert to numpy arrays
so we can interleave these variants back into the genotype array easily
End of explanation
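# Hedged sketch (added) of the interleaving this enables, on placeholder data only:
# numpy.searchsorted gives the insertion points needed to splice the mvncall-phased variants
# back into a position-sorted genotype array.
main_pos = np.array([2358200, 2390000, 2420000])   # hypothetical positions from the main callset
extra_pos = np.array([2391228, 2400071, 2429745])  # the three SNPs phased with mvncall
print(np.searchsorted(main_pos, extra_pos))        # indices at which to insert the phased rows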
callset = np.load('../data/phasing_extra_phase1.mvncall.200.npz')
callset
variants = callset['variants']
allel.VariantTable(variants)
calldata = callset['calldata']
g = allel.GenotypeArray(calldata['genotype'])
g.is_phased = calldata['is_phased']
g.displayall()
Explanation: check parsing...
End of explanation |
2,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read data
Step1: Add features
Step2: Feature pdfs
Step3: One versus One
Prepare datasets
Step4: Prepare stacking variables
Step5: Multiclassification | Python Code:
treename = 'tag'
data_b = pandas.DataFrame(root_numpy.root2array('datasets/type=5.root', treename=treename)).dropna()
data_b = data_b[::40]
data_c = pandas.DataFrame(root_numpy.root2array('datasets/type=4.root', treename=treename)).dropna()
data_light = pandas.DataFrame(root_numpy.root2array('datasets/type=0.root', treename=treename)).dropna()
data = {'b': data_b, 'c': data_c, 'light': data_light}
jet_features = [column for column in data_b.columns if "Jet" in column]
sv_features = [column for column in data_b.columns if "SV" in column]
print "Jet features", ", ".join(jet_features)
print "SV features", ", ".join(sv_features)
Explanation: Read data
End of explanation
for d in data.values():
d['log_SVFDChi2'] = numpy.log(d['SVFDChi2'].values)
d['log_SVSumIPChi2'] = numpy.log(d['SVSumIPChi2'].values)
d['SVM_diff'] = numpy.log(d['SVMC'] ** 2 - d['SVM']**2)
d['SVM_rel'] = numpy.tanh(d['SVM'] / d['SVMC'])
d['SVM_rel2'] = (d['SVM'] / d['SVMC'])**2
d['SVR_rel'] = d['SVDR'] / (d['SVR'] + 1e-5)
d['R_FD_rel'] = numpy.tanh(d['SVR'] / d['SVFDChi2'])
d['jetP'] = numpy.sqrt(d['JetPx'] ** 2 + d['JetPy'] ** 2 + d['JetPz'] ** 2)
d['jetPt'] = numpy.sqrt(d['JetPx'] ** 2 + d['JetPy'] ** 2)
d['jetM'] = numpy.sqrt(d['JetE'] ** 2 - d['jetP'] ** 2 )
d['SV_jet_M_rel'] = d['SVM'] / d['jetM']
d['SV_jet_MC_rel'] = d['SVMC'] / d['jetM']
# full_data['P_Sin'] = 0.5 * d['SVMC'].values - (d['SVM'].values)**2 / (2. * d['SVMC'].values)
# full_data['Psv'] = d['SVPT'].values * d['P_Sin'].values
# full_data['Psv2'] = d['P_Sin'].values / d['SVPT'].values
# full_data['Mt'] = d['SVMC'].values - d['P_Sin'].values
# full_data['QtoN'] = 1. * d['SVQ'].values / d['SVN'].values
data_b = data_b.drop(['JetParton', 'JetFlavor', 'JetPx', 'JetPy'], axis=1)
data_c = data_c.drop(['JetParton', 'JetFlavor', 'JetPx', 'JetPy'], axis=1)
data_light = data_light.drop(['JetParton', 'JetFlavor', 'JetPx', 'JetPy'], axis=1)
jet_features = [column for column in data_b.columns if "Jet" in column]
additional_features = ['log_SVFDChi2', 'log_SVSumIPChi2',
'SVM_diff', 'SVM_rel', 'SVR_rel', 'SVM_rel2', 'SVR_rel', 'R_FD_rel',
'jetP', 'jetPt', 'jetM', 'SV_jet_M_rel', 'SV_jet_MC_rel']
Explanation: Add features
End of explanation
figsize(18, 60)
for i, feature in enumerate(data_b.columns):
subplot(len(data_b.columns) / 3, 3, i)
hist(data_b[feature].values, label='b', alpha=0.2, bins=60, normed=True)
hist(data_c[feature].values, label='c', alpha=0.2, bins=60, normed=True)
# hist(data_light[feature].values, label='light', alpha=0.2, bins=60, normed=True)
xlabel(feature); legend(loc='best');
title(roc_auc_score([0] * len(data_b) + [1]*len(data_c),
numpy.hstack([data_b[feature].values, data_c[feature].values])))
len(data_b), len(data_c), len(data_light)
jet_features = jet_features[2:]
Explanation: Feature pdfs
End of explanation
data_b_c_lds = LabeledDataStorage(pandas.concat([data_b, data_c]), [1] * len(data_b) + [0] * len(data_c))
data_c_light_lds = LabeledDataStorage(pandas.concat([data_c, data_light]), [1] * len(data_c) + [0] * len(data_light))
data_b_light_lds = LabeledDataStorage(pandas.concat([data_b, data_light]), [1] * len(data_b) + [0] * len(data_light))
def one_vs_one_training(base_estimators, data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data,
prefix='bdt', folding=True, features=None):
if folding:
tt_folding_b_c = FoldingClassifier(base_estimators[0], n_folds=2, random_state=11, parallel_profile=PROFILE,
features=features)
tt_folding_c_light = FoldingClassifier(base_estimators[1], n_folds=2, random_state=11, parallel_profile=PROFILE,
features=features)
tt_folding_b_light = FoldingClassifier(base_estimators[2], n_folds=2, random_state=11, parallel_profile=PROFILE,
features=features)
else:
tt_folding_b_c = base_estimators[0]
tt_folding_b_c.features = features
tt_folding_c_light = base_estimators[1]
tt_folding_c_light.features = features
tt_folding_b_light = base_estimators[2]
tt_folding_b_light.features = features
%time tt_folding_b_c.fit_lds(data_b_c_lds)
%time tt_folding_c_light.fit_lds(data_c_light_lds)
%time tt_folding_b_light.fit_lds(data_b_light_lds)
bdt_b_c = numpy.concatenate([tt_folding_b_c.predict_proba(pandas.concat([data_b, data_c])),
tt_folding_b_c.predict_proba(data_light)])[:, 1]
bdt_c_light = numpy.concatenate([tt_folding_c_light.predict_proba(data_b),
tt_folding_c_light.predict_proba(pandas.concat([data_c, data_light]))])[:, 1]
p_b_light = tt_folding_b_light.predict_proba(pandas.concat([data_b, data_light]))[:, 1]
bdt_b_light = numpy.concatenate([p_b_light[:len(data_b)], tt_folding_b_light.predict_proba(data_c)[:, 1],
p_b_light[len(data_b):]])
full_data[prefix + '_b_c'] = bdt_b_c
full_data[prefix + '_b_light'] = bdt_b_light
full_data[prefix + '_c_light'] = bdt_c_light
Explanation: One versus One
Prepare datasets:
b vs c
b vs light
c vs light
End of explanation
full_data = pandas.concat([data_b, data_c, data_light])
full_data['label'] = [0] * len(data_b) + [1] * len(data_c) + [2] * len(data_light)
from hep_ml.nnet import MLPClassifier
from rep.estimators import SklearnClassifier
one_vs_one_training([SklearnClassifier(MLPClassifier(layers=(30, 10), epochs=700, random_state=11))]*3,
data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data, 'mlp', folding=True,
features=sv_features + additional_features + jet_features)
from sklearn.linear_model import LogisticRegression
one_vs_one_training([LogisticRegression()]*3,
data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data,
'logistic', folding=True, features=sv_features + additional_features + jet_features)
# from sklearn.svm import SVC
# from sklearn.pipeline import make_pipeline
# from sklearn.preprocessing import StandardScaler
# svm_feat = SklearnClassifier(make_pipeline(StandardScaler(), SVC(probability=True)), features=sv_features)
# %time svm_feat.fit(data_b_c_lds.data, data_b_c_lds.target)
# from sklearn.neighbors import KNeighborsClassifier
# one_vs_one_training([KNeighborsClassifier(metric='canberra')]*3,
# data_b_c_lds, data_c_light_lds, data_b_light_lds, full_data,
# 'knn', folding=True, features=sv_features)
# from rep.estimators import TheanetsClassifier
# theanets_base = TheanetsClassifier(layers=(20, 10), trainers=[{'algo': 'adadelta', 'learining_rate': 0.1}, ])
# nn = FoldingClassifier(theanets_base, features=sv_features, random_state=11, parallel_profile='ssh-py2')
# nn.fit(full_data, full_data.label)
# multi_probs = nn.predict_proba(full_data)
# full_data['th_0'] = multi_probs[:, 0] / multi_probs[:, 1]
# full_data['th_1'] = multi_probs[:, 0] / multi_probs[:, 2]
# full_data['th_2'] = multi_probs[:, 1] / multi_probs[:, 2]
mlp_features = ['mlp_b_c', 'mlp_b_light', 'mlp_c_light']
# knn_features = ['knn_b_c', 'knn_b_light', 'knn_c_light']
# th_features = ['th_0', 'th_1', 'th_2']
logistic_features = ['logistic_b_c', 'logistic_b_light', 'logistic_c_light']
Explanation: Prepare stacking variables
End of explanation
data_multi_lds = LabeledDataStorage(full_data, 'label')
variables_final = set(sv_features + additional_features + jet_features + mlp_features)
# variables_final = list(variables_final - {'SVN', 'SVQ', 'log_SVFDChi2', 'log_SVSumIPChi2', 'SVM_rel2', 'JetE', 'JetNDis'})
from rep.estimators import XGBoostClassifier
xgb_base = XGBoostClassifier(n_estimators=3000, colsample=0.7, eta=0.005, nthreads=8,
subsample=0.7, max_depth=6)
multi_folding_rbf = FoldingClassifier(xgb_base, n_folds=2, random_state=11,
features=variables_final)
%time multi_folding_rbf.fit_lds(data_multi_lds)
multi_probs = multi_folding_rbf.predict_proba(full_data)
'log loss', -numpy.log(multi_probs[numpy.arange(len(multi_probs)), full_data['label']]).sum() / len(full_data)
multi_folding_rbf.get_feature_importances()
labels = full_data['label'].values.astype(int)
multiclass_result = generate_result(1 - roc_auc_score(labels > 0, multi_probs[:, 0] / multi_probs[:, 1],
sample_weight=(labels != 2) * 1),
1 - roc_auc_score(labels > 1, multi_probs[:, 0] / multi_probs[:, 2],
sample_weight=(labels != 1) * 1),
1 - roc_auc_score(labels > 1, multi_probs[:, 1] / multi_probs[:, 2],
sample_weight=(labels != 0) * 1),
label='multiclass')
result = pandas.concat([multiclass_result])
result.index = result['name']
result = result.drop('name', axis=1)
result
Explanation: Multiclassification
End of explanation |
2,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup data
We're going to look at the IMDB dataset, which contains movie reviews from IMDB, along with their sentiment. Keras comes with some helpers for this dataset.
Step1: This is the word list
Step2: ...and this is the mapping from id to word
Step3: We download the reviews using code copied from keras.datasets
Step4: Here's the 1st review. As you see, the words have been replaced by ids. The ids can be looked up in idx2word.
Step5: The first word of the first review is 23022. Let's see what that is.
Step6: Here's the whole review, mapped from ids to words.
Step7: The labels are 1 for positive, 0 for negative.
Step8: Reduce vocab size by setting rare words to max index.
Step9: Look at distribution of lengths of sentences.
Step10: Pad (with zero) or truncate each sentence to make consistent length.
Step11: This results in nice rectangular matrices that can be passed to ML algorithms. Reviews shorter than 500 words are pre-padded with zeros, those greater are truncated.
Step12: Create simple models
Single hidden layer NN
The simplest model that tends to give reasonable results is a single hidden layer net. So let's try that. Note that we can't expect to get any useful results by feeding word ids directly into a neural net - so instead we use an embedding to replace them with a vector of 32 (initially random) floats for each word in the vocab.
Step13: The stanford paper that this dataset is from cites a state of the art accuracy (without unlabelled data) of 0.883. So we're short of that, but on the right track.
Single conv layer with max pooling
A CNN is likely to work better, since it's designed to take advantage of ordered data. We'll need to use a 1D CNN, since a sequence of words is 1D.
Step14: That's well past the Stanford paper's accuracy - another win for CNNs!
Step16: Pre-trained vectors
You may want to look at wordvectors.ipynb before moving on.
In this section, we replicate the previous CNN, but using pre-trained embeddings.
Step17: The glove word ids and imdb word ids use different indexes. So we create a simple function that creates an embedding matrix using the indexes from imdb, and the embeddings from glove (where they exist).
Step18: We pass our embedding matrix to the Embedding constructor, and set it to non-trainable.
Step19: We already have beaten our previous model! But let's fine-tune the embedding weights - especially since the words we couldn't find in glove just have random embeddings.
Step20: As expected, that's given us a nice little boost.
Step21: Multi-size CNN
This is an implementation of a multi-size CNN as shown in Ben Bowles' excellent blog post.
Step22: We use the functional API to create multiple conv layers of different sizes, and then concatenate them.
Step23: We then replace the conv/max-pool layer in our original CNN with the concatenated conv layers.
Step24: Interestingly, I found that in this case I got best results when I started the embedding layer as being trainable, and then set it to non-trainable after a couple of epochs. I have no idea why!
Step25: This more complex architecture has given us another boost in accuracy.
LSTM
We haven't covered this bit yet! | Python Code:
from keras.datasets import imdb
idx = imdb.get_word_index()
Explanation: Setup data
We're going to look at the IMDB dataset, which contains movie reviews from IMDB, along with their sentiment. Keras comes with some helpers for this dataset.
End of explanation
idx_arr = sorted(idx, key=idx.get)
idx_arr[:10]
Explanation: This is the word list:
End of explanation
idx2word = {v: k for k, v in idx.iteritems()}
Explanation: ...and this is the mapping from id to word
End of explanation
path = get_file('imdb_full.pkl',
origin='https://s3.amazonaws.com/text-datasets/imdb_full.pkl',
md5_hash='d091312047c43cf9e4e38fef92437263')
f = open(path, 'rb')
(x_train, labels_train), (x_test, labels_test) = pickle.load(f)
len(x_train)
Explanation: We download the reviews using code copied from keras.datasets:
End of explanation
', '.join(map(str, x_train[0]))
Explanation: Here's the 1st review. As you see, the words have been replaced by ids. The ids can be looked up in idx2word.
End of explanation
idx2word[23022]
Explanation: The first word of the first review is 23022. Let's see what that is.
End of explanation
' '.join([idx2word[o] for o in x_train[0]])
Explanation: Here's the whole review, mapped from ids to words.
End of explanation
labels_train[:10]
Explanation: The labels are 1 for positive, 0 for negative.
End of explanation
vocab_size = 5000
trn = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_train]
test = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_test]
Explanation: Reduce vocab size by setting rare words to max index.
End of explanation
lens = np.array(map(len, trn))
(lens.max(), lens.min(), lens.mean())
Explanation: Look at distribution of lengths of sentences.
End of explanation
seq_len = 500
trn = sequence.pad_sequences(trn, maxlen=seq_len, value=0)
test = sequence.pad_sequences(test, maxlen=seq_len, value=0)
Explanation: Pad (with zero) or truncate each sentence to make consistent length.
End of explanation
trn.shape
Explanation: This results in nice rectangular matrices that can be passed to ML algorithms. Reviews shorter than 500 words are pre-padded with zeros, those greater are truncated.
End of explanation
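# Quick illustrative check (added) of the padding just described: every row now has length
# seq_len, and a review that was shorter than 500 words starts with zero ids.
shortest = int(np.argmin(lens))      # lens was computed before padding
print(trn.shape)
print(trn[shortest][:5])             # expect leading zeros for the shortest review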
model = Sequential([
Embedding(vocab_size, 32, input_length=seq_len),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.summary()
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
Explanation: Create simple models
Single hidden layer NN
The simplest model that tends to give reasonable results is a single hidden layer net. So let's try that. Note that we can't expect to get any useful results by feeding word ids directly into a neural net - so instead we use an embedding to replace them with a vector of 32 (initially random) floats for each word in the vocab.
End of explanation
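# Illustrative check (added), assuming `model` from the cell above: the Embedding layer maps a
# batch of (batch, 500) word ids to (batch, 500, 32) floats, which Flatten() reshapes to
# (batch, 16000) before the Dense layer sees it.
print(model.layers[0].output_shape)  # (None, 500, 32)
print(model.layers[1].output_shape)  # (None, 16000)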
conv1 = Sequential([
Embedding(vocab_size, 32, input_length=seq_len, dropout=0.2),
Dropout(0.2),
Convolution1D(64, 5, border_mode='same', activation='relu'),
Dropout(0.2),
MaxPooling1D(),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
conv1.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
conv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
Explanation: The stanford paper that this dataset is from cites a state of the art accuracy (without unlabelled data) of 0.883. So we're short of that, but on the right track.
Single conv layer with max pooling
A CNN is likely to work better, since it's designed to take advantage of ordered data. We'll need to use a 1D CNN, since a sequence of words is 1D.
End of explanation
conv1.save_weights(model_path + 'conv1.h5')
conv1.load_weights(model_path + 'conv1.h5')
Explanation: That's well past the Stanford paper's accuracy - another win for CNNs!
End of explanation
def get_glove_dataset(dataset):
"""
Download the requested glove dataset from files.fast.ai
and return a location that can be passed to load_vectors.
"""
# see wordvectors.ipynb for info on how these files were
# generated from the original glove data.
md5sums = {'6B.50d': '8e1557d1228decbda7db6dfd81cd9909',
'6B.100d': 'c92dbbeacde2b0384a43014885a60b2c',
'6B.200d': 'af271b46c04b0b2e41a84d8cd806178d',
'6B.300d': '30290210376887dcc6d0a5a6374d8255'}
glove_path = os.path.abspath('data/glove/results')
%mkdir -p $glove_path
return get_file(dataset,
'http://files.fast.ai/models/glove/' + dataset + '.tgz',
cache_subdir=glove_path,
md5_hash=md5sums.get(dataset, None),
untar=True)
def load_vectors(loc):
return (load_array(loc+'.dat'),
pickle.load(open(loc+'_words.pkl','rb')),
pickle.load(open(loc+'_idx.pkl','rb')))
vecs, words, wordidx = load_vectors(get_glove_dataset('6B.50d'))
Explanation: Pre-trained vectors
You may want to look at wordvectors.ipynb before moving on.
In this section, we replicate the previous CNN, but using pre-trained embeddings.
End of explanation
def create_emb():
n_fact = vecs.shape[1]
emb = np.zeros((vocab_size, n_fact))
for i in range(1,len(emb)):
word = idx2word[i]
if word and re.match(r"^[a-zA-Z0-9\-]*$", word):
src_idx = wordidx[word]
emb[i] = vecs[src_idx]
else:
# If we can't find the word in glove, randomly initialize
emb[i] = normal(scale=0.6, size=(n_fact,))
# This is our "rare word" id - we want to randomly initialize
emb[-1] = normal(scale=0.6, size=(n_fact,))
emb/=3
return emb
emb = create_emb()
Explanation: The glove word ids and imdb word ids use different indexes. So we create a simple function that creates an embedding matrix using the indexes from imdb, and the embeddings from glove (where they exist).
End of explanation
model = Sequential([
Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2,
weights=[emb], trainable=False),
Dropout(0.25),
Convolution1D(64, 5, border_mode='same', activation='relu'),
Dropout(0.25),
MaxPooling1D(),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
Explanation: We pass our embedding matrix to the Embedding constructor, and set it to non-trainable.
End of explanation
model.layers[0].trainable=True
model.optimizer.lr=1e-4
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=1, batch_size=64)
Explanation: We already have beaten our previous model! But let's fine-tune the embedding weights - especially since the words we couldn't find in glove just have random embeddings.
End of explanation
model.save_weights(model_path+'glove50.h5')
Explanation: As expected, that's given us a nice little boost. :)
End of explanation
from keras.layers import Merge
Explanation: Multi-size CNN
This is an implementation of a multi-size CNN as shown in Ben Bowles' excellent blog post.
End of explanation
graph_in = Input ((vocab_size, 50))
convs = [ ]
for fsz in range (3, 6):
x = Convolution1D(64, fsz, border_mode='same', activation="relu")(graph_in)
x = MaxPooling1D()(x)
x = Flatten()(x)
convs.append(x)
out = Merge(mode="concat")(convs)
graph = Model(graph_in, out)
emb = create_emb()
Explanation: We use the functional API to create multiple conv layers of different sizes, and then concatenate them.
End of explanation
model = Sequential ([
Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2, weights=[emb]),
Dropout (0.2),
graph,
Dropout (0.5),
Dense (100, activation="relu"),
Dropout (0.7),
Dense (1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
Explanation: We then replace the conv/max-pool layer in our original CNN with the concatenated conv layers.
End of explanation
model.layers[0].trainable=False
model.optimizer.lr=1e-5
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
Explanation: Interestingly, I found that in this case I got best results when I started the embedding layer as being trainable, and then set it to non-trainable after a couple of epochs. I have no idea why!
End of explanation
model = Sequential([
Embedding(vocab_size, 32, input_length=seq_len, mask_zero=True,
W_regularizer=l2(1e-6), dropout=0.2),
LSTM(100, consume_less='gpu'),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=5, batch_size=64)
Explanation: This more complex architecture has given us another boost in accuracy.
LSTM
We haven't covered this bit yet!
End of explanation |